{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Tutorial 3: Contributing Experimentally Validated Simulation Data to SQuADDS\n", "\n", "In this tutorial, we will go over the basics of contributing simulation data to the SQuADDS project. We will cover the following topics:\n", "\n", "0. [Contribution Information Setup](#setup)\n", "1. [Understanding the terminology and database structure](#structure)\n", "2. [Contributing to an existing dataset configuration](#existing)\n", "3. [Creating a new dataset configuration](#creation)\n", "\n", "**If you are interested in contributing measured device data, please refer to [Tutorial 4](https://lfl-lab.github.io/SQuADDS/source/tutorials/Tutorial_4_Contributing_Measured_Data.html).**\n", "\n", "---" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "%load_ext autoreload\n", "%autoreload 2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contribution Information Setup\n", "\n", "To contribute to SQuADDS, you will need to provide some information about yourself. This information is used to track your contributions and to give you credit for your work. You can provide it by updating the following variables in the `.env` file in the root directory of the repository:\n", "\n", "```\n", "GROUP_NAME = \"\"\n", "PI_NAME = \"\"\n", "INSTITUTION = \"\"\n", "USER_NAME = \"\"\n", "CONTRIB_MISC = \"\"\n", "```\n", "\n", "where `GROUP_NAME` is the name of your research group, `PI_NAME` is the name of your PI, `INSTITUTION` is the name of your institution, `USER_NAME` is your name, and `CONTRIB_MISC` is any other information you would like to provide about your contributions (e.g. 
BibTeX citation, paper link, etc.).\n", "\n", "Alternatively, you can provide this information by executing the following cell.\n" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "from squadds.database.utils import *" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Contributor information updated in .env file (c:\\Users\\PowerAdmin.WIN-NQ8Q8E6B720\\.conda\\envs\\qiskit_metal\\Lib\\site-packages/.env).\n" ] } ], "source": [ "create_contributor_info()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Also ensure that the `HUGGINGFACE_API_KEY` is set." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from squadds.core.utils import set_huggingface_api_key\n", "\n", "set_huggingface_api_key()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Later in the tutorial, we introduce some functionalities that require a GitHub token. If you do not have a GitHub token, you can create one by following the instructions [here](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token). Create the PAT (Personal Access Token) **with the `repo` scope and save the token as `GITHUB_TOKEN` in the `.env` file located at the root of the project**.\n", "\n", "Alternatively, you can execute the following cell to set the `GITHUB_TOKEN`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from squadds.core.utils import set_github_token\n", "\n", "set_github_token()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The last thing you need to do is add your public SSH key to your HuggingFace account ([https://huggingface.co/settings/keys](https://huggingface.co/settings/keys))."
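, "\n", "Before moving on, a quick sanity check can confirm that the variables above are visible to Python. The snippet below is a minimal sketch, not part of the SQuADDS API; it assumes your `.env` file has already been loaded into the process environment (e.g. via `python-dotenv`):\n", "\n", "```python
import os

# Variable names taken from the setup steps above; adjust if yours differ.
# This check is illustrative only; SQuADDS itself reads these from the .env file.
required_vars = ['GROUP_NAME', 'PI_NAME', 'INSTITUTION', 'USER_NAME',
                 'HUGGINGFACE_API_KEY', 'GITHUB_TOKEN']
missing = [v for v in required_vars if not os.getenv(v)]
print('Missing:', missing if missing else 'none')
```\n", "\n", "If anything is reported missing, revisit the corresponding setup cell above before continuing."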
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Understanding the terminology and database structure" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### HuggingFace\n", "\n", "[HuggingFace](https://huggingface.co/) stands at the forefront of the AI revolution, offering a dynamic collaboration platform for the machine learning community. Renowned for hosting an array of open-source machine learning libraries and tools, Hugging Face Hub serves as a central repository where individuals can share, explore, and innovate with ML technologies. The platform is dedicated to fostering an environment of learning, collaboration, and ethical AI, bringing together a rapidly expanding community of ML engineers, scientists, and enthusiasts.\n", "\n", "In our pursuit to enhance the versatility and utility of SQuADDS for quantum hardware developers and machine learning researchers, we have chosen to host our database on the HuggingFace platform. This strategic decision leverages HuggingFace's capability to support and facilitate research with machine learning models, aligning with methodologies outlined in various references. By making the SQuADDS database readily accessible on this platform, we aim to contribute to the development of cutting-edge Electronic Design Automation (EDA) tools. Our goal is to replicate the transformative impact witnessed in the semiconductor industry, now in the realm of superconducting quantum hardware.\n", "\n", "Key to our choice of HuggingFace is its [datasets](https://huggingface.co/datasets) library, which provides a unified interface for accessing a wide range of datasets. This feature is integral to SQuADDS, offering a streamlined and cohesive interface to our database. The decentralized nature of HuggingFace datasets significantly enhances community-driven development and access, a functionality that can be challenging to implement with traditional data storage platforms. 
This aspect of HuggingFace aligns perfectly with our vision for SQuADDS, enabling us to foster a collaborative and open environment for innovation in quantum technology." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Datasets & Configurations\n", "\n", "As seen in [Tutorial 1](https://lfl-lab.github.io/SQuADDS/source/tutorials/Tutorial-1_Getting_Started_with_SQuADDS.html#Accessing-the-SQuADDS-Database-using-the-HuggingFace-API), we have organized the SQuADDS database into datasets and configurations. Let's quickly review these two concepts and how they are used in SQuADDS." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Each configuration in the dataset is uniquely identified by its `config` string. For the SQuADDS Database, the `config` string is created in the following format:\n", "\n", "```python\n", "config = f\"{component}-{component_name}-{data_type}\"\n", "```\n", "\n", "where `component` is the type of component (e.g. `qubit`), `component_name` is the name of the corresponding Qiskit Metal component class (e.g. `TransmonCross`), and `data_type` is the type of simulation data that has been contributed (e.g. `cap_matrix`).\n", "\n", "This structured approach ensures that users can query specific parts of the dataset relevant to their work, such as a particular type of qubit design or simulation results. 
This API abstraction allows for more complex queries and operations on the data, facilitating a more efficient workflow for researchers and developers.\n", "\n", "Let's check what the `config` strings look like for our database:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "['qubit-TransmonCross-cap_matrix', 'cavity_claw-RouteMeander-eigenmode', 'coupler-NCap-cap_matrix']\n" ] } ], "source": [ "from datasets import get_dataset_config_names\n", "\n", "configs = get_dataset_config_names(\"SQuADDS/SQuADDS_DB\")\n", "print(configs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can now access the database using the `config` string. For example, if you want to access the `qubit-TransmonCross-cap_matrix` configuration, you can do so by executing the following cell:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "d7f86d1524384c9294e4575982170a6f", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from datasets import load_dataset\n", "\n", "qubit_data = load_dataset(\"SQuADDS/SQuADDS_DB\", \"qubit-TransmonCross-cap_matrix\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Database Schema\n", "\n", "Each contributed entry to SQuADDS must **AT LEAST** have the following fields. 
You can add as many supplementary fields as you like.\n", "\n", "```json\n", "{\n", " \"design\":{\n", " \"design_tool\": design_tool_name,\n", " \"design_options\": design_options,\n", " },\n", " \"sim_options\":{\n", " \"setup\": sim_setup_options,\n", " \"simulator\": simulator_name,\n", " },\n", " \"sim_results\":{\n", " \"result1\": sim_result1,\n", " \"result1_unit\": unit1,\n", " \"result2\": sim_result2,\n", " \"result2_unit\": unit2,\n", " },\n", " \"contributor\":{\n", " \"group\": group_name,\n", " \"PI\": pi_name,\n", " \"institution\": institution,\n", " \"uploader\": user_name,\n", " \"misc\": contrib_misc,\n", " \"date_created\": \"YYYY-MM-DD-HHMMSS\",\n", " },\n", "}\n", "```\n", "\n", "If all the `sim_results` entries share the same units, you can use a single `\"units\": units` field instead of repeating the unit for each result.\n", "\n", "**Note:** The `\"contributor\"` field is automatically added by the SQuADDS API when you upload your dataset. You do not need to add this field yourself." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's look at the schema for the `qubit-TransmonCross-cap_matrix` configuration, which used `qiskit-metal` as the design tool and `Ansys HFSS` as the simulation engine." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a98b52df08b94d1099f4f66f9416e3c9", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading readme: 0%| | 0.00/2.25k [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from datasets import load_dataset\n", "\n", "load_dataset(\"SQuADDS/SQuADDS_DB\", \"qubit-TransmonCross-cap_matrix\")[\"train\"].features" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Contributing to an existing configuration\n", "\n", "### Single Entry Contribution\n", "\n", "Let's revisit [Tutorial 2](https://lfl-lab.github.io/SQuADDS/source/tutorials/Tutorial-2_Simulate_interpolated_designs.html#Simulate-the-Target-Design) where we simulated a novel `TransmonCross` qubit design. We will now learn how to contribute this design to the SQuADDS database."
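, "\n", "Before walking through the API steps, it may help to see roughly what a single contributed entry looks like against the schema from the previous section. The sketch below uses purely illustrative placeholder option names and values (they are not real simulation results):\n", "\n", "```python
# Illustrative entry skeleton following the minimal schema shown earlier.
# All option names and numeric values below are placeholders, not real data.
entry = {
    'design': {
        'design_tool': 'qiskit-metal',
        'design_options': {'cross_width': '30um', 'cross_length': '240um'},
    },
    'sim_options': {
        'setup': {'max_passes': 30, 'min_converged_passes': 2},
        'simulator': 'Ansys HFSS',
    },
    'sim_results': {
        'cross_to_ground': 85.0,  # placeholder capacitance value
        'units': 'fF',
    },
}

# The 'contributor' block is added automatically by the SQuADDS API on upload.
required = {'design', 'sim_options', 'sim_results'}
missing_fields = required - entry.keys()
print('Missing required fields:', missing_fields or 'none')
```\n", "\n", "A dictionary shaped like this is what the validation step checks against the reference schema for the chosen configuration."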
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have provided a simple API for contributing to the SQuADDS database. The high-level steps for contributing to an existing configuration via the SQuADDS API are as follows:\n", "\n", "1. **Select the dataset configuration**: Choose the dataset configuration you would like to contribute to.\n", "\n", "2. **Validate your data**: Validate your data against the dataset configuration.\n", "\n", "3. **Submit your data**: Submit your data to the SQuADDS database." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Using the example from [Tutorial 2](https://lfl-lab.github.io/SQuADDS/source/tutorials/Tutorial-2_Simulate_interpolated_designs.html#Extracting-the-data-needed-for-contributing-to-the-dataset), we will now go through each of these steps." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "from squadds.database.contributor import ExistingConfigData" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "ca66c5a51535496991ae5be5b8d268c0", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading readme: 0%| | 0.00/2.25k [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "data type in 'ref': \n", "\n", "Missing keys found. 
These keys are present in one dictionary but not the other:\n", "\n", "Key: design.design_options.cplr_opts.finger_length is missing in 'data'\n", "Key: design.design_options.cplr_opts.cap_gap_ground is missing in 'data'\n", "Key: design.design_options.cplr_opts.cap_width is missing in 'data'\n", "Key: design.design_options.cplr_opts.cap_distance is missing in 'data'\n", "Key: design.design_options.cplr_opts.cap_gap is missing in 'data'\n", "Key: design.design_options.cplr_opts.finger_count is missing in 'data'\n", "Key: design.design_options.cpw_opts.lead.end_straight is missing in 'data'\n", "Key: design.design_options.cpw_opts.lead.start_jogged_extension is missing in 'data'\n" ] } ], "source": [ "data_eigenmode.validate()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "These missing keys can be safely ignored during validation: for this configuration, more than one coupler type is acceptable." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "After validation is complete, you can follow the same code as above to contribute your data to the repository and prepare for the final PR." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, let's see how to configure and validate data for a CapNInterdigital capacitance matrix." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "27dddbc4cc5e4823afb50c66e0df5f51", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading readme: 0%| | 0.00/2.48k [00:00<?, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "data_capn = ExistingConfigData('coupler-NCap-cap_matrix')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "---\n", "\n", "#### This code is a part of SQuADDS\n", "\n", "Developed by Sadman Ahmed Shanto\n", "\n", "This tutorial is written by Sadman Ahmed Shanto\n", "\n", "© Copyright Sadman Ahmed Shanto & Eli Levenson-Falk 2023.\n", "\n", "This code is licensed under the MIT License. You may obtain a copy of this license in the LICENSE.txt file in the root directory of this source tree.\n", "\n", "Any modifications or derivative works of this code must retain this copyright notice, and modified files need to carry a notice indicating that they have been altered from the originals." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.11.6 ('qiskit_metal')", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" }, "vscode": { "interpreter": { "hash": "b0f7559acaf54a9a25b7487b1d76179119f6a08c446a22a42471157036c8af6d" } } }, "nbformat": 4, "nbformat_minor": 2 }