{ "cells": [ { "cell_type": "markdown", "id": "f725926d19b2", "metadata": {}, "source": [ "# Tutorial 10: HFSS Driven-Modal Capacitance Extraction for Qubit-Claw and NCap\n", "\n", "This tutorial reruns two capacitance-style SQuADDS geometries in **Ansys HFSS\n", "driven-modal** using **Qiskit Metal** as the geometry/rendering frontend:\n", "\n", "1. a qubit-claw geometry sourced from the `qubit-TransmonCross-cap_matrix`\n", " dataset; and\n", "2. an NCap geometry sourced from the `coupler-NCap-cap_matrix` dataset.\n", "\n", "The goals are:\n", "\n", "- make the driven-modal layer stack explicit;\n", "- checkpoint artifacts so a crash does not force a full restart;\n", "- save raw S/Y parameter artifacts, Touchstone files, and extracted\n", " capacitance-vs-frequency traces; and\n", "- compare the extracted capacitance matrix against the existing Q3D-backed\n", " SQuADDS dataset values.\n", "\n", "This file is written as a Python script with `# %%` cells so it can be run in\n", "VS Code, Spyder, Jupyter, or as a plain Python script on the Windows machine\n", "that has Ansys HFSS installed.\n", "\n" ] }, { "cell_type": "markdown", "id": "4aba1b26fbfd", "metadata": {}, "source": [ "## Physics viewpoint and API viewpoint\n", "\n", "There are two complementary stories in this tutorial:\n", "\n", "1. **Physics story.**\n", " A driven-modal multiport solve gives us a frequency-dependent admittance\n", " matrix. In the low-loss, low-frequency limit, the imaginary part of that\n", " admittance grows linearly with frequency, so Im[Y(ω)]/ω behaves like an\n", " effective capacitance network. That lets us ask whether HFSS driven-modal\n", " can reproduce the same Maxwell capacitance picture that SQuADDS normally\n", " stores from Q3D.\n", "\n", "2. **API story.**\n", " We want a reusable, restart-safe workflow that does not hard-code one\n", " geometry. 
The request/setup/sweep/artifact objects in\n", " `squadds.simulations.drivenmodal.*` are therefore treated as first-class\n", " pieces of the tutorial, not hidden internals. The notebook shows how to\n", " assemble them, run the solve, checkpoint the artifacts, and inspect the\n", " results.\n", "\n" ] }, { "cell_type": "code", "id": "6024de9d949e", "metadata": {}, "source": [ "from __future__ import annotations\n", "\n", "import json\n", "from pathlib import Path\n", "from typing import Any\n", "\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "import pandas as pd\n", "from datasets import load_dataset\n", "from qiskit_metal import Dict\n", "from qiskit_metal.qlibrary.qubits.transmon_cross import TransmonCross\n", "from qiskit_metal.renderers.renderer_ansys.hfss_renderer import QHFSSRenderer\n", "\n", "from squadds.simulations.drivenmodal.artifacts import load_run_manifest, mark_stage_complete\n", "from squadds.simulations.drivenmodal.capacitance import (\n", " capacitance_dataframe_from_y_sweep,\n", " capacitance_matrix_from_y,\n", " maxwell_capacitance_dataframe,\n", ")\n", "from squadds.simulations.drivenmodal.design import (\n", " apply_buffered_chip_bounds,\n", " apply_cryo_silicon_material_properties,\n", " connect_renderer_to_new_ansys_design,\n", " create_multiplanar_design,\n", " ensure_drivenmodal_setup,\n", " format_exception_for_console,\n", " render_drivenmodal_design,\n", " run_drivenmodal_sweep,\n", " safe_ansys_design_name,\n", ")\n", "from squadds.simulations.drivenmodal.hfss_data import (\n", " parameter_dataframe_to_tensor,\n", " write_touchstone_from_dataframe,\n", ")\n", "from squadds.simulations.drivenmodal.hfss_runner import run_drivenmodal_request\n", "from squadds.simulations.drivenmodal.models import (\n", " CapacitanceExtractionRequest,\n", " DrivenModalArtifactPolicy,\n", " DrivenModalLayerStackSpec,\n", " DrivenModalSetupSpec,\n", " DrivenModalSweepSpec,\n", ")\n", "from squadds.simulations.drivenmodal.ports import 
build_capacitance_port_specs, split_rendered_ports\n", "from squadds.simulations.utils_component_factory import create_ncap_coupler\n", "\n", "try:\n", " from IPython.display import display as display\n", "except ImportError: # pragma: no cover - plain Python fallback for non-notebook execution\n", "\n", " def display(obj):\n", " print(obj)\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "9844b10852d2", "metadata": {}, "source": [ "## Runtime knobs\n", "\n", "`RUN_TAG` is the easiest way to force a brand-new AEDT design while keeping the\n", "rest of the script unchanged. Leave `FORCE_RERUN = False` for normal resume\n", "behavior. Set it to `True` only when you intentionally want to regenerate the\n", "HFSS artifacts for the same `RUN_TAG`.\n", "\n" ] }, { "cell_type": "code", "id": "101d12273a87", "metadata": {}, "source": [ "RUN_TAG = \"v3\"\n", "FORCE_RERUN = False\n", "MAX_SOLVE_ATTEMPTS = 3\n", "\n", "REMOTE_REPO_ID = \"SQuADDS/SQuADDS_DB\"\n", "QUBIT_CONFIG = \"qubit-TransmonCross-cap_matrix\"\n", "NCAP_CONFIG = \"coupler-NCap-cap_matrix\"\n", "QUBIT_REFERENCE_INDEX = 3\n", "NCAP_REFERENCE_INDEX = 1\n", "\n", "LAYER_STACK = DrivenModalLayerStackSpec(\n", " preset=\"squadds_hfss_v1\",\n", " chip_name=\"main\",\n", " metal_thickness_um=0.2,\n", " substrate_thickness_um=750.0,\n", ")\n", "SETUP = DrivenModalSetupSpec(\n", " name=\"DrivenModalSetup\",\n", " freq_ghz=5.0,\n", " max_delta_s=0.005,\n", " max_passes=20,\n", " min_passes=2,\n", " min_converged=5,\n", " pct_refinement=30,\n", " basis_order=-1,\n", ")\n", "SWEEP = DrivenModalSweepSpec(\n", " name=\"DrivenModalSweep\",\n", " start_ghz=1.0,\n", " stop_ghz=10.0,\n", " count=400,\n", " sweep_type=\"Interpolating\",\n", " save_fields=False,\n", " interpolation_tol=0.005,\n", " interpolation_max_solutions=400,\n", ")\n", "ARTIFACTS = DrivenModalArtifactPolicy(\n", " export_touchstone=True,\n", " export_y_parameters=True,\n", " export_capacitance_tables=True,\n", " 
checkpoint_after_stage=True,\n", " resume_existing=True,\n", ")\n", "\n", "RUNTIME_ROOT = Path(\"tutorials/runtime/drivenmodal_capacitance\")\n", "CHECKPOINT_ROOT = RUNTIME_ROOT / \"checkpoints\"\n", "HFSS_PROJECT_ROOT = RUNTIME_ROOT / \"hfss_projects\"\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "4d0b23a7e399", "metadata": {}, "source": [ "## Helpers\n", "\n", "\n" ] }, { "cell_type": "code", "id": "0a97cf65d455", "metadata": {}, "source": [ "def ensure_runtime_dirs() -> None:\n", " CHECKPOINT_ROOT.mkdir(parents=True, exist_ok=True)\n", " HFSS_PROJECT_ROOT.mkdir(parents=True, exist_ok=True)\n", "\n", "\n", "def stage_is_complete(manifest_path: Path, stage_name: str) -> bool:\n", " manifest = load_run_manifest(manifest_path)\n", " return manifest[\"stages\"][stage_name][\"status\"] == \"complete\"\n", "\n", "\n", "def dump_json(path: Path, payload: dict[str, Any]) -> None:\n", " path.parent.mkdir(parents=True, exist_ok=True)\n", " path.write_text(json.dumps(payload, indent=2, sort_keys=True, default=str))\n", "\n", "\n", "def prepare_renderer_project(renderer: QHFSSRenderer, project_dir: Path, project_name: str) -> Path:\n", " \"\"\"Create a fresh HFSS project and save it to an absolute AEDT path.\n", "\n", " This avoids two separate older Windows-stack issues:\n", "\n", " 1. the pyEPR project-path duplication bug triggered by passing\n", " ``project_path``/``project_name`` through ``QHFSSRenderer`` options; and\n", " 2. 
the stale active-project reconnect bug where ``renderer.start()`` binds a\n", " brand-new renderer to the previously active HFSS design/setup before the\n", " tutorial can create its own fresh project for the next run.\n", " \"\"\"\n", " project_dir = project_dir.resolve()\n", " project_dir.mkdir(parents=True, exist_ok=True)\n", " project_file = project_dir / f\"{project_name}.aedt\"\n", "\n", " # ``new_ansys_project()`` creates and activates a blank project through the\n", " # Ansys Desktop API without touching the stale setup state from a prior run.\n", " renderer.new_ansys_project()\n", " renderer.connect_ansys()\n", " renderer.initiated = True\n", " renderer.pinfo.project.save(str(project_file))\n", " return project_file\n", "\n", "\n", "def load_reference_row(config_name: str, index: int) -> dict[str, Any]:\n", " dataset = load_dataset(REMOTE_REPO_ID, config_name, split=\"train\")\n", " return dataset[index]\n", "\n", "\n", "def nearest_frequency_index(freqs_hz: np.ndarray, target_ghz: float) -> int:\n", " return int(np.argmin(np.abs(freqs_hz - target_ghz * 1e9)))\n", "\n", "\n", "def summarize_qubit_claw(maxwell_df: pd.DataFrame) -> dict[str, float]:\n", " return {\n", " \"cross_to_ground\": abs(maxwell_df.loc[\"cross\", \"ground\"]) * 1e15,\n", " \"claw_to_ground\": abs(maxwell_df.loc[\"claw\", \"ground\"]) * 1e15,\n", " \"cross_to_claw\": abs(maxwell_df.loc[\"cross\", \"claw\"]) * 1e15,\n", " \"cross_to_cross\": abs(maxwell_df.loc[\"cross\", \"cross\"]) * 1e15,\n", " \"claw_to_claw\": abs(maxwell_df.loc[\"claw\", \"claw\"]) * 1e15,\n", " \"ground_to_ground\": abs(maxwell_df.loc[\"ground\", \"ground\"]) * 1e15,\n", " }\n", "\n", "\n", "def summarize_ncap(maxwell_df: pd.DataFrame) -> dict[str, float]:\n", " return {\n", " \"top_to_top\": abs(maxwell_df.loc[\"top\", \"top\"]) * 1e15,\n", " \"top_to_bottom\": abs(maxwell_df.loc[\"top\", \"bottom\"]) * 1e15,\n", " \"top_to_ground\": abs(maxwell_df.loc[\"top\", \"ground\"]) * 1e15,\n", " \"bottom_to_bottom\": 
abs(maxwell_df.loc[\"bottom\", \"bottom\"]) * 1e15,\n", " \"bottom_to_ground\": abs(maxwell_df.loc[\"bottom\", \"ground\"]) * 1e15,\n", " \"ground_to_ground\": abs(maxwell_df.loc[\"ground\", \"ground\"]) * 1e15,\n", " }\n", "\n", "\n", "def compare_against_reference(extracted: dict[str, float], reference: dict[str, float]) -> pd.DataFrame:\n", " rows = []\n", " for key, reference_value in reference.items():\n", " extracted_value = extracted[key]\n", " error_pct = np.nan\n", " if reference_value != 0:\n", " error_pct = 100.0 * (extracted_value - reference_value) / reference_value\n", " rows.append(\n", " {\n", " \"quantity\": key,\n", " \"drivenmodal_fF\": extracted_value,\n", " \"q3d_dataset_fF\": reference_value,\n", " \"percent_error\": error_pct,\n", " }\n", " )\n", " return pd.DataFrame(rows)\n", "\n", "\n", "def plot_capacitance_traces(cap_df: pd.DataFrame, title: str, entries: list[str]) -> None:\n", " freqs_ghz = cap_df[\"frequency_hz\"].to_numpy(dtype=float) / 1e9\n", " plt.figure(figsize=(10, 4))\n", " for entry in entries:\n", " plt.plot(freqs_ghz, cap_df[entry] * 1e15, label=entry.replace(\"_F\", \"\"))\n", " plt.xlabel(\"Frequency (GHz)\")\n", " plt.ylabel(\"Capacitance (fF)\")\n", " plt.title(title)\n", " plt.grid(True, alpha=0.25)\n", " plt.legend()\n", " plt.tight_layout()\n", " plt.show()\n", "\n", "\n", "def build_qubit_claw_request(reference_row: dict[str, Any]) -> CapacitanceExtractionRequest:\n", " run_id = f\"tutorial10-qubit-claw-{QUBIT_REFERENCE_INDEX:03d}-{RUN_TAG}\"\n", " return CapacitanceExtractionRequest(\n", " system_kind=\"qubit_claw\",\n", " design_payload={\n", " \"design_options\": reference_row[\"design\"][\"design_options\"],\n", " \"port_mapping\": {\n", " \"cross\": {\n", " \"component\": \"xmon\",\n", " \"pin\": \"rect_jj\",\n", " \"metadata\": {\"hfss_target\": \"junction\", \"draw_inductor\": False},\n", " },\n", " \"claw\": {\"component\": \"xmon\", \"pin\": \"readout\"},\n", " },\n", " },\n", " layer_stack=LAYER_STACK,\n", 
" setup=SETUP,\n", " sweep=SWEEP,\n", " artifacts=ARTIFACTS,\n", " metadata={\"run_id\": run_id},\n", " )\n", "\n", "\n", "def build_ncap_request(reference_row: dict[str, Any]) -> CapacitanceExtractionRequest:\n", " run_id = f\"tutorial10-ncap-{NCAP_REFERENCE_INDEX:03d}-{RUN_TAG}\"\n", " return CapacitanceExtractionRequest(\n", " system_kind=\"ncap\",\n", " design_payload={\n", " \"design_options\": reference_row[\"design\"][\"design_options\"],\n", " \"port_mapping\": {\n", " \"top\": {\"component\": \"cplr\", \"pin\": \"prime_start\"},\n", " \"bottom\": {\"component\": \"cplr\", \"pin\": \"second_end\"},\n", " },\n", " },\n", " layer_stack=LAYER_STACK,\n", " setup=SETUP,\n", " sweep=SWEEP,\n", " artifacts=ARTIFACTS,\n", " metadata={\"run_id\": run_id},\n", " )\n", "\n", "\n", "def build_qubit_claw_design(request: CapacitanceExtractionRequest, layer_stack_csv: Path):\n", " design, csv_path = create_multiplanar_design(\n", " layer_stack=request.layer_stack,\n", " layer_stack_path=layer_stack_csv,\n", " enable_renderers=True,\n", " )\n", " TransmonCross(design, \"xmon\", options=Dict(request.design_payload[\"design_options\"]))\n", " design.rebuild()\n", " return design, csv_path\n", "\n", "\n", "def build_ncap_design(request: CapacitanceExtractionRequest, layer_stack_csv: Path):\n", " design, csv_path = create_multiplanar_design(\n", " layer_stack=request.layer_stack,\n", " layer_stack_path=layer_stack_csv,\n", " enable_renderers=True,\n", " )\n", " create_ncap_coupler(dict(request.design_payload[\"design_options\"]), design)\n", " design.rebuild()\n", " return design, csv_path\n", "\n", "\n", "def run_capacitance_demo(\n", " *,\n", " label: str,\n", " request: CapacitanceExtractionRequest,\n", " build_design_fn,\n", " node_names: list[str],\n", " summarize_fn,\n", " reference_summary: dict[str, float],\n", ") -> dict[str, Any]:\n", " prepared = run_drivenmodal_request(request, checkpoint_dir=CHECKPOINT_ROOT)\n", " run_dir = 
Path(prepared[\"manifest\"][\"run_dir\"])\n", " manifest_path = run_dir / \"manifest.json\"\n", " artifacts_dir = run_dir / \"artifacts\"\n", " artifacts_dir.mkdir(parents=True, exist_ok=True)\n", "\n", " s_pickle = artifacts_dir / \"s_parameters.pkl\"\n", " y_pickle = artifacts_dir / \"y_parameters.pkl\"\n", " z_pickle = artifacts_dir / \"z_parameters.pkl\"\n", " touchstone_path = artifacts_dir / f\"{label}.s2p\"\n", " cap_table_path = artifacts_dir / \"capacitance_vs_frequency.parquet\"\n", " summary_path = artifacts_dir / \"summary.json\"\n", "\n", " if FORCE_RERUN:\n", " print(f\"[{label}] FORCE_RERUN=True. Existing artifacts will be overwritten if HFSS reruns.\")\n", "\n", " if stage_is_complete(manifest_path, \"postprocessed\") and not FORCE_RERUN and summary_path.exists():\n", " print(f\"[{label}] Reusing checkpointed postprocessed outputs from {run_dir}\")\n", " cap_df = pd.read_parquet(cap_table_path)\n", " summary = json.loads(summary_path.read_text())\n", " return {\"capacitance_vs_frequency\": cap_df, \"summary\": summary, \"run_dir\": run_dir}\n", "\n", " if (\n", " stage_is_complete(manifest_path, \"artifacts_exported\")\n", " and not FORCE_RERUN\n", " and s_pickle.exists()\n", " and y_pickle.exists()\n", " ):\n", " print(f\"[{label}] Loading checkpointed solver artifacts from {run_dir}\")\n", " s_df = pd.read_pickle(s_pickle)\n", " y_df = pd.read_pickle(y_pickle)\n", " else:\n", " base_project_dir = HFSS_PROJECT_ROOT / request.metadata[\"run_id\"]\n", " base_project_dir.mkdir(parents=True, exist_ok=True)\n", " dump_json(artifacts_dir / \"resolved_layer_stack.json\", {\"rows\": prepared[\"layer_stack\"]})\n", "\n", " port_specs = build_capacitance_port_specs(request.system_kind, request.design_payload)\n", " port_list, jj_to_port = split_rendered_ports(port_specs)\n", " last_error: BaseException | None = None\n", "\n", " for attempt in range(1, MAX_SOLVE_ATTEMPTS + 1):\n", " attempt_label = f\"attempt-{attempt:02d}\"\n", " attempt_id = 
f\"{request.metadata['run_id']}-{attempt_label}\"\n", " attempt_project_dir = base_project_dir / attempt_label\n", " attempt_project_dir.mkdir(parents=True, exist_ok=True)\n", " attempt_log_path = artifacts_dir / f\"solver_{attempt_label}.json\"\n", " ansys_design_name = safe_ansys_design_name(attempt_id)\n", " renderer = None\n", " try:\n", " design, layer_stack_csv = build_design_fn(request, artifacts_dir / \"layer_stack.csv\")\n", " renderer = QHFSSRenderer(\n", " design,\n", " initiate=False,\n", " options=Dict(),\n", " )\n", " project_file = prepare_renderer_project(renderer, attempt_project_dir, attempt_id)\n", " connect_renderer_to_new_ansys_design(\n", " renderer,\n", " ansys_design_name,\n", " \"drivenmodal\",\n", " )\n", " renderer.clean_active_design()\n", " cryo_material = apply_cryo_silicon_material_properties(renderer)\n", " dump_json(artifacts_dir / \"material_properties.json\", cryo_material)\n", " chip_box = apply_buffered_chip_bounds(\n", " design,\n", " selection=list(design.components.keys()),\n", " chip_name=request.layer_stack.chip_name,\n", " x_buffer_mm=0.2,\n", " y_buffer_mm=0.2,\n", " )\n", " dump_json(artifacts_dir / \"chip_box.json\", chip_box)\n", "\n", " render_drivenmodal_design(\n", " renderer,\n", " selection=list(design.components.keys()),\n", " port_list=port_list or None,\n", " jj_to_port=jj_to_port or None,\n", " box_plus_buffer=False,\n", " )\n", " mark_stage_complete(manifest_path, \"rendered\")\n", "\n", " setup = ensure_drivenmodal_setup(renderer, **request.setup.to_renderer_kwargs())\n", " mark_stage_complete(manifest_path, \"setup_created\")\n", "\n", " run_drivenmodal_sweep(\n", " renderer,\n", " setup,\n", " setup_name=request.setup.name,\n", " **request.sweep.to_renderer_kwargs(),\n", " )\n", " mark_stage_complete(manifest_path, \"sweep_completed\")\n", "\n", " s_df, y_df, z_df = renderer.get_all_Pparms_matrices(matrix_size=len(port_specs))\n", " s_df.to_pickle(s_pickle)\n", " y_df.to_pickle(y_pickle)\n", " 
z_df.to_pickle(z_pickle)\n", " write_touchstone_from_dataframe(s_df, matrix_size=len(port_specs), output_path=touchstone_path)\n", " dump_json(\n", " artifacts_dir / \"solver_artifacts.json\",\n", " {\n", " \"touchstone_path\": str(touchstone_path),\n", " \"s_pickle\": str(s_pickle),\n", " \"y_pickle\": str(y_pickle),\n", " \"z_pickle\": str(z_pickle),\n", " \"layer_stack_csv\": str(layer_stack_csv),\n", " \"project_dir\": str(attempt_project_dir),\n", " \"project_file\": str(project_file),\n", " \"ansys_design_name\": ansys_design_name,\n", " \"attempt_label\": attempt_label,\n", " \"chip_box\": chip_box,\n", " },\n", " )\n", " dump_json(\n", " attempt_log_path,\n", " {\n", " \"status\": \"success\",\n", " \"attempt_label\": attempt_label,\n", " \"project_dir\": str(attempt_project_dir),\n", " \"project_file\": str(project_file),\n", " \"ansys_design_name\": ansys_design_name,\n", " \"chip_box\": chip_box,\n", " },\n", " )\n", " mark_stage_complete(manifest_path, \"artifacts_exported\")\n", " break\n", " except Exception as exc:\n", " last_error = exc\n", " dump_json(\n", " attempt_log_path,\n", " {\n", " \"status\": \"failed\",\n", " \"attempt_label\": attempt_label,\n", " \"project_dir\": str(attempt_project_dir),\n", " \"ansys_design_name\": ansys_design_name,\n", " \"error\": format_exception_for_console(exc),\n", " },\n", " )\n", " print(\n", " f\"[{label}] HFSS solve {attempt}/{MAX_SOLVE_ATTEMPTS} failed: {format_exception_for_console(exc)}\"\n", " )\n", " if attempt == MAX_SOLVE_ATTEMPTS:\n", " raise\n", " print(f\"[{label}] Retrying with a fresh internal HFSS design...\")\n", " finally:\n", " if renderer is not None:\n", " try:\n", " renderer.disconnect_ansys()\n", " except Exception as exc: # pragma: no cover - best effort cleanup on the HFSS machine\n", " print(f\"[{label}] Warning while disconnecting Ansys: {format_exception_for_console(exc)}\")\n", " else: # pragma: no cover - defensive guard for analyzers that exit the loop unexpectedly\n", " if 
last_error is not None:\n", " raise last_error\n", "\n", " freqs_hz, y_matrices = parameter_dataframe_to_tensor(y_df, matrix_size=2, parameter_prefix=\"Y\")\n", " cap_df = capacitance_dataframe_from_y_sweep(freqs_hz, y_matrices, node_names=node_names)\n", " cap_df.to_parquet(cap_table_path, index=False)\n", "\n", " ref_index = nearest_frequency_index(freqs_hz, request.setup.freq_ghz)\n", " maxwell_df = maxwell_capacitance_dataframe(\n", " capacitance_matrix_from_y(freqs_hz[ref_index], y_matrices[ref_index]),\n", " node_names=node_names,\n", " )\n", " extracted_summary = summarize_fn(maxwell_df)\n", " comparison_df = compare_against_reference(extracted_summary, reference_summary)\n", " comparison_df.to_csv(artifacts_dir / \"comparison.csv\", index=False)\n", "\n", " summary = {\n", " \"reference_frequency_ghz\": freqs_hz[ref_index] / 1e9,\n", " \"reference_summary_fF\": reference_summary,\n", " \"drivenmodal_summary_fF\": extracted_summary,\n", " \"comparison_rows\": comparison_df.to_dict(orient=\"records\"),\n", " \"artifacts\": {\n", " \"touchstone_path\": str(touchstone_path),\n", " \"capacitance_table\": str(cap_table_path),\n", " \"comparison_csv\": str(artifacts_dir / \"comparison.csv\"),\n", " },\n", " }\n", " dump_json(summary_path, summary)\n", " mark_stage_complete(manifest_path, \"postprocessed\")\n", " return {\"capacitance_vs_frequency\": cap_df, \"summary\": summary, \"run_dir\": run_dir}\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "b87fabe1ce15", "metadata": {}, "source": [ "## Load reference dataset rows\n", "\n" ] }, { "cell_type": "code", "id": "b43274fae520", "metadata": {}, "source": [ "ensure_runtime_dirs()\n", "\n", "qubit_reference_row = load_reference_row(QUBIT_CONFIG, QUBIT_REFERENCE_INDEX)\n", "ncap_reference_row = load_reference_row(NCAP_CONFIG, NCAP_REFERENCE_INDEX)\n", "\n", "print(f\"Qubit-claw reference design options (index={QUBIT_REFERENCE_INDEX}):\")\n", 
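"\n",
"# Aside: the driven-modal capacitance extraction used later in this tutorial\n",
"# rests on the low-loss identity C_eff(omega) = Im[Y(omega)] / omega. The\n",
"# helper below is an illustrative sketch of that conversion only; it is not\n",
"# part of the SQuADDS API:\n",
"def effective_capacitance_fF(y_value: complex, freq_hz: float) -> float:\n",
"    omega = 2.0 * np.pi * freq_hz\n",
"    return (y_value.imag / omega) * 1e15\n",
"\n",
"# Example: a purely capacitive admittance Y = 1j*omega*C with C = 60 fF at\n",
"# 5 GHz round-trips through the helper to ~60 fF.\n",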
"print(json.dumps(qubit_reference_row[\"design\"][\"design_options\"], indent=2))\n", "print(f\"\\nNCap reference design options (index={NCAP_REFERENCE_INDEX}):\")\n", "print(json.dumps(ncap_reference_row[\"design\"][\"design_options\"], indent=2))\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "2bf58b66ca02", "metadata": {}, "source": [ "## Run the qubit-claw driven-modal extraction\n", "\n", "The qubit-claw run uses:\n", "\n", "- one lumped port on the readout claw connector pin; and\n", "- one lumped JJ port rendered on the transmon junction element.\n", "\n" ] }, { "cell_type": "code", "id": "86a17e0de637", "metadata": {}, "source": [ "qubit_request = build_qubit_claw_request(qubit_reference_row)\n", "qubit_reference_summary = {\n", " key: float(qubit_reference_row[\"sim_results\"][key])\n", " for key in [\n", " \"cross_to_ground\",\n", " \"claw_to_ground\",\n", " \"cross_to_claw\",\n", " \"cross_to_cross\",\n", " \"claw_to_claw\",\n", " \"ground_to_ground\",\n", " ]\n", "}\n", "\n", "qubit_result = run_capacitance_demo(\n", " label=\"qubit_claw\",\n", " request=qubit_request,\n", " build_design_fn=build_qubit_claw_design,\n", " node_names=[\"cross\", \"claw\"],\n", " summarize_fn=summarize_qubit_claw,\n", " reference_summary=qubit_reference_summary,\n", ")\n", "\n", "qubit_comparison_df = pd.DataFrame(qubit_result[\"summary\"][\"comparison_rows\"])\n", "display(qubit_comparison_df)\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "76a06bea34ca", "metadata": {}, "source": [ "## Run the NCap driven-modal extraction\n", "\n" ] }, { "cell_type": "code", "id": "71b8cbd3efb8", "metadata": {}, "source": [ "ncap_request = build_ncap_request(ncap_reference_row)\n", "ncap_reference_summary = {\n", " key: float(ncap_reference_row[\"sim_results\"][key])\n", " for key in [\n", " \"top_to_top\",\n", " \"top_to_bottom\",\n", " \"top_to_ground\",\n", " \"bottom_to_bottom\",\n", " 
\"bottom_to_ground\",\n", "        \"ground_to_ground\",\n", "    ]\n", "}\n", "\n", "ncap_result = run_capacitance_demo(\n", "    label=\"ncap\",\n", "    request=ncap_request,\n", "    build_design_fn=build_ncap_design,\n", "    node_names=[\"top\", \"bottom\"],\n", "    summarize_fn=summarize_ncap,\n", "    reference_summary=ncap_reference_summary,\n", ")\n", "\n", "ncap_comparison_df = pd.DataFrame(ncap_result[\"summary\"][\"comparison_rows\"])\n", "display(ncap_comparison_df)\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "a61a50310bf4", "metadata": {}, "source": [ "## Plot capacitance-vs-frequency traces\n", "\n" ] }, { "cell_type": "code", "id": "2d76c7b0562f", "metadata": {}, "source": [ "plot_capacitance_traces(\n", "    qubit_result[\"capacitance_vs_frequency\"],\n", "    title=\"Qubit-claw capacitance vs frequency\",\n", "    entries=[\n", "        \"cross__cross_F\",\n", "        \"cross__claw_F\",\n", "        \"claw__claw_F\",\n", "    ],\n", ")\n", "\n", "plot_capacitance_traces(\n", "    ncap_result[\"capacitance_vs_frequency\"],\n", "    title=\"NCap capacitance vs frequency\",\n", "    entries=[\n", "        \"top__top_F\",\n", "        \"top__bottom_F\",\n", "        \"bottom__bottom_F\",\n", "    ],\n", ")\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "9bc6ce02ebfc", "metadata": {}, "source": [ "## Inspect the explicit layer stack and checkpoint outputs\n", "\n", "The simulation artifacts live under:\n", "\n", "- `tutorials/runtime/drivenmodal_capacitance/checkpoints/<run_id>/`\n", "- `tutorials/runtime/drivenmodal_capacitance/hfss_projects/<run_id>/`\n", "\n", "Each run directory includes:\n", "\n", "- `manifest.json` with resumable stage state;\n", "- `artifacts/layer_stack.csv` and `artifacts/resolved_layer_stack.json`;\n", "- `artifacts/*.pkl` for raw complex S/Y/Z matrices;\n", "- `artifacts/*.s2p` Touchstone exports; and\n", "- `artifacts/capacitance_vs_frequency.parquet` for fast downstream analysis.\n", "\n" ] }, { "cell_type": "code", "id": 
"1ee530227553", "metadata": {}, "source": [ "print(\"Qubit-claw run directory:\", qubit_result[\"run_dir\"])\n", "print(\"NCap run directory:\", ncap_result[\"run_dir\"])\n", "\n", "qubit_layer_stack = pd.read_csv(qubit_result[\"run_dir\"] / \"artifacts\" / \"layer_stack.csv\")\n", "ncap_layer_stack = pd.read_csv(ncap_result[\"run_dir\"] / \"artifacts\" / \"layer_stack.csv\")\n", "\n", "print(\"\\nQubit-claw layer stack:\")\n", "display(qubit_layer_stack)\n", "print(\"\\nNCap layer stack:\")\n", "display(ncap_layer_stack)\n", "\n", "\n" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "id": "7289c1452c87", "metadata": {}, "source": [ "## Dataset and API outlook\n", "\n", "The long-term SQuADDS driven-modal dataset layout for capacitance-style runs is\n", "expected to include:\n", "\n", "- a compact summary row in `SQuADDS_DB` with:\n", " - geometry and layer-stack metadata,\n", " - solver setup metadata,\n", " - reference-frequency capacitance summaries,\n", " - links to heavy artifacts, and\n", " - provenance for restart/reproduction;\n", "- heavy sidecar artifacts containing:\n", " - Touchstone files,\n", " - dense Y-parameter tables,\n", " - capacitance-vs-frequency traces, and\n", " - checkpoint manifests / postprocessing summaries.\n", "\n", "The API direction is:\n", "\n", "- `CapacitanceExtractionRequest(...)` to declare geometry, layer stack, setup,\n", " sweep, and artifact policy;\n", "- `AnsysSimulator.run_drivenmodal(request)` or the lower-level driven-modal\n", " runner to initialize checkpoint state; and\n", "- small reusable postprocessing helpers to load S/Y data, compute\n", " capacitance-vs-frequency, export Touchstone files, and compare against\n", " existing Q3D-backed records.\n", "\n" ] }, { "cell_type": "markdown", "id": "ae72704f7f71", "metadata": {}, "source": [ "## License\n", "\n", "
\n",
"This code is a part of SQuADDS\n",
"\n",
"Developed by Sadman Ahmed Shanto\n",
"\n",
"This tutorial is written by Sadman Ahmed Shanto and OpenAI Codex\n",
"\n",
"© Copyright Sadman Ahmed Shanto & Eli Levenson-Falk 2024.\n",
"\n",
"This code is licensed under the MIT License. You may\n",
"obtain a copy of this license in the LICENSE.txt file in the root directory\n",
"of this source tree.\n",
"\n",
"Any modifications or derivative works of this code must retain this\n",
"copyright notice, and modified files need to carry a notice indicating\n",
"that they have been altered from the originals.\n"
] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.11" } }, "nbformat": 4, "nbformat_minor": 5 }