diff --git a/CHANGELOG.md b/CHANGELOG.md
index 45251a8..81989fd 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,6 @@
# Changelog
-## 1.9.2 (2025-10-16)
+## 1.9.4 (2025-10-16)
- added `--output-file` argument
- added stack level `exports` - pre defined `stack_name`, `stack_env`, `elapsed_time` and user defined
diff --git a/README.md b/README.md
index fe97e84..f23864b 100644
--- a/README.md
+++ b/README.md
@@ -1,372 +1,387 @@
-
-
-[logo]: https://stackql.io/img/stackql-logo-bold.png "stackql logo"
-[deploylogo]: https://stackql.io/img/stackql-deploy-logo.png "stackql-deploy logo"
-[stackqlrepo]: https://github.com/stackql/stackql
-[homepage]: https://stackql.io/
-[docs]: https://stackql.io/docs
-[blog]: https://stackql.io/blog
-[registry]: https://github.com/stackql/stackql-provider-registry
-
-
-
-[pypi]: https://pypi.org/project/stackql-deploy/
-
-
-
-[badge1]: https://img.shields.io/badge/platform-windows%20macos%20linux-brightgreen "Platforms"
-[badge2]: https://img.shields.io/pypi/v/stackql-deploy "PyPi Version"
-[badge3]: https://img.shields.io/pypi/dm/stackql-deploy "PyPi Downloads"
-[badge4]: https://img.shields.io/github/license/stackql/stackql "License"
-
-
-
-[discussions]: https://github.com/orgs/stackql/discussions
-[issues]: https://github.com/stackql/stackql-deploy/issues/new/choose
-
-
-
-[twitter]: https://twitter.com/stackql
-
-
-
-
-[![logo]][stackqlrepo]
-![badge1]
-![badge2]
-![badge3]
-![badge4]
-
-
-
-
-### Model driven resource provisioning and deployment framework using StackQL.
-
-
-
-
-
-[**PyPi**][pypi]
-[**Raise an Issue**][issues]
-
-
-
-
-## About The Project
-
-[**`stackql-deploy`**][pypi] is an open-source command line utility which implements a declarative, model driven framework to deploy and manage multi cloud stacks using [**`stackql`**][stackqlrepo]. [**`stackql-deploy`**][pypi] is distributed as a Python script to be used as a CLI tool, do the following to get started:
-
-
-```bash
-pip install stackql-deploy
-```
-
-> **Note for macOS users**
-> to install `stackql-deploy` in a virtual environment (which may be necessary on **macOS**), use the following:
->
-> ```bash
-> python3 -m venv myenv
-> source myenv/bin/activate
-> pip install stackql-deploy
-> ```
-
-## About StackQL
-
-StackQL is a utility which allows you to query and interact with cloud and SaaS resources in real time using SQL grammar. StackQL supports a full set of SQL query/DML grammar, including `JOIN`, `UNION` adn subquery functionality and supports mutation operations on cloud and SaaS resources such as `create`, `update` and `delete`, implemented as `INSERT`, `UPDATE` and `DELETE` respectively. StackQL also supports grammar for performing lifecycle operations such as starting or stopping a VM using the `EXEC` statement.
-
-StackQL provider definitions are defined in plaintext OpenAPI extensions to the providers specification. These definitions are then used to generate the SQL schema and the API client. The source for the provider definitions are stored in the [**StackQL Registry**][registry].
-
-## How it works
-
-
-
-A **`stackql-deploy`** project is a directory containing StackQL scripts with a manifest file at the root of the directory, for example:
-
-```
-├── example_stack
-│ ├── resources
-│ │ └── monitor_resource_group.iql
-│ ├── stackql_manifest.yml
-```
-
-the `stackql_manifest.yml` defines the resources in the stackql with their properties which can be scoped by environments, for example:
-
-```yaml
-version: 1
-name: example_stack
-description: oss activity monitor stack
-providers:
- - azure
-globals:
- - name: subscription_id
- description: azure subscription id
- value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
- - name: location
- value: eastus
- - name: resource_group_name_base
- value: "activity-monitor"
-resources:
- - name: monitor_resource_group
- description: azure resource group for activity monitor
- props:
- - name: resource_group_name
- description: azure resource group name
- value: "{{ globals.resource_group_name_base }}-{{ globals.stack_env }}"
- # OR YOU CAN DO...
- # values:
- # prd:
- # value: "activity-monitor-prd"
- # sit:
- # value: "activity-monitor-sit"
- # dev:
- # value: "activity-monitor-dev"
-```
-
-> use `stackql-deploy init {stack_name}` to create a project directory with sample files
-
-Deployment orchestration using `stackql-deploy` includes:
-
-- **_pre-flight_** checks, which are StackQL queries that check for the existence or current configuration state of a resource
-- **_deployment_** scripts, which are StackQL queries to create or update resoruces (or delete in the case of de-provisioning)
-- **_post-deployment_** tests, which are StackQL queries to confirm that resources were deployed and have the desired state
-
-**Performance Optimization**: `stackql-deploy` uses an intelligent query optimization strategy which is described here:
-
-```mermaid
-graph TB
- A[Start] --> B{foreach\nresource}
- B --> C{exports query\navailable?}
- C -- Yes --> D[try exports first\n🔄 optimal path]
- C -- No --> E[exists\ncheck]
- D --> F{exports\nsuccess?}
- F -- Yes --> G[✅ validated with\n1 query only]
- F -- No --> E
- E --> H{resource\nexists?}
- H -- Yes --> I[run update\nor createorupdate query]
- H -- No --> J[run create\nor createorupdate query]
- I --> K[run statecheck check]
- J --> K
- G --> L[reuse exports result]
- K --> M{End}
- L --> M
-```
-
-### `INSERT`, `UPDATE`, `DELETE` queries
-
-Mutation operations are defined as `.iql` files in the `resources` directory, these are templates that are rendered with properties or environment context variables at run time, for example:
-
-```sql
-/*+ create */
-INSERT INTO azure.resources.resource_groups(
- resourceGroupName,
- subscriptionId,
- data__location
-)
-SELECT
- '{{ resource_group_name }}',
- '{{ subscription_id }}',
- '{{ location }}'
-
-/*+ update */
-UPDATE azure.resources.resource_groups
-SET data__location = '{{ location }}'
-WHERE resourceGroupName = '{{ resource_group_name }}'
- AND subscriptionId = '{{ subscription_id }}'
-
-/*+ delete */
-DELETE FROM azure.resources.resource_groups
-WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'
-```
-
-### Test Queries
-
-Test files are defined as `.iql` files in the `resources` directory, these files define the per-flight and post-deploy checks to be performed, for example:
-
-```sql
-/*+ exists */
-SELECT COUNT(*) as count FROM azure.resources.resource_groups
-WHERE subscriptionId = '{{ subscription_id }}'
-AND resourceGroupName = '{{ resource_group_name }}'
-
-/*+ statecheck, retries=5, retry_delay=5 */
-SELECT COUNT(*) as count FROM azure.resources.resource_groups
-WHERE subscriptionId = '{{ subscription_id }}'
-AND resourceGroupName = '{{ resource_group_name }}'
-AND location = '{{ location }}'
-AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'
-
-/*+ exports */
-SELECT resourceGroupName, location, JSON_EXTRACT(properties, '$.provisioningState') as state
-FROM azure.resources.resource_groups
-WHERE subscriptionId = '{{ subscription_id }}'
-AND resourceGroupName = '{{ resource_group_name }}'
-```
-
-### Query Optimization
-
-`stackql-deploy` implements intelligent query optimization that significantly improves performance:
-
-**Traditional Flow (3 queries):**
-1. `exists` - check if resource exists
-2. `statecheck` - validate resource configuration
-3. `exports` - extract variables for dependent resources
-
-**Optimized Flow (1 query in happy path):**
-1. **Try `exports` first** - if this succeeds, it validates existence, state, and extracts variables in one operation
-2. **Fallback to traditional flow** only if exports fails
-
-**Performance Benefits:**
-- Up to **66% reduction** in API calls for existing, correctly configured resources
-- **2-3x faster** deployments in typical scenarios
-- Maintains full validation integrity and backward compatibility
-
-**Best Practice:** Design your `exports` queries to include the validation logic from `statecheck` queries to maximize the benefits of this optimization.
-
-## Usage
-
-
-
-Once installed, use the `build`, `test`, or `teardown` commands as shown here:
-
-```
-stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
-stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
-stackql-deploy test prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
-stackql-deploy teardown prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
-```
-
-> **Note:** `teardown` deprovisions resources in reverse order to creation
-
-Additional options include:
-
-- `--dry-run`: perform a dry run of the stack operations.
-- `--on-failure=rollback`: action on failure: rollback, ignore or error.
-- `--env-file=.env`: specify an environment variable file.
-- `-e KEY=value`: pass additional environment variables.
-- `--log-level`: logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL), defaults to INFO.
-
-Use `stackql-deploy info` to show information about the package and environment, for example:
-
-```bash
-$ stackql-deploy info
-stackql-deploy CLI
- Version: 1.7.7
-
-StackQL Library
- Version: v0.5.748
- pystackql Version: 3.7.0
- Platform: Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12
- Binary Path: `/mnt/c/LocalGitRepos/stackql/stackql-deploy/stackql`
-
-Installed Providers
- aws: v24.07.00246
- azure: v23.03.00121
- google: v24.09.00251
-```
-
-Use the `--help` option to see more information about the commands and options available:
-
-```
-stackql-deploy --help
-```
-
-### Tab Completion
-
-**stackql-deploy** supports tab completion for commands and options across multiple shells. To enable tab completion:
-
-```bash
-eval "$(stackql-deploy completion bash)" # activate now
-stackql-deploy completion bash --install # install permanently
-stackql-deploy completion # auto-detect shell
-```
-
-## Building and Testing Locally
-
-To get started with **stackql-deploy**, install it locally using pip:
-
-```bash
-python3 -m venv venv
-source venv/bin/activate
-pip install -e .
-# ...
-deactivate
-rm -rf venv/
-```
-
-### To Remove the Locally Installed Package
-
-```
-pip uninstall stackql-deploy
-pip cache purge
-```
-
-## Building and Deploying to PyPI
-
-To distribute **stackql-deploy** on PyPI, you'll need to ensure that you have all required files set up correctly in your project directory. This typically includes your `setup.py`, `README.rst`, `LICENSE`, and any other necessary files.
-
-### Building the Package
-
-First, ensure you have the latest versions of `setuptools` and `wheel` installed:
-
-```bash
-python3 -m venv venv
-source venv/bin/activate
-# pip install --upgrade setuptools wheel
-pip install --upgrade build
-```
-
-Then, navigate to your project root directory and build the distribution files:
-
-```bash
-rm dist/stackql_deploy*
-python3 -m build
-# or
-# python3 setup.py sdist bdist_wheel
-```
-
-This command generates distribution packages in the `dist/` directory.
-
-### Uploading the Package to PyPI
-
-To upload the package to PyPI, you'll need to use `twine`, a utility for publishing Python packages. First, install `twine`:
-
-```
-pip install twine
-```
-
-Then, use `twine` to upload all of the archives under `dist/`:
-
-```
-twine upload --config-file .pypirc dist/*
-```
-
-### Building the Docs
-
-Navigate to your `docs` directory and build the Sphinx documentation:
-
-```
-cd docs
-make html
-```
-
-## Code Linting
-
-To maintain code quality and consistency, we use `ruff` as the linter for this project. `ruff` offers fast performance and a comprehensive set of linting rules suitable for `stackql-deploy`. You can run the lint check as follows:
-
-```bash
-ruff check .
-```
-
-Note: If you need to install ruff, you can do so with `pip install ruff`.
-
-## Contributing
-
-Contributions are welcome and encouraged.
-
-## License
-
-Distributed under the MIT License. See [`LICENSE`](https://github.com/stackql/stackql-deploy/blob/main/LICENSE) for more information.
-
-## Contact
-
-Get in touch with us via Twitter at [**@stackql**][twitter], email us at [**info@stackql.io**](info@stackql.io) or start a conversation using [**discussions**][discussions].
+
+
+[logo]: https://stackql.io/img/stackql-logo-bold.png "stackql logo"
+[deploylogo]: https://stackql.io/img/stackql-deploy-logo.png "stackql-deploy logo"
+[stackqlrepo]: https://github.com/stackql/stackql
+[homepage]: https://stackql.io/
+[docs]: https://stackql.io/docs
+[blog]: https://stackql.io/blog
+[registry]: https://github.com/stackql/stackql-provider-registry
+
+[pypi]: https://pypi.org/project/stackql-deploy/
+
+
+
+[badge1]: https://img.shields.io/badge/platform-windows%20macos%20linux-brightgreen "Platforms"
+[badge2]: https://img.shields.io/pypi/v/stackql-deploy "PyPi Version"
+[badge3]: https://img.shields.io/pypi/dm/stackql-deploy "PyPi Downloads"
+[badge4]: https://img.shields.io/github/license/stackql/stackql "License"
+
+
+
+[discussions]: https://github.com/orgs/stackql/discussions
+[issues]: https://github.com/stackql/stackql-deploy/issues/new/choose
+
+
+
+[twitter]: https://twitter.com/stackql
+
+> [!IMPORTANT]
+> **This repository is archived.** The Python implementation of `stackql-deploy` has been superseded by a full Rust rewrite released as version 2.x. The project has moved; nothing has been deleted, and all existing functionality is available in the new version.
+>
+> **Go here instead:**
+>
+> - **GitHub** - https://github.com/stackql/stackql-deploy-rs
+> - **crates.io** - https://crates.io/crates/stackql-deploy
+> - **Docs and website** - https://stackql-deploy.io/
+>
+> Install the new version with:
+> ```bash
+> cargo install stackql-deploy
+> ```
+> The `.iql` resource files and `stackql_manifest.yml` format are compatible with v2.x. See the [new repo](https://github.com/stackql/stackql-deploy-rs) for migration notes.
+
+---
+
+## Archive Notice
+
+What follows is the original README for the Python (v1.x) implementation, preserved for reference. This package on PyPI will no longer receive updates.
+
+---
+
+
+
+
+[![logo]][stackqlrepo]
+![badge1]
+![badge2]
+![badge3]
+![badge4]
+
+
+
+
+### Model driven resource provisioning and deployment framework using StackQL.
+
+
+
+[**PyPi**][pypi]
+[**Raise an Issue**][issues]
+
+
+
+
+## About The Project
+
+[**`stackql-deploy`**][pypi] is an open-source command-line utility which implements a declarative, model-driven framework to deploy and manage multi-cloud stacks using [**`stackql`**][stackqlrepo]. [**`stackql-deploy`**][pypi] is distributed as a Python package and used as a CLI tool. To get started:
+
+
+```bash
+pip install stackql-deploy
+```
+
+> **Note for macOS users**
+> To install `stackql-deploy` in a virtual environment (which may be necessary on **macOS**), use the following:
+>
+> ```bash
+> python3 -m venv myenv
+> source myenv/bin/activate
+> pip install stackql-deploy
+> ```
+
+## About StackQL
+
+StackQL is a utility which allows you to query and interact with cloud and SaaS resources in real time using SQL grammar. StackQL supports a full set of SQL query/DML grammar, including `JOIN`, `UNION`, and subquery functionality, and supports mutation operations on cloud and SaaS resources such as `create`, `update` and `delete`, implemented as `INSERT`, `UPDATE` and `DELETE` respectively. StackQL also supports grammar for performing lifecycle operations such as starting or stopping a VM using the `EXEC` statement.
+
+StackQL provider definitions are defined in plaintext OpenAPI extensions to the providers specification. These definitions are then used to generate the SQL schema and the API client. The source for the provider definitions are stored in the [**StackQL Registry**][registry].
+
+## How it works
+
+A **`stackql-deploy`** project is a directory containing StackQL scripts with a manifest file at the root of the directory, for example:
+
+```
+├── example_stack
+│ ├── resources
+│ │ └── monitor_resource_group.iql
+│ ├── stackql_manifest.yml
+```
+
+The `stackql_manifest.yml` file defines the resources in the stack along with their properties, which can be scoped by environment, for example:
+
+```yaml
+version: 1
+name: example_stack
+description: oss activity monitor stack
+providers:
+ - azure
+globals:
+ - name: subscription_id
+ description: azure subscription id
+ value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
+ - name: location
+ value: eastus
+ - name: resource_group_name_base
+ value: "activity-monitor"
+resources:
+ - name: monitor_resource_group
+ description: azure resource group for activity monitor
+ props:
+ - name: resource_group_name
+ description: azure resource group name
+ value: "{{ globals.resource_group_name_base }}-{{ globals.stack_env }}"
+ # OR YOU CAN DO...
+ # values:
+ # prd:
+ # value: "activity-monitor-prd"
+ # sit:
+ # value: "activity-monitor-sit"
+ # dev:
+ # value: "activity-monitor-dev"
+```
+
+> Use `stackql-deploy init {stack_name}` to create a project directory with sample files.
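The `{{ ... }}` placeholders in the manifest are Jinja-style template variables, resolved at run time from `globals`, per-environment values, and built-ins such as `stack_env`. The substitution can be sketched as follows (a minimal regex-based stand-in for the real template renderer, for illustration only):

```python
import re

def render(template: str, context: dict) -> str:
    """Replace {{ dotted.name }} placeholders with values from a nested context."""
    def resolve(match):
        path = match.group(1).strip().split(".")
        value = context
        for key in path:
            value = value[key]  # a KeyError here means an undefined variable
        return str(value)
    return re.sub(r"\{\{\s*([^}]+?)\s*\}\}", resolve, template)

ctx = {"globals": {"resource_group_name_base": "activity-monitor", "stack_env": "prd"}}
print(render("{{ globals.resource_group_name_base }}-{{ globals.stack_env }}", ctx))
# activity-monitor-prd
```

With the manifest above, the `resource_group_name` property resolves to `activity-monitor-prd`, `activity-monitor-sit`, or `activity-monitor-dev` depending on the target environment.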
+
+Deployment orchestration using `stackql-deploy` includes:
+
+- **_pre-flight_** checks, which are StackQL queries that check for the existence or current configuration state of a resource
+- **_deployment_** scripts, which are StackQL queries to create or update resources (or delete them in the case of de-provisioning)
+- **_post-deployment_** tests, which are StackQL queries to confirm that resources were deployed and have the desired state
+
+**Performance Optimization**: `stackql-deploy` uses an intelligent query optimization strategy which is described here:
+
+```mermaid
+graph TB
+ A[Start] --> B{foreach\nresource}
+ B --> C{exports query\navailable?}
+ C -- Yes --> D[try exports first\n🔄 optimal path]
+ C -- No --> E[exists\ncheck]
+ D --> F{exports\nsuccess?}
+ F -- Yes --> G[✅ validated with\n1 query only]
+ F -- No --> E
+ E --> H{resource\nexists?}
+ H -- Yes --> I[run update\nor createorupdate query]
+ H -- No --> J[run create\nor createorupdate query]
+ I --> K[run statecheck check]
+ J --> K
+ G --> L[reuse exports result]
+ K --> M{End}
+ L --> M
+```
+
+### `INSERT`, `UPDATE`, `DELETE` queries
+
+Mutation operations are defined as `.iql` files in the `resources` directory. These are templates that are rendered with properties or environment context variables at run time, for example:
+
+```sql
+/*+ create */
+INSERT INTO azure.resources.resource_groups(
+ resourceGroupName,
+ subscriptionId,
+ data__location
+)
+SELECT
+ '{{ resource_group_name }}',
+ '{{ subscription_id }}',
+ '{{ location }}'
+
+/*+ update */
+UPDATE azure.resources.resource_groups
+SET data__location = '{{ location }}'
+WHERE resourceGroupName = '{{ resource_group_name }}'
+ AND subscriptionId = '{{ subscription_id }}'
+
+/*+ delete */
+DELETE FROM azure.resources.resource_groups
+WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'
+```
+
+### Test Queries
+
+Test files are defined as `.iql` files in the `resources` directory. These files define the pre-flight and post-deployment checks to be performed, for example:
+
+```sql
+/*+ exists */
+SELECT COUNT(*) as count FROM azure.resources.resource_groups
+WHERE subscriptionId = '{{ subscription_id }}'
+AND resourceGroupName = '{{ resource_group_name }}'
+
+/*+ statecheck, retries=5, retry_delay=5 */
+SELECT COUNT(*) as count FROM azure.resources.resource_groups
+WHERE subscriptionId = '{{ subscription_id }}'
+AND resourceGroupName = '{{ resource_group_name }}'
+AND location = '{{ location }}'
+AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'
+
+/*+ exports */
+SELECT resourceGroupName, location, JSON_EXTRACT(properties, '$.provisioningState') as state
+FROM azure.resources.resource_groups
+WHERE subscriptionId = '{{ subscription_id }}'
+AND resourceGroupName = '{{ resource_group_name }}'
+```
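The options on the `statecheck` anchor (`retries=5, retry_delay=5`) tell the runner to poll the check until it passes, which accommodates resources that take time to reach a provisioned state. Conceptually (a simplified sketch, not the actual `stackql-deploy` implementation):

```python
import time

def run_with_retries(check, retries=5, retry_delay=5):
    """Re-run a state check until it passes or the retries are exhausted."""
    for attempt in range(1, retries + 1):
        if check():  # e.g. the statecheck query returning count == 1
            return True
        if attempt < retries:
            time.sleep(retry_delay)  # wait before polling again
    return False
```

With the values above, a slow resource gets up to five checks, roughly 25 seconds of polling, before the deployment is marked as failed.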
+
+### Query Optimization
+
+`stackql-deploy` implements intelligent query optimization that significantly improves performance:
+
+**Traditional Flow (3 queries):**
+1. `exists` - check if resource exists
+2. `statecheck` - validate resource configuration
+3. `exports` - extract variables for dependent resources
+
+**Optimized Flow (1 query in happy path):**
+1. **Try `exports` first** - if this succeeds, it validates existence, state, and extracts variables in one operation
+2. **Fallback to traditional flow** only if exports fails
+
+**Performance Benefits:**
+- Up to **66% reduction** in API calls for existing, correctly configured resources
+- **2-3x faster** deployments in typical scenarios
+- Maintains full validation integrity and backward compatibility
+
+**Best Practice:** Design your `exports` queries to include the validation logic from `statecheck` queries to maximize the benefits of this optimization.
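The optimized flow described above can be sketched as follows (the function names and the query-map structure are illustrative, not the actual `stackql-deploy` internals):

```python
def provision(resource, queries):
    """Provision one resource using as few queries as possible.

    `queries` maps anchor names ('exports', 'exists', 'create', 'update',
    'statecheck') to callables that run the corresponding .iql query.
    """
    # Optimal path: a successful exports query proves the resource exists
    # in the desired state and returns its variables in one round trip.
    if "exports" in queries:
        exports = queries["exports"](resource)
        if exports:
            return exports  # validated with a single query

    # Fallback: traditional exists -> create/update -> statecheck flow.
    if queries["exists"](resource):
        queries["update"](resource)
    else:
        queries["create"](resource)
    if not queries["statecheck"](resource):
        raise RuntimeError(f"statecheck failed for {resource}")
    return queries["exports"](resource) if "exports" in queries else {}
```

For an existing, correctly configured resource the happy path issues one query instead of three, which is where the quoted reduction in API calls comes from.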
+
+## Usage
+
+Once installed, use the `build`, `test`, or `teardown` commands as shown here:
+
+```
+stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
+stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
+stackql-deploy test prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
+stackql-deploy teardown prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
+```
+
+> **Note:** `teardown` deprovisions resources in the reverse order of their creation
+
+Additional options include:
+
+- `--dry-run`: perform a dry run of the stack operations.
+- `--on-failure=rollback`: action to take on failure: `rollback`, `ignore`, or `error`.
+- `--env-file=.env`: specify an environment variable file.
+- `-e KEY=value`: pass additional environment variables.
+- `--log-level`: logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL), defaults to INFO.
+
+Use `stackql-deploy info` to show information about the package and environment, for example:
+
+```bash
+$ stackql-deploy info
+stackql-deploy CLI
+ Version: 1.7.7
+
+StackQL Library
+ Version: v0.5.748
+ pystackql Version: 3.7.0
+ Platform: Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12
+ Binary Path: `/mnt/c/LocalGitRepos/stackql/stackql-deploy/stackql`
+
+Installed Providers
+ aws: v24.07.00246
+ azure: v23.03.00121
+ google: v24.09.00251
+```
+
+Use the `--help` option to see more information about the commands and options available:
+
+```
+stackql-deploy --help
+```
+
+### Tab Completion
+
+**stackql-deploy** supports tab completion for commands and options across multiple shells. To enable tab completion:
+
+```bash
+eval "$(stackql-deploy completion bash)" # activate now
+stackql-deploy completion bash --install # install permanently
+stackql-deploy completion # auto-detect shell
+```
+
+## Building and Testing Locally
+
+To get started with **stackql-deploy**, install it locally using pip:
+
+```bash
+python3 -m venv venv
+source venv/bin/activate
+pip install -e .
+# ...
+deactivate
+rm -rf venv/
+```
+
+### To Remove the Locally Installed Package
+
+```
+pip uninstall stackql-deploy
+pip cache purge
+```
+
+## Building and Deploying to PyPI
+
+To distribute **stackql-deploy** on PyPI, you'll need to ensure that you have all required files set up correctly in your project directory. This typically includes your `setup.py`, `README.rst`, `LICENSE`, and any other necessary files.
+
+### Building the Package
+
+First, ensure you have the latest versions of `setuptools` and `wheel` installed:
+
+```bash
+python3 -m venv venv
+source venv/bin/activate
+# pip install --upgrade setuptools wheel
+pip install --upgrade build
+```
+
+Then, navigate to your project root directory and build the distribution files:
+
+```bash
+rm dist/stackql_deploy*
+python3 -m build
+# or
+# python3 setup.py sdist bdist_wheel
+```
+
+This command generates distribution packages in the `dist/` directory.
+
+### Uploading the Package to PyPI
+
+To upload the package to PyPI, you'll need to use `twine`, a utility for publishing Python packages. First, install `twine`:
+
+```
+pip install twine
+```
+
+Then, use `twine` to upload all of the archives under `dist/`:
+
+```
+twine upload --config-file .pypirc dist/*
+```
+
+### Building the Docs
+
+Navigate to your `docs` directory and build the Sphinx documentation:
+
+```
+cd docs
+make html
+```
+
+## Code Linting
+
+To maintain code quality and consistency, we use `ruff` as the linter for this project. `ruff` offers fast performance and a comprehensive set of linting rules suitable for `stackql-deploy`. You can run the lint check as follows:
+
+```bash
+ruff check .
+```
+
+Note: If you need to install ruff, you can do so with `pip install ruff`.
+
+## Contributing
+
+Contributions are welcome and encouraged.
+
+## License
+
+Distributed under the MIT License. See [`LICENSE`](https://github.com/stackql/stackql-deploy/blob/main/LICENSE) for more information.
+
+## Contact
+
+Get in touch with us via Twitter at [**@stackql**][twitter], email us at [**info@stackql.io**](mailto:info@stackql.io), or start a conversation using [**discussions**][discussions].
diff --git a/README.rst b/README.rst
index f74d8aa..f64568c 100644
--- a/README.rst
+++ b/README.rst
@@ -1,324 +1,354 @@
-.. image:: https://stackql.io/img/stackql-deploy-logo.png
- :alt: "stackql-deploy logo"
- :target: https://github.com/stackql/stackql
- :align: center
-
-==========================================================================
-Model driven resource provisioning and deployment framework using StackQL.
-==========================================================================
-
-.. image:: https://img.shields.io/pypi/v/stackql-deploy
- :target: https://pypi.org/project/stackql-deploy/
- :alt: PyPI
-
-.. image:: https://img.shields.io/pypi/dm/stackql-deploy
- :target: https://pypi.org/project/stackql-deploy/
- :alt: PyPI - Downloads
-
-.. image:: https://img.shields.io/badge/documentation-%F0%9F%93%96-brightgreen.svg
- :target: https://stackql-deploy.io/docs
- :alt: Documentation
-
-==============
-
-**stackql-deploy** is a multi-cloud Infrastructure as Code (IaC) framework using `stackql`_, inspired by dbt (data build tool), which manages data transformation workflows in analytics engineering by treating SQL scripts as models that can be built, tested, and materialized incrementally. You can create a similar framework for infrastructure provisioning with StackQL. The goal is to treat infrastructure-as-code (IaC) queries as models that can be deployed, managed, and interconnected.
-
-This ELT/model-based framework to IaC allows you to provision, test, update and teardown multi-cloud stacks similar to how dbt manages data transformation projects, with the benefits of version control, peer review, and automation. This approach enables you to deploy complex, dependent infrastructure components in a reliable and repeatable manner.
-
-The use of StackQL simplifies the interaction with cloud resources by using SQL-like syntax, making it easier to define and execute complex cloud management operations. Resources are provisioned with ``INSERT`` statements and tests are structured around ``SELECT`` statements.
-
-Features include:
-
-- Dynamic state determination (eliminating the need for state files)
-- Simple flow control with rollback capabilities
-- Single code base for multiple target environments
-- SQL-based definitions for resources and tests
-
-How stackql-deploy Works
-------------------------
-
-**stackql-deploy** orchestrates cloud resource provisioning by parsing SQL-like definitions. It determines the necessity of creating or updating resources based on exists checks, and ensures the creation and correct desired configuration through post-deployment verifications.
-
-.. image:: https://stackql.io/img/blog/stackql-deploy.png
- :alt: "stackql-deploy"
- :target: https://github.com/stackql/stackql
-
-Installing from PyPI
---------------------
-
-To install **stackql-deploy** directly from PyPI, run the following command:
-
-.. code-block:: bash
-
- pip install stackql-deploy
-
-This will install the latest version of **stackql-deploy** and its dependencies from the Python Package Index.
-
-.. note::
-
- **Note for macOS users**: to install `stackql-deploy` in a virtual environment (which may be necessary on **macOS**), use the following:
-
- .. code-block:: bash
-
- python3 -m venv myenv
- source myenv/bin/activate
- pip install stackql-deploy
-
-Running stackql-deploy
-----------------------
-
-Once installed, use the `build`, `test`, or `teardown` commands as shown here:
-
-.. code-block:: none
-
- stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
- stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
- stackql-deploy test prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
- stackql-deploy teardown prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
-
-.. note::
- ``teardown`` deprovisions resources in reverse order to creation
-
-additional options include:
-
-- ``--dry-run``: perform a dry run of the stack operations.
-- ``--on-failure=rollback``: action on failure: rollback, ignore or error.
-- ``--env-file=.env``: specify an environment variable file.
-- ``-e KEY=value```: pass additional environment variables.
-- ``--log-level`` : logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL), defaults to INFO.
-
-use ``stackql-deploy info`` to show information about the package and environment, for example
-
-.. code-block:: none
-
- $ stackql-deploy info
- stackql-deploy version: 1.0.0
- pystackql version : 3.5.4
- stackql version : v0.5.612
- stackql binary path : /mnt/c/LocalGitRepos/stackql/stackql-deploy/stackql
- platform : Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12
-
-Use the ``--help`` option to see more information about the commands and options available:
-
-.. code-block:: none
-
- stackql-deploy --help
-
-Project Structure
------------------
-
-**stackql-deploy** uses a modular structure where each component of the infrastructure is defined in separate files, allowing for clear separation of concerns and easy management. This example is based on a stack named ``example_stack``, with a resource named ``monitor_resource_group``.
-
-::
-
- ├── example_stack
- │ ├── stackql_manifest.yml
- │ └── resources
- │ └── monitor_resource_group.iql
-
-.. note::
- use the ``init`` command to create a new project structure with sample files, for example ``stackql-deploy init example_stack``
-
-Manifest File
--------------
-
-- **Manifest File**: The ``stackql_manifest.yml`` is used to define your stack and manage dependencies between infrastructure components. This file defines which resources need to be provisioned before others and parameterizes resources based on environment variables or other configurations.
-
-- **Providers**: List the cloud service providers that your stack will interact with. Each provider specified in the list will be initialized and made ready for use with the stack.
-
- .. code-block:: yaml
-
- providers:
- - azure
- - github
-
-- **Globals**: Defines a set of global variables that can be used across the entire stack configuration. These variables can hold values related to environment settings, default configurations, or any commonly used data.
-
- .. code-block:: yaml
-
- globals:
- - name: subscription_id
- description: azure subscription id
- value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
- - name: location
- value: eastus
- ... (additional globals)
-
-- **Resources**: Describes all the infrastructure components, such as networks, compute instances, databases, etc., that make up your stack. Here you can define the resources, their properties, and any dependencies between them.
-
- .. code-block:: yaml
-
- resources:
- - name: resource_group
- description: azure resource group for activity monitor app
- - name: storage_account
- description: azure storage account for activity monitor app
- ... (additional properties and exports)
- ...
-
- Each resource can have the following attributes:
-
- - **Name**: A unique identifier for the resource within the stack.
- - **Description**: A brief explanation of the resource's purpose and functionality.
- - **Type**: (Optional) Specifies the kind of resource (e.g., 'resource', 'query', 'script').
- - **Props**: (Optional) Lists the properties of the resource that define its configuration.
- - **Exports**: (Optional) Variables that are exported by this resource which can be used by other resources.
- - **Protected**: (Optional) A list of sensitive information that should not be logged or exposed outside secure contexts.
-
-- **Scripts**: If your stack involves the execution of scripts for setup, data manipulation, or deployment actions, they are defined under the resources with a type of 'script'.
-
- .. code-block:: yaml
-
- - name: install_dependencies
- type: script
- run: |
- pip install pynacl
- ...
-
- The script's execution output can be captured and used within the stack or for further processing.
-
-- **Integration with External Systems**: For stacks that interact with external services like GitHub, special resource types like 'query' can be used to fetch data from these services and use it within your deployment.
-
- .. code-block:: yaml
-
- - name: get_github_public_key
- type: query
- ... (additional properties and exports)
-
- This can be useful for dynamic configurations based on external state or metadata.
-
-Resource and Test SQL Files
-----------------------------
-
-These files define the SQL-like commands for creating, updating, and testing the deployment of resources.
-
-.. note::
- The SQL files use special **anchors** to indicate operations such as create, update, delete for resources,
- and exists or post-deployment checks for queries. For detailed explanations of these anchors, refer to the
- `Resource SQL Anchors`_ and `Query SQL Anchors`_ sections.
-
-**Resource SQL (resources/monitor_resource_group.iql):**
-
-.. code-block:: sql
-
- /*+ create */
- INSERT INTO azure.resources.resource_groups(
- resourceGroupName,
- subscriptionId,
- data__location
- )
- SELECT
- '{{ resource_group_name }}',
- '{{ subscription_id }}',
- '{{ location }}'
-
- /*+ update */
- UPDATE azure.resources.resource_groups
- SET data__location = '{{ location }}'
- WHERE resourceGroupName = '{{ resource_group_name }}'
- AND subscriptionId = '{{ subscription_id }}'
-
- /*+ delete */
- DELETE FROM azure.resources.resource_groups
- WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'
-
-**Test SQL (resources/monitor_resource_group.iql):**
-
-.. code-block:: sql
-
- /*+ exists */
- SELECT COUNT(*) as count FROM azure.storage.accounts
- WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
- AND subscriptionId = '{{ subscription_id }}'
- AND resourceGroupName = '{{ resource_group_name }}'
-
- /*+ statecheck, retries=5, retry_delay=5 */
- SELECT
- COUNT(*) as count
- FROM azure.storage.accounts
- WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
- AND subscriptionId = '{{ subscription_id }}'
- AND resourceGroupName = '{{ resource_group_name }}'
- AND kind = '{{ storage_kind }}'
- AND JSON_EXTRACT(sku, '$.name') = 'Standard_LRS'
- AND JSON_EXTRACT(sku, '$.tier') = 'Standard'
-
- /*+ exports, retries=5, retry_delay=5 */
- select json_extract(keys, '$[0].value') as storage_account_key
- from azure.storage.accounts_keys
- WHERE resourceGroupName = '{{ resource_group_name }}'
- AND subscriptionId = '{{ subscription_id }}'
- AND accountName = '{{ storage_account_name }}'
-
-
-Resource SQL Anchors
---------------------
-
-Resource SQL files use special anchor comments as directives for the ``stackql-deploy`` tool to indicate the intended operations:
-
-- **/*+ create */**
- This anchor precedes SQL ``INSERT`` statements for creating new resources.
-
- .. code-block:: sql
-
- /*+ create */
- INSERT INTO azure.resources.resource_groups(
- resourceGroupName,
- subscriptionId,
- data__location
- )
- SELECT
- '{{ resource_group_name }}',
- '{{ subscription_id }}',
- '{{ location }}'
-
-- **/*+ createorupdate */**
- Specifies an operation to either create a new resource or update an existing one.
-
-- **/*+ update */**
- Marks SQL ``UPDATE`` statements intended to modify existing resources.
-
-- **/*+ delete */**
- Tags SQL ``DELETE`` statements for removing resources from the environment.
-
-Query SQL Anchors
------------------
-
-Query SQL files contain SQL statements for testing and validation with the following anchors:
-
-- **/*+ exists */**
- Used to perform initial checks before a deployment.
-
- .. code-block:: sql
-
- /*+ exists */
- SELECT COUNT(*) as count FROM azure.resources.resource_groups
- WHERE subscriptionId = '{{ subscription_id }}'
- AND resourceGroupName = '{{ resource_group_name }}'
-
-- **/*+ statecheck, retries=5, retry_delay=5 */**
- Post-deployment checks to confirm the success of the operation, with optional ``retries`` and ``retry_delay`` parameters.
-
- .. code-block:: sql
-
- /*+ statecheck, retries=5, retry_delay=5 */
- SELECT COUNT(*) as count FROM azure.resources.resource_groups
- WHERE subscriptionId = '{{ subscription_id }}'
- AND resourceGroupName = '{{ resource_group_name }}'
- AND location = '{{ location }}'
- AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'
-
-- **/*+ exports, retries=5, retry_delay=5 */**
- Extracts and exports information after a deployment. Similar to post-deploy checks but specifically for exporting data.
-
-
-.. note::
- The following parameters are used to control the behavior of retry mechanisms in SQL operations:
-
- - **``retries``** (optional, integer): Defines the number of times a query should be retried upon failure.
- - **``retry_delay``** (optional, integer): Sets the delay in seconds between each retry attempt.
-
-
-**stackql-deploy** simplifies cloud resource management by treating infrastructure as flexible, dynamically assessed code.
-
-.. _stackql: https://github.com/stackql/stackql
+.. important::
+
+ **This repository is archived.**
+
+ The Python implementation of ``stackql-deploy`` has been superseded by a full Rust rewrite
+ released as version 2.x. The project has moved - nothing has been deleted, and all existing
+ functionality is available in the new version.
+
+ **Go here instead:**
+
+ - **GitHub**: https://github.com/stackql/stackql-deploy-rs
+ - **crates.io**: https://crates.io/crates/stackql-deploy
+ - **Docs and website**: https://stackql-deploy.io/
+
+ Install the new version with:
+
+ .. code-block:: bash
+
+ cargo install stackql-deploy
+
+   The ``.iql`` resource files and ``stackql_manifest.yml`` format are compatible with v2.x.
+   See the `new repo <https://github.com/stackql/stackql-deploy-rs>`_ for migration notes.
+
+----
+
+Archive notice: what follows is the original README for the Python (v1.x) implementation,
+preserved for reference. This package on PyPI will no longer receive updates.
+
+----
+
+.. image:: https://stackql.io/img/stackql-deploy-logo.png
+ :alt: "stackql-deploy logo"
+ :target: https://github.com/stackql/stackql
+ :align: center
+
+==========================================================================
+Model driven resource provisioning and deployment framework using StackQL.
+==========================================================================
+
+.. image:: https://img.shields.io/pypi/v/stackql-deploy
+ :target: https://pypi.org/project/stackql-deploy/
+ :alt: PyPI
+
+.. image:: https://img.shields.io/pypi/dm/stackql-deploy
+ :target: https://pypi.org/project/stackql-deploy/
+ :alt: PyPI - Downloads
+
+.. image:: https://img.shields.io/badge/documentation-%F0%9F%93%96-brightgreen.svg
+ :target: https://stackql-deploy.io/docs
+ :alt: Documentation
+
+----
+
+**stackql-deploy** is a multi-cloud Infrastructure as Code (IaC) framework using `stackql`_, inspired by dbt (data build tool), which manages data transformation workflows in analytics engineering by treating SQL scripts as models that can be built, tested, and materialized incrementally. **stackql-deploy** applies the same idea to infrastructure provisioning: IaC queries are treated as models that can be deployed, managed, and interconnected.
+
+This model-based approach to IaC allows you to provision, test, update and tear down multi-cloud stacks in the same way dbt manages data transformation projects, with the benefits of version control, peer review, and automation. This enables you to deploy complex, dependent infrastructure components in a reliable and repeatable manner.
+
+The use of StackQL simplifies the interaction with cloud resources by using SQL-like syntax, making it easier to define and execute complex cloud management operations. Resources are provisioned with ``INSERT`` statements and tests are structured around ``SELECT`` statements.
+
+Features include:
+
+- Dynamic state determination (eliminating the need for state files)
+- Simple flow control with rollback capabilities
+- Single code base for multiple target environments
+- SQL-based definitions for resources and tests
+
+How stackql-deploy Works
+------------------------
+
+**stackql-deploy** orchestrates cloud resource provisioning by parsing SQL-like definitions. It uses exists checks to decide whether a resource needs to be created or updated, and confirms that the desired configuration was reached through post-deployment state checks.
+
+.. image:: https://stackql.io/img/blog/stackql-deploy.png
+ :alt: "stackql-deploy"
+ :target: https://github.com/stackql/stackql
+
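The per-resource flow described above can be sketched in Python. This is an illustrative outline only, not the tool's actual implementation; ``run_query`` is a stand-in for executing the correspondingly anchored query (the anchors are covered later in this README) and returning its count:

```python
# Illustrative sketch of the per-resource deployment flow: an exists check
# gates creation, a statecheck decides whether an update is needed, and a
# final statecheck verifies the desired configuration post-deployment.
def deploy_resource(resource, run_query):
    """run_query(anchor) returns the count for the anchored query, or None."""
    exists = (run_query("exists") or 0) >= 1
    if not exists:
        run_query("create")          # provision via INSERT
    elif (run_query("statecheck") or 0) < 1:
        run_query("update")          # reconcile drift via UPDATE
    # post-deployment verification of the desired state
    if (run_query("statecheck") or 0) < 1:
        raise RuntimeError(f"{resource} failed post-deploy state check")
    return resource
```

This mirrors the dynamic state determination mentioned in the features list: state is assessed by querying the provider, not read from a state file.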
+Installing from PyPI
+--------------------
+
+To install **stackql-deploy** directly from PyPI, run the following command:
+
+.. code-block:: bash
+
+ pip install stackql-deploy
+
+This will install the latest version of **stackql-deploy** and its dependencies from the Python Package Index.
+
+.. note::
+
+   **Note for macOS users**: to install ``stackql-deploy`` in a virtual environment (which may be necessary on macOS), use the following:
+
+   .. code-block:: bash
+
+      python3 -m venv myenv
+      source myenv/bin/activate
+      pip install stackql-deploy
+
+Running stackql-deploy
+----------------------
+
+Once installed, use the ``build``, ``test``, or ``teardown`` commands as shown here:
+
+.. code-block:: none
+
+ stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000 --dry-run
+ stackql-deploy build prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
+ stackql-deploy test prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
+ stackql-deploy teardown prd example_stack -e AZURE_SUBSCRIPTION_ID 00000000-0000-0000-0000-000000000000
+
+.. note::
+   ``teardown`` deprovisions resources in the reverse order of creation
+
+Additional options include:
+
+- ``--dry-run``: perform a dry run of the stack operations.
+- ``--on-failure=rollback``: action on failure: rollback, ignore or error.
+- ``--env-file=.env``: specify an environment variable file.
+- ``-e KEY=value``: pass additional environment variables.
+- ``--log-level``: logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL); defaults to INFO.
+
+Use ``stackql-deploy info`` to show information about the package and environment, for example:
+
+.. code-block:: none
+
+ $ stackql-deploy info
+ stackql-deploy version: 1.0.0
+ pystackql version : 3.5.4
+ stackql version : v0.5.612
+ stackql binary path : /mnt/c/LocalGitRepos/stackql/stackql-deploy/stackql
+ platform : Linux x86_64 (Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35), Python 3.10.12
+
+Use the ``--help`` option to see more information about the commands and options available:
+
+.. code-block:: none
+
+ stackql-deploy --help
+
+Project Structure
+-----------------
+
+**stackql-deploy** uses a modular structure where each component of the infrastructure is defined in separate files, allowing for clear separation of concerns and easy management. This example is based on a stack named ``example_stack``, with a resource named ``monitor_resource_group``.
+
+::
+
+ ├── example_stack
+ │ ├── stackql_manifest.yml
+ │ └── resources
+ │ └── monitor_resource_group.iql
+
+.. note::
+   Use the ``init`` command to create a new project structure with sample files, for example ``stackql-deploy init example_stack``
+
+Manifest File
+-------------
+
+- **Manifest File**: The ``stackql_manifest.yml`` is used to define your stack and manage dependencies between infrastructure components. This file defines which resources need to be provisioned before others and parameterizes resources based on environment variables or other configurations.
+
+- **Providers**: List the cloud service providers that your stack will interact with. Each provider specified in the list will be initialized and made ready for use with the stack.
+
+ .. code-block:: yaml
+
+ providers:
+ - azure
+ - github
+
+- **Globals**: Defines a set of global variables that can be used across the entire stack configuration. These variables can hold values related to environment settings, default configurations, or any commonly used data.
+
+ .. code-block:: yaml
+
+ globals:
+ - name: subscription_id
+ description: azure subscription id
+ value: "{{ vars.AZURE_SUBSCRIPTION_ID }}"
+ - name: location
+ value: eastus
+ ... (additional globals)
+
+- **Resources**: Describes all the infrastructure components, such as networks, compute instances, databases, etc., that make up your stack. Here you can define the resources, their properties, and any dependencies between them.
+
+ .. code-block:: yaml
+
+ resources:
+ - name: resource_group
+ description: azure resource group for activity monitor app
+ - name: storage_account
+ description: azure storage account for activity monitor app
+ ... (additional properties and exports)
+ ...
+
+ Each resource can have the following attributes:
+
+ - **Name**: A unique identifier for the resource within the stack.
+ - **Description**: A brief explanation of the resource's purpose and functionality.
+ - **Type**: (Optional) Specifies the kind of resource (e.g., 'resource', 'query', 'script').
+ - **Props**: (Optional) Lists the properties of the resource that define its configuration.
+ - **Exports**: (Optional) Variables that are exported by this resource which can be used by other resources.
+ - **Protected**: (Optional) A list of sensitive information that should not be logged or exposed outside secure contexts.
+
+- **Scripts**: If your stack involves executing scripts for setup, data manipulation, or deployment actions, they are defined under ``resources`` with a type of ``script``.
+
+ .. code-block:: yaml
+
+ - name: install_dependencies
+ type: script
+ run: |
+ pip install pynacl
+ ...
+
+ The script's execution output can be captured and used within the stack or for further processing.
+
+- **Integration with External Systems**: For stacks that interact with external services like GitHub, special resource types like 'query' can be used to fetch data from these services and use it within your deployment.
+
+ .. code-block:: yaml
+
+ - name: get_github_public_key
+ type: query
+ ... (additional properties and exports)
+
+ This can be useful for dynamic configurations based on external state or metadata.
+
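Globals such as ``subscription_id`` above are resolved from environment variables at deploy time. A rough sketch of that substitution follows; it is illustrative only, since the tool's actual templating is Jinja-style and considerably richer than this regex-based stand-in:

```python
import re

# Illustrative sketch of resolving globals whose values reference
# environment variables via "{{ vars.NAME }}" placeholders. Only simple
# variable references are handled here; the real engine is Jinja-style.
def resolve_globals(globals_list, env):
    resolved = {}
    for g in globals_list:
        value = re.sub(
            r"\{\{\s*vars\.(\w+)\s*\}\}",
            lambda m: env.get(m.group(1), ""),
            str(g.get("value", "")),
        )
        resolved[g["name"]] = value
    return resolved
```

Resolved globals (and resource exports) are then available for interpolation into the ``.iql`` queries of downstream resources.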
+Resource and Test SQL Files
+----------------------------
+
+These files define the SQL-like commands for creating, updating, and testing the deployment of resources.
+
+.. note::
+ The SQL files use special **anchors** to indicate operations such as create, update, delete for resources,
+ and exists or post-deployment checks for queries. For detailed explanations of these anchors, refer to the
+ `Resource SQL Anchors`_ and `Query SQL Anchors`_ sections.
+
+**Resource SQL (resources/monitor_resource_group.iql):**
+
+.. code-block:: sql
+
+ /*+ create */
+ INSERT INTO azure.resources.resource_groups(
+ resourceGroupName,
+ subscriptionId,
+ data__location
+ )
+ SELECT
+ '{{ resource_group_name }}',
+ '{{ subscription_id }}',
+ '{{ location }}'
+
+ /*+ update */
+ UPDATE azure.resources.resource_groups
+ SET data__location = '{{ location }}'
+ WHERE resourceGroupName = '{{ resource_group_name }}'
+ AND subscriptionId = '{{ subscription_id }}'
+
+ /*+ delete */
+ DELETE FROM azure.resources.resource_groups
+ WHERE resourceGroupName = '{{ resource_group_name }}' AND subscriptionId = '{{ subscription_id }}'
+
+**Test SQL (resources/monitor_resource_group.iql):**
+
+.. code-block:: sql
+
+ /*+ exists */
+ SELECT COUNT(*) as count FROM azure.storage.accounts
+ WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
+ AND subscriptionId = '{{ subscription_id }}'
+ AND resourceGroupName = '{{ resource_group_name }}'
+
+ /*+ statecheck, retries=5, retry_delay=5 */
+ SELECT
+ COUNT(*) as count
+ FROM azure.storage.accounts
+ WHERE SPLIT_PART(SPLIT_PART(JSON_EXTRACT(properties, '$.primaryEndpoints.blob'), '//', 2), '.', 1) = '{{ storage_account_name }}'
+ AND subscriptionId = '{{ subscription_id }}'
+ AND resourceGroupName = '{{ resource_group_name }}'
+ AND kind = '{{ storage_kind }}'
+ AND JSON_EXTRACT(sku, '$.name') = 'Standard_LRS'
+ AND JSON_EXTRACT(sku, '$.tier') = 'Standard'
+
+ /*+ exports, retries=5, retry_delay=5 */
+ select json_extract(keys, '$[0].value') as storage_account_key
+ from azure.storage.accounts_keys
+ WHERE resourceGroupName = '{{ resource_group_name }}'
+ AND subscriptionId = '{{ subscription_id }}'
+ AND accountName = '{{ storage_account_name }}'
+
+
+Resource SQL Anchors
+--------------------
+
+Resource SQL files use special anchor comments as directives for the ``stackql-deploy`` tool to indicate the intended operations:
+
+- **/*+ create */**
+ This anchor precedes SQL ``INSERT`` statements for creating new resources.
+
+ .. code-block:: sql
+
+ /*+ create */
+ INSERT INTO azure.resources.resource_groups(
+ resourceGroupName,
+ subscriptionId,
+ data__location
+ )
+ SELECT
+ '{{ resource_group_name }}',
+ '{{ subscription_id }}',
+ '{{ location }}'
+
+- **/*+ createorupdate */**
+ Specifies an operation to either create a new resource or update an existing one.
+
+- **/*+ update */**
+ Marks SQL ``UPDATE`` statements intended to modify existing resources.
+
+- **/*+ delete */**
+ Tags SQL ``DELETE`` statements for removing resources from the environment.
+
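Because these anchors (for both resource and query SQL) are ordinary SQL comments, an ``.iql`` file can be split into its anchored sections with a simple scan. The sketch below is illustrative only; the real parser also captures anchor options such as ``retries`` and ``retry_delay``:

```python
import re

# Minimal sketch of splitting an .iql file into anchored sections.
# Only the anchor name is captured; trailing options are ignored here.
ANCHOR = re.compile(r"/\*\+\s*(\w+)[^*]*\*/")

def split_anchored_queries(iql_text):
    sections = {}
    current = None
    for line in iql_text.splitlines():
        m = ANCHOR.match(line.strip())
        if m:
            current = m.group(1)
            sections[current] = []
        elif current:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}
```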
+Query SQL Anchors
+-----------------
+
+Query SQL files contain SQL statements for testing and validation with the following anchors:
+
+- **/*+ exists */**
+ Used to perform initial checks before a deployment.
+
+ .. code-block:: sql
+
+ /*+ exists */
+ SELECT COUNT(*) as count FROM azure.resources.resource_groups
+ WHERE subscriptionId = '{{ subscription_id }}'
+ AND resourceGroupName = '{{ resource_group_name }}'
+
+- **/*+ statecheck, retries=5, retry_delay=5 */**
+ Post-deployment checks to confirm the success of the operation, with optional ``retries`` and ``retry_delay`` parameters.
+
+ .. code-block:: sql
+
+ /*+ statecheck, retries=5, retry_delay=5 */
+ SELECT COUNT(*) as count FROM azure.resources.resource_groups
+ WHERE subscriptionId = '{{ subscription_id }}'
+ AND resourceGroupName = '{{ resource_group_name }}'
+ AND location = '{{ location }}'
+ AND JSON_EXTRACT(properties, '$.provisioningState') = 'Succeeded'
+
+- **/*+ exports, retries=5, retry_delay=5 */**
+ Extracts and exports information after a deployment. Similar to post-deploy checks but specifically for exporting data.
+
+
+.. note::
+   The following parameters control the retry behavior of query anchors:
+
+   - ``retries`` (optional, integer): the number of times a query should be retried upon failure.
+   - ``retry_delay`` (optional, integer): the delay in seconds between retry attempts.
+
+
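The retry semantics amount to re-running a check until it passes or the attempts are exhausted; a minimal sketch (illustrative, not the tool's internals):

```python
import time

# Illustrative retry loop: check() returns True on success; the final
# outcome is returned after at most `retries` attempts, sleeping
# `retry_delay` seconds between attempts.
def run_with_retries(check, retries=1, retry_delay=0):
    for attempt in range(1, retries + 1):
        if check():
            return True
        if attempt < retries:
            time.sleep(retry_delay)
    return False
```

This is why ``statecheck`` and ``exports`` anchors commonly carry ``retries``/``retry_delay``: cloud resources are often eventually consistent, so a check that fails immediately after provisioning can pass a few seconds later.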
+**stackql-deploy** simplifies cloud resource management by treating infrastructure as flexible, dynamically assessed code.
+
+.. _stackql: https://github.com/stackql/stackql
diff --git a/examples/aws/patch-doc-test/resources/bucket1.iql b/examples/aws/patch-doc-test/resources/bucket1.iql
index b11970b..ee39d81 100644
--- a/examples/aws/patch-doc-test/resources/bucket1.iql
+++ b/examples/aws/patch-doc-test/resources/bucket1.iql
@@ -17,7 +17,7 @@ SELECT
'{{ bucket1_tags }}',
'{{ region }}'
-/*+ statecheck, retries=2, retry_delay=1 */
+/*+ statecheck, retries=5, retry_delay=2 */
SELECT COUNT(*) as count FROM
(
SELECT
@@ -26,7 +26,8 @@ FROM aws.s3.buckets
WHERE region = '{{ region }}'
AND data__Identifier = '{{ bucket1_name }}'
) t
-WHERE test_versioning_config = 1;
+WHERE test_versioning_config = 1
+;
/*+ exports, retries=2, retry_delay=1 */
SELECT bucket_name as bucket1_name, arn as bucket1_arn FROM
diff --git a/setup.py b/setup.py
index e5cea3c..c07fbeb 100644
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
setup(
name='stackql-deploy',
- version='1.9.2',
+ version='1.9.4',
description='Model driven resource provisioning and deployment framework using StackQL.',
long_description=readme,
long_description_content_type='text/x-rst',
diff --git a/stackql_deploy/__init__.py b/stackql_deploy/__init__.py
index 8e2b7ed..b0bb384 100644
--- a/stackql_deploy/__init__.py
+++ b/stackql_deploy/__init__.py
@@ -1 +1 @@
-__version__ = '1.9.2'
+__version__ = '1.9.4'
diff --git a/stackql_deploy/cmd/build.py b/stackql_deploy/cmd/build.py
index 6805059..026f3be 100644
--- a/stackql_deploy/cmd/build.py
+++ b/stackql_deploy/cmd/build.py
@@ -149,6 +149,8 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
self.logger
)
+ exports_result_from_proxy = None # Track exports result if used as proxy
+
if type in ('resource', 'multi'):
ignore_errors = False
@@ -159,44 +161,101 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
ignore_errors = True
#
- # OPTIMIZED exists and state check - try exports first for happy path
+ # State checking logic
#
- exports_result_from_proxy = None # Track exports result if used as proxy
if createorupdate_query:
+ # Skip all existence and state checks for createorupdate
pass
else:
- # OPTIMIZATION: Try exports first if available for one-query solution
- if exports_query:
+ # Determine the validation strategy based on available queries
+ if statecheck_query:
+ #
+ # Flow 1: Traditional flow when statecheck exists
+ # exists → create/update → statecheck → exports
+ #
+ if exists_query:
+ resource_exists = self.check_if_resource_exists(
+ resource_exists,
+ resource,
+ full_context,
+ exists_query,
+ exists_retries,
+ exists_retry_delay,
+ dry_run,
+ show_queries
+ )
+ else:
+ # Use statecheck as exists check
+ is_correct_state = self.check_if_resource_is_correct_state(
+ is_correct_state,
+ resource,
+ full_context,
+ statecheck_query,
+ statecheck_retries,
+ statecheck_retry_delay,
+ dry_run,
+ show_queries
+ )
+ resource_exists = is_correct_state
+
+ # Pre-deployment state check for existing resources
+ if resource_exists and not is_correct_state:
+ if resource.get('skip_validation', False):
+ self.logger.info(
+ f"skipping validation for [{resource['name']}] as skip_validation is set to true."
+ )
+ is_correct_state = True
+ else:
+ is_correct_state = self.check_if_resource_is_correct_state(
+ is_correct_state,
+ resource,
+ full_context,
+ statecheck_query,
+ statecheck_retries,
+ statecheck_retry_delay,
+ dry_run,
+ show_queries
+ )
+
+ elif exports_query:
+ #
+ # Flow 2: Optimized flow when only exports exists (no statecheck)
+ # Try exports first with FAST FAIL (no retries)
+ # If fails: exists → create/update → exports (with retries as statecheck)
+ #
self.logger.info(
- f"🔄 trying exports query first for optimal single-query validation "
+ f"🔄 trying exports query first (fast-fail) for optimal validation "
f"for [{resource['name']}]"
)
is_correct_state, exports_result_from_proxy = self.check_state_using_exports_proxy(
resource,
full_context,
exports_query,
- exports_retries,
- exports_retry_delay,
+ 1, # Fast fail: only 1 attempt
+ 0, # No delay
dry_run,
show_queries
)
resource_exists = is_correct_state
- # If exports succeeded, we're done with validation for happy path
+ # If exports succeeded, we're done with validation (happy path)
if is_correct_state:
self.logger.info(
- f"✅ [{resource['name']}] validated successfully with single exports query"
+ f"✅ [{resource['name']}] validated successfully with fast exports query"
)
else:
- # If exports failed, fall back to traditional exists check
+ # Exports failed, fall back to exists check
self.logger.info(
- f"📋 exports validation failed, falling back to exists check "
+ f"📋 fast exports validation failed, falling back to exists check "
f"for [{resource['name']}]"
)
+ # Clear the failed exports result
+ exports_result_from_proxy = None
+
if exists_query:
resource_exists = self.check_if_resource_exists(
- False, # Reset this since exports failed
+ False,
resource,
full_context,
exists_query,
@@ -205,23 +264,14 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
dry_run,
show_queries
)
- elif statecheck_query:
- # statecheck can be used as an exists check fallback
- is_correct_state = self.check_if_resource_is_correct_state(
- False, # Reset this
- resource,
- full_context,
- statecheck_query,
- statecheck_retries,
- statecheck_retry_delay,
- dry_run,
- show_queries
- )
- resource_exists = is_correct_state
- # Reset is_correct_state since we need to re-validate after create/update
- is_correct_state = False
+ else:
+ # No exists query, assume resource doesn't exist
+ resource_exists = False
+
elif exists_query:
- # Traditional path: exports not available, use exists
+ #
+ # Flow 3: Basic flow with only exists query
+ #
resource_exists = self.check_if_resource_exists(
resource_exists,
resource,
@@ -232,59 +282,12 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
dry_run,
show_queries
)
- elif statecheck_query:
- # statecheck can be used as an exists check
- is_correct_state = self.check_if_resource_is_correct_state(
- is_correct_state,
- resource,
- full_context,
- statecheck_query,
- statecheck_retries,
- statecheck_retry_delay,
- dry_run,
- show_queries
- )
- resource_exists = is_correct_state
else:
catch_error_and_exit(
"iql file must include either 'exists', 'statecheck', or 'exports' anchor.",
self.logger
)
- #
- # state check with optimizations (only if we haven't already validated via exports)
- #
- if resource_exists and not is_correct_state and exports_result_from_proxy is None:
- # bypass state check if skip_validation is set to true
- if resource.get('skip_validation', False):
- self.logger.info(
- f"skipping validation for [{resource['name']}] as skip_validation is set to true."
- )
- is_correct_state = True
- elif statecheck_query:
- is_correct_state = self.check_if_resource_is_correct_state(
- is_correct_state,
- resource,
- full_context,
- statecheck_query,
- statecheck_retries,
- statecheck_retry_delay,
- dry_run,
- show_queries
- )
- elif exports_query:
- # This shouldn't happen since we tried exports first, but keeping for safety
- self.logger.info(f"🔄 using exports query as proxy for statecheck for [{resource['name']}]")
- is_correct_state, _ = self.check_state_using_exports_proxy(
- resource,
- full_context,
- exports_query,
- exports_retries,
- exports_retry_delay,
- dry_run,
- show_queries
- )
-
#
# resource does not exist
#
@@ -319,10 +322,11 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
)
#
- # check state again after create or update with optimizations
+ # check state again after create or update
#
if is_created_or_updated:
if statecheck_query:
+ # Use statecheck for post-deploy validation
is_correct_state = self.check_if_resource_is_correct_state(
is_correct_state,
resource,
@@ -334,17 +338,23 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
show_queries,
)
elif exports_query:
- # OPTIMIZATION: Use exports as statecheck proxy for post-deploy validation
+ # Use exports as statecheck proxy with proper retries
+ # This handles the case where statecheck doesn't exist
self.logger.info(
- f"🔄 using exports query as proxy for post-deploy statecheck "
+ f"🔄 using exports query as post-deploy statecheck "
f"for [{resource['name']}]"
)
- is_correct_state, _ = self.check_state_using_exports_proxy(
+ # Need to determine retries: if we have statecheck config, use it
+ # Otherwise fall back to exports config
+ post_deploy_retries = statecheck_retries if statecheck_retries > 1 else exports_retries
+ post_deploy_delay = statecheck_retry_delay if statecheck_retries > 1 else exports_retry_delay
+
+ is_correct_state, exports_result_from_proxy = self.check_state_using_exports_proxy(
resource,
full_context,
exports_query,
- exports_retries,
- exports_retry_delay,
+ post_deploy_retries,
+ post_deploy_delay,
dry_run,
show_queries
)
diff --git a/stackql_deploy/cmd/test.py b/stackql_deploy/cmd/test.py
index 2cb2100..526e6e7 100644
--- a/stackql_deploy/cmd/test.py
+++ b/stackql_deploy/cmd/test.py
@@ -108,8 +108,8 @@ def run(self, dry_run, show_queries, on_failure, output_file=None):
resource,
full_context,
exports_query,
- exports_retries,
- exports_retry_delay,
+ statecheck_retries, # Use statecheck retries when using as statecheck proxy
+ statecheck_retry_delay, # Use statecheck delay when using as statecheck proxy
dry_run,
show_queries
)