
Deploying your own Azure AI resources

Follow these instructions to provision your own Azure AI resources to run the workshop.

You will perform the following steps:

  1. Provision the Azure resources required for the workshop.
  2. Set up the workshop environment with GitHub Codespaces.
  3. Load grounding data.
  4. Configure the Prompt Flow connections.
  5. Proceed to the workshop.

Requirements

You require an Azure subscription. If you don't have an Azure subscription, you can create a free subscription.

Step 1: Provisioning Azure resources

In this section you'll set up the Azure resources required for the workshop.

  1. Clone the contoso-chat repo.
  2. From the VS Code Dev Container or the virtual Python environment, run the ./provision.sh script.
  3. After the provisioning script completes, follow the instructions raised in this issue.

Step 2: Set up the workshop environment

To run the workshop with GitHub Codespaces, complete the following steps:

  1. Fork the Contoso Chat Proxy repository to your GitHub account.

  2. Open the forked repository in GitHub Codespaces. In the forked repository, select Code, select the Codespaces tab, and then select Create codespace on main. GitHub Codespaces will create the environment for you; this takes approximately 5 minutes.

    Warning

    When you have finished, be sure to stop the Codespace to avoid incurring unnecessary charges.

Step 3: Load grounding data

The solution uses two data sources to ground the LLM prompt. The first is an Azure AI Search index that contains product information. The second is a Cosmos DB database that holds customer information.

Load the product catalog

In this step you'll learn how to load product data into Azure AI Search. The product database will be used to ground the LLM prompt with product information.

Azure AI Search is a hybrid search service that supports both keyword and semantic vector search. When data is loaded into Azure AI Search, a keyword index is built for each product, and an embedding is generated for each product and stored in a vector index.

Architecturally, a search service sits between the external data stores that contain your un-indexed data, and your client app that sends query requests to a search index and handles the response.
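To build intuition for how hybrid retrieval combines the two indexes, here is a toy sketch in pure Python. The product names, vectors, and the 50/50 score weighting are all illustrative assumptions, not the service's actual API or scoring model:

```python
import math

# Toy catalog: each product has a name (for keyword matching) and a small
# embedding vector (for semantic matching). Real embeddings have hundreds
# or thousands of dimensions.
PRODUCTS = [
    {"name": "TrailMaster X4 Tent", "vector": [0.9, 0.1, 0.0]},
    {"name": "Alpine Explorer Tent", "vector": [0.8, 0.2, 0.1]},
    {"name": "Summit Breeze Jacket", "vector": [0.1, 0.9, 0.2]},
]

def keyword_score(query: str, name: str) -> float:
    """Fraction of query terms that appear in the product name."""
    terms = query.lower().split()
    return sum(t in name.lower() for t in terms) / len(terms)

def cosine(a, b) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def hybrid_search(query: str, query_vector, top: int = 2):
    """Blend keyword and vector scores (illustrative 50/50 weighting)."""
    scored = [
        (0.5 * keyword_score(query, p["name"])
         + 0.5 * cosine(query_vector, p["vector"]), p["name"])
        for p in PRODUCTS
    ]
    return [name for score, name in sorted(scored, reverse=True)[:top]]

print(hybrid_search("what tents can you recommend?", [0.9, 0.1, 0.05]))
```

Even when no query term matches a product name exactly, the vector scores still surface the semantically closest products, which is the point of adding the vector index alongside the keyword index.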

Follow these steps:

  1. Ensure you have the contoso-chat repo open in VS Code.
  2. Load the product catalog:
    • Open the data/product_info/products.csv file and review the contents.
    • Run the data/product_info/create-azure-search.ipynb Jupyter Notebook. The notebook stores the product data and the generated product description embeddings in Azure AI Search.
    • Switch to the Azure AI Search resource in the Azure Portal.
    • Review the AI Search resource and note that the semantic search feature is enabled.
    • Show the indexes:
      • Select the contoso-products index. There are 20 documents indexed.
      • Search the index with the query "what tents can you recommend?". Azure AI Search uses the keyword index to return the product catalog results that most closely match the question.
      • Review the results and note that the embedding vector is returned with each result.
      • The Prompt Flow will use the index to ground the LLM prompt.
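As a rough sketch of what "grounding the LLM prompt" means here: once AI Search returns matching products, the Prompt Flow folds them into the prompt as context. The result schema below is illustrative, not the actual fields of the contoso-products index:

```python
# Hypothetical search results; the real index schema is defined by
# create-azure-search.ipynb.
results = [
    {"name": "TrailMaster X4 Tent", "description": "4-person tent."},
    {"name": "Alpine Explorer Tent", "description": "Lightweight 2-person tent."},
]

def build_context(docs) -> str:
    """Render retrieved products as a bulleted context block for the LLM."""
    return "\n".join(f"- {d['name']}: {d['description']}" for d in docs)

# The grounded prompt constrains the model to the retrieved catalog entries.
prompt = (
    "Answer using only the product catalog below.\n"
    f"{build_context(results)}\n"
    "Question: what tents can you recommend?"
)
print(prompt)
```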

Load the customer database

In this step you'll learn how to load customer data into Cosmos DB. The customer data will be used to ground the LLM prompt with customer order history.

  1. Run the /data/customer_info/create-cosmos-db.ipynb Jupyter Notebook to load the customer data into Cosmos DB.
  2. Navigate to Cosmos DB in Azure Portal and update one of the records with your name.
    • Select Data Explorer -> contoso-outdoor -> Customers -> Items.
    • Pick an item to update, then select Update.
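Updating a record in Data Explorer amounts to replacing the item's JSON. The customer item shape below is a guess for illustration only; the real schema comes from create-cosmos-db.ipynb:

```python
import copy

# Hypothetical customer item in the contoso-outdoor "Customers" container.
customer = {
    "id": "1",
    "firstName": "Jane",
    "lastName": "Doe",
    "orders": [{"id": "order-1", "name": "TrailMaster X4 Tent"}],
}

def with_updated_name(item: dict, first: str, last: str) -> dict:
    """Return a copy of the item with the name fields replaced,
    leaving the original untouched."""
    updated = copy.deepcopy(item)
    updated["firstName"] = first
    updated["lastName"] = last
    return updated

print(with_updated_name(customer, "Your", "Name"))
```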

Step 4: Configure the Prompt Flow connections

  1. Update the .env.sample file with the Azure AI Search and Cosmos DB connection strings.
  2. Configure the Prompt Flow connections:
    • Open the connections folder and open the create-connections.ipynb notebook.
    • When prompted to install the Jupyter extension, select Install.
    • Select Run All in the notebook to create the connections.
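The connections notebook reads those connection settings from your environment file. A minimal stdlib-only sketch of how such KEY=VALUE settings can be parsed (the variable names here are placeholders; check .env.sample for the real ones, and note the notebook itself may use a library such as python-dotenv):

```python
def load_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

# Placeholder variable names and endpoints for illustration.
sample = """
# Assumed names -- see .env.sample for the actual ones
CONTOSO_SEARCH_ENDPOINT="https://example.search.windows.net"
COSMOS_ENDPOINT="https://example.documents.azure.com"
"""
print(load_env(sample))
```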

Step 5: Proceed to the workshop

  1. From VS Code, create a folder called workshop.
  2. Proceed to the Workshop section.