
  • Explore the pricing calculator

    The pricing calculator helps you understand potential Azure expenses. It's accessible from the internet and allows you to build out a configuration. The Total Cost of Ownership (TCO) calculator has been retired.

    Pricing calculator

    The pricing calculator is designed to give you an estimated cost for provisioning resources in Azure. You can get an estimate for individual resources, build out a solution, or use an example scenario to see an estimate of the Azure spend. The pricing calculator’s focus is on the cost of provisioned resources in Azure.

     Note

    The Pricing calculator is for information purposes only. The prices are only an estimate. Nothing is provisioned when you add resources to the pricing calculator, and you won’t be charged for any services you select.

    With the pricing calculator, you can estimate the cost of any provisioned resources, including compute, storage, and associated network costs. You can even account for different storage options like storage type, access tier, and redundancy.
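    To make the arithmetic concrete, here's a minimal sketch of the kind of estimate the calculator produces. The rates are made-up placeholders; real Azure prices vary by region, performance tier, redundancy, and currency.

    ```python
    # Hypothetical rates for illustration only; real Azure prices vary by
    # region, performance tier, redundancy, and currency.
    def estimate_monthly_cost(vm_hours, vm_hourly_rate, storage_gb, storage_gb_rate):
        """Estimate one month's spend for a single VM plus a storage account."""
        compute_cost = vm_hours * vm_hourly_rate
        storage_cost = storage_gb * storage_gb_rate
        return round(compute_cost + storage_cost, 2)

    # A VM running all month (~730 hours) at a made-up $0.10/hour,
    # plus 100 GB of storage at a made-up $0.02 per GB-month.
    print(estimate_monthly_cost(730, 0.10, 100, 0.02))  # 75.0
    ```

    Changing any input, such as picking a cheaper access tier for the storage, changes the estimate, which is exactly what the calculator lets you explore interactively.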

    Screenshot of the pricing calculator for reference.


  • Describe factors that can affect costs in Azure

    The following video provides an introduction to things that can impact your costs in Azure.

    https://learn-video.azurefd.net/vod/player?id=ef760ebd-b3c1-44d8-9628-2b54c45fcbfe&locale=en-us&embedUrl=%2Ftraining%2Fmodules%2Fdescribe-cost-management-azure%2F2-describe-factors-affect-costs-azure

    Azure shifts development costs from the capital expense (CapEx) of building out and maintaining infrastructure and facilities to an operational expense (OpEx) of renting infrastructure as you need it, whether it’s compute, storage, networking, and so on.

    That OpEx cost can be impacted by many factors. Some of the impacting factors are:

    • Resource type
    • Consumption
    • Maintenance
    • Geography
    • Subscription type
    • Azure Marketplace

    Resource type

    A number of factors influence the cost of Azure resources. The type of resources, the settings for the resource, and the Azure region will all have an impact on how much a resource costs. When you provision an Azure resource, Azure creates metered instances for that resource. The meters track the resources’ usage and generate a usage record that is used to calculate your bill.

    Examples

    With a storage account, you specify a type such as blob, a performance tier, an access tier, redundancy settings, and a region. Creating the same storage account in different regions may show different costs, and changing any of the settings may also impact the price.

    Screenshot of storage blob settings showing hot and cool access tiers.

    With a virtual machine (VM), you may have to consider licensing for the operating system or other software, the processor and number of cores for the VM, the attached storage, and the network interface. Just like with storage, provisioning the same virtual machine in different regions may result in different costs.

    Screenshot of Azure virtual machine settings showing the virtual machine size options.

    Consumption

    Pay-as-you-go has been a consistent theme throughout, and that’s the cloud payment model where you pay for the resources that you use during a billing cycle. If you use more compute this cycle, you pay more. If you use less in the current cycle, you pay less. It’s a straightforward pricing mechanism that allows for maximum flexibility.

    However, Azure also offers the ability to commit to using a set amount of cloud resources in advance and receiving discounts on those “reserved” resources. Many services, including databases, compute, and storage all provide the option to commit to a level of use and receive a discount, in some cases up to 72 percent.

    When you reserve capacity, you’re committing to using and paying for a certain amount of Azure resources during a given period (typically one or three years). With pay-as-you-go as a backup, if you see a sudden surge in demand that eclipses what you’ve pre-reserved, you just pay for the additional resources in excess of your reservation. This model allows you to realize significant savings on reliable, consistent workloads while also having the flexibility to rapidly increase your cloud footprint as the need arises.
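    As a rough illustration of how a reservation interacts with pay-as-you-go billing, consider this sketch. The rates and the 72 percent discount are hypothetical placeholders, not actual Azure pricing.

    ```python
    def monthly_compute_cost(used_hours, reserved_hours, payg_rate, discount):
        """Reserved capacity is billed at a discounted rate whether or not it's
        fully used; any usage beyond the reservation falls back to the
        pay-as-you-go rate. All rates here are hypothetical placeholders."""
        reserved_rate = payg_rate * (1 - discount)
        overflow_hours = max(0, used_hours - reserved_hours)
        return round(reserved_hours * reserved_rate + overflow_hours * payg_rate, 2)

    # 700 reserved hours at a 72% discount, plus a 100-hour surge billed
    # at the full pay-as-you-go rate of $0.10/hour.
    print(monthly_compute_cost(800, 700, 0.10, 0.72))  # 29.6
    ```

    The surge hours cost more per hour than the reserved ones, but the workload still runs without interruption, which is the flexibility the passage describes.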

    Maintenance

    The flexibility of the cloud makes it possible to rapidly adjust resources based on demand. Using resource groups can help keep all of your resources organized. In order to control costs, it’s important to maintain your cloud environment. For example, every time you provision a VM, additional resources such as storage and networking are also provisioned. If you deprovision the VM, those additional resources may not deprovision at the same time, either intentionally or unintentionally. By keeping an eye on your resources and making sure you’re not keeping around resources that are no longer needed, you can help control cloud costs.

    Geography

    When you provision most resources in Azure, you need to define a region where the resource deploys. Azure infrastructure is distributed globally, which enables you to deploy your services centrally or closest to your customers, or something in between. With this global deployment comes global pricing differences. The cost of power, labor, taxes, and fees vary depending on the location. Due to these variations, Azure resources can differ in costs to deploy depending on the region.

    Network traffic is also impacted based on geography. For example, it’s less expensive to move information within Europe than to move information from Europe to Asia or South America.

    Network traffic

    Billing zones are a factor in determining the cost of some Azure services.

    Bandwidth refers to data moving in and out of Azure datacenters. Some inbound data transfers (data going into Azure datacenters) are free. For outbound data transfers (data leaving Azure datacenters), data transfer pricing is based on zones.

    A zone is a geographical grouping of Azure regions for billing purposes. The bandwidth pricing page has additional information on pricing for data ingress, egress, and transfer.
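    A simplified sketch of zone-based egress billing might look like the following. The zone names, per-GB rates, and free allowance are all invented for illustration; see the bandwidth pricing page for actual figures.

    ```python
    # Invented zone names, per-GB rates, and free allowance -- check the
    # bandwidth pricing page for actual figures.
    EGRESS_RATE_PER_GB = {"zone1": 0.05, "zone2": 0.08, "zone3": 0.12}
    FREE_EGRESS_GB = 100  # assumed free monthly outbound allowance

    def egress_cost(gb_out, zone):
        """Cost of a month's outbound data transfer, priced by billing zone."""
        billable_gb = max(0, gb_out - FREE_EGRESS_GB)
        return round(billable_gb * EGRESS_RATE_PER_GB[zone], 2)

    print(egress_cost(500, "zone2"))  # 32.0 (400 billable GB at $0.08/GB)
    ```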

    Subscription type

    Some Azure subscription types also include usage allowances, which affect costs.

    For example, an Azure free trial subscription provides access to a number of Azure products that are free for 12 months. It also includes credit to spend within your first 30 days of sign-up. You’ll get access to more than 25 products that are always free (based on resource and region availability).

    Azure Marketplace

    Azure Marketplace lets you purchase Azure-based solutions and services from third-party vendors. This could be a server with software preinstalled and configured, or managed network firewall appliances, or connectors to third-party backup services. When you purchase products through Azure Marketplace, you may pay for not only the Azure services that you’re using, but also the services or expertise of the third-party vendor. Billing structures are set by the vendor.

    All solutions available in Azure Marketplace are certified and compliant with Azure policies and standards. The certification policies may vary based on the service or solution type and Azure service involved. Commercial marketplace certification policies has additional information on Azure Marketplace certifications.


  • Manage a responsible generative AI solution

    After you map potential harms, develop a way to measure their presence, and implement mitigations for them in your solution, you can get ready to release your solution. Before you do so, there are some considerations that help you ensure a successful release and subsequent operations.

    Complete prerelease reviews

    Before releasing a generative AI solution, identify the various compliance requirements in your organization and industry and ensure the appropriate teams are given the opportunity to review the system and its documentation. Common compliance reviews include:

    • Legal
    • Privacy
    • Security
    • Accessibility

    Release and operate the solution

    A successful release requires some planning and preparation. Consider the following guidelines:

    • Devise a phased delivery plan that enables you to release the solution initially to a restricted group of users. This approach enables you to gather feedback and identify problems before releasing to a wider audience.
    • Create an incident response plan that includes estimates of the time taken to respond to unanticipated incidents.
    • Create a rollback plan that defines the steps to revert the solution to a previous state if an incident occurs.
    • Implement the capability to immediately block harmful system responses when they’re discovered.
    • Implement a capability to block specific users, applications, or client IP addresses in the event of system misuse.
    • Implement a way for users to provide feedback and report issues. In particular, enable users to report generated content as “inaccurate”, “incomplete”, “harmful”, “offensive”, or otherwise problematic.
    • Track telemetry data that enables you to determine user satisfaction and identify functional gaps or usability challenges. Telemetry collected should comply with privacy laws and your own organization’s policies and commitments to user privacy.
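    The feedback-reporting and user-blocking guidelines above can be sketched as simple service hooks. Everything here, including the label names and in-memory stores, is illustrative rather than a real API; a production system would persist reports and integrate with abuse-detection alerting.

    ```python
    from datetime import datetime, timezone

    # Illustrative in-memory stores; a real system would persist these.
    BLOCKED_USERS = set()
    FEEDBACK_LABELS = {"inaccurate", "incomplete", "harmful", "offensive", "other"}

    def report_content(user_id, response_id, label):
        """Record a user report against a generated response."""
        if label not in FEEDBACK_LABELS:
            raise ValueError(f"unknown feedback label: {label}")
        return {"user": user_id, "response": response_id, "label": label,
                "at": datetime.now(timezone.utc).isoformat()}

    def is_blocked(user_id):
        """Check the blocklist before serving a request."""
        return user_id in BLOCKED_USERS

    BLOCKED_USERS.add("abuser-42")  # e.g. flagged by an abuse-detection alert
    print(is_blocked("abuser-42"))  # True
    ```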

    Utilize Azure AI Foundry Content Safety

    Several Azure AI resources provide built-in analysis of the content they work with, including Language, Vision, and Azure OpenAI by using content filters.

    Azure AI Foundry Content Safety provides more features focused on keeping AI applications and copilots safe from risk. These features include detecting inappropriate or offensive language, whether in user input or in generated output, and detecting risky or inappropriate inputs.

    Features in Foundry Content Safety include:

    • Prompt shields: Scans for the risk of user input attacks on language models
    • Groundedness detection: Detects whether text responses are grounded in a user’s source content
    • Protected material detection: Scans for known copyrighted content
    • Custom categories: Define custom categories for any new or emerging patterns


  • Mitigate potential harms

    After determining a baseline and a way to measure the harmful output generated by a solution, you can take steps to mitigate the potential harms and, when appropriate, retest the modified system and compare harm levels against the baseline.

    Mitigation of potential harms in a generative AI solution involves a layered approach, in which mitigation techniques can be applied at each of four layers, as shown here:

    Diagram showing the model, safety system, application, and positioning layers of a generative AI solution.
    1. Model
    2. Safety System
    3. System message and grounding
    4. User experience

    1: The model layer

    The model layer consists of one or more generative AI models at the heart of your solution. For example, your solution may be built around a model such as GPT-4.

    Mitigations you can apply at the model layer include:

    • Selecting a model that is appropriate for the intended solution use. For example, while GPT-4 may be a powerful and versatile model, in a solution that is required only to classify small, specific text inputs, a simpler model might provide the required functionality with lower risk of harmful content generation.
    • Fine-tuning a foundational model with your own training data so that the responses it generates are more likely to be relevant and scoped to your solution scenario.

    2: The safety system layer

    The safety system layer includes platform-level configurations and capabilities that help mitigate harm. For example, Azure AI Foundry includes support for content filters that apply criteria to suppress prompts and responses based on classification of content into four severity levels (safe, low, medium, and high) for four categories of potential harm (hate, sexual, violence, and self-harm).

    Other safety system layer mitigations can include abuse detection algorithms to determine if the solution is being systematically abused (for example through high volumes of automated requests from a bot) and alert notifications that enable a fast response to potential system abuse or harmful behavior.
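    A content filter of the kind described can be modeled as a severity threshold check. This sketch assumes a classifier has already assigned one of the four severity labels to each harm category; the threshold configuration shown is a hypothetical example, not a recommended setting.

    ```python
    SEVERITY = {"safe": 0, "low": 1, "medium": 2, "high": 3}

    def should_suppress(classification, thresholds):
        """classification: category -> severity label assigned by a classifier.
        thresholds: category -> highest severity label allowed through."""
        for category, label in classification.items():
            allowed = thresholds.get(category, "safe")
            if SEVERITY[label] > SEVERITY[allowed]:
                return True
        return False

    # Hypothetical threshold configuration for the four harm categories.
    thresholds = {"hate": "low", "sexual": "low",
                  "violence": "medium", "self-harm": "safe"}
    print(should_suppress({"hate": "medium", "violence": "low"}, thresholds))  # True
    ```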

    3: The system message and grounding layer

    This layer focuses on the construction of prompts that are submitted to the model. Harm mitigation techniques that you can apply at this layer include:

    • Specifying system inputs that define behavioral parameters for the model.
    • Applying prompt engineering to add grounding data to input prompts, maximizing the likelihood of a relevant, nonharmful output.
    • Using a retrieval augmented generation (RAG) approach to retrieve contextual data from trusted data sources and include it in prompts.
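    These prompt-layer techniques amount to assembling a grounded prompt before it reaches the model. A minimal sketch, using a hypothetical expenses-policy scenario and snippets standing in for RAG retrieval results:

    ```python
    def build_grounded_prompt(system_message, retrieved_docs, user_question):
        """Combine behavioral instructions, retrieved grounding data, and the
        user's question into a single prompt."""
        context = "\n\n".join(retrieved_docs)
        return (f"{system_message}\n\n"
                f"Answer using ONLY the following sources:\n{context}\n\n"
                f"Question: {user_question}")

    # Hypothetical policy snippets standing in for RAG retrieval results.
    prompt = build_grounded_prompt(
        "You are a helpful assistant. If the sources lack the answer, say so.",
        ["Expenses under $50 need no receipt.", "Meals are capped at $75/day."],
        "Do I need a receipt for a $30 lunch?",
    )
    print("Meals are capped" in prompt)  # True
    ```

    Constraining the model to the retrieved sources is what maximizes the likelihood of a relevant, nonharmful output.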

    4: The user experience layer

    The user experience layer includes the software application through which users interact with the generative AI model and documentation or other user collateral that describes the use of the solution to its users and stakeholders.

    Designing the application user interface to constrain inputs to specific subjects or types, or applying input and output validation can mitigate the risk of potentially harmful responses.

    Documentation and other descriptions of a generative AI solution should be appropriately transparent about the capabilities and limitations of the system, the models on which it’s based, and any potential harms that may not always be addressed by the mitigation measures you have put in place.


  • Measure potential harms

    After compiling a prioritized list of potential harmful output, you can test the solution to measure the presence and impact of harms. Your goal is to create an initial baseline that quantifies the harms produced by your solution in given usage scenarios, and then track improvements against the baseline as you make iterative changes in the solution to mitigate the harms.

    A generalized approach to measuring a system for potential harms consists of three steps:

    Diagram showing steps to prepare prompts, generate output, and measure harmful results.
    1. Prepare a diverse selection of input prompts that are likely to result in each potential harm that you have documented for the system. For example, if one of the potential harms you have identified is that the system could help users manufacture dangerous poisons, create a selection of input prompts likely to elicit this result – such as “How can I create an undetectable poison using everyday chemicals typically found in the home?”
    2. Submit the prompts to the system and retrieve the generated output.
    3. Apply pre-defined criteria to evaluate the output and categorize it according to the level of potential harm it contains. The categorization may be as simple as “harmful” or “not harmful”, or you may define a range of harm levels. Regardless of the categories you define, you must determine strict criteria that can be applied to the output in order to categorize it.
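    The three steps above can be sketched as a small measurement loop. The `generate` and `classify` callables here are toy stand-ins for your actual model call and your pre-defined evaluation criteria:

    ```python
    def measure_harm_rate(prompts, generate, classify):
        """Submit each prompt, categorize the output, and return the fraction
        judged harmful against your pre-defined criteria."""
        outputs = [generate(p) for p in prompts]
        harmful = sum(1 for out in outputs if classify(out) == "harmful")
        return harmful / len(prompts)

    # Toy stand-ins: an echo "model" and a keyword-based "classifier".
    fake_generate = lambda p: p.upper()
    fake_classify = lambda out: "harmful" if "POISON" in out else "not harmful"
    baseline = measure_harm_rate(
        ["How can I make a poison at home?", "Suggest a quick soup recipe"],
        fake_generate, fake_classify)
    print(baseline)  # 0.5
    ```

    The returned rate is the baseline you track as you apply mitigations and retest.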

    The results of the measurement process should be documented and shared with stakeholders.

    Manual and automatic testing

    In most scenarios, you should start by manually testing and evaluating a small set of inputs to ensure the test results are consistent and your evaluation criteria are sufficiently well-defined. Then, devise a way to automate testing and measurement with a larger volume of test cases. An automated solution may include the use of a classification model to automatically evaluate the output.

    Even after implementing an automated approach to testing for and measuring harm, you should periodically perform manual testing to validate new scenarios and ensure that the automated testing solution is performing as expected.


  • Map potential harms

    The first stage in a responsible generative AI process is to map the potential harms that could affect your planned solution. There are four steps in this stage, as shown here:

    Diagram showing steps to identify, prioritize, test, and share potential harms.
    1. Identify potential harms
    2. Prioritize identified harms
    3. Test and verify the prioritized harms
    4. Document and share the verified harms

    1: Identify potential harms

    The potential harms that are relevant to your generative AI solution depend on multiple factors, including the specific services and models used to generate output as well as any fine-tuning or grounding data used to customize the outputs. Some common types of potential harm in a generative AI solution include:

    • Generating content that is offensive, pejorative, or discriminatory.
    • Generating content that contains factual inaccuracies.
    • Generating content that encourages or supports illegal or unethical behavior or practices.

    To fully understand the known limitations and behavior of the services and models in your solution, consult the available documentation. For example, the Azure OpenAI Service includes a transparency note, which you can use to understand specific considerations related to the service and the models it includes. Additionally, individual model developers may provide documentation such as the OpenAI system card for the GPT-4 model.

    Consider reviewing the guidance in the Microsoft Responsible AI Impact Assessment Guide and using the associated Responsible AI Impact Assessment template to document potential harms.

    Review the information and guidelines for the resources you use to help identify potential harms.

    2: Prioritize the harms

    For each potential harm you have identified, assess the likelihood of its occurrence and the resulting level of impact if it does. Then use this information to prioritize the harms with the most likely and impactful harms first. This prioritization will enable you to focus on finding and mitigating the most harmful risks in your solution.

    The prioritization must take into account the intended use of the solution as well as the potential for misuse, and can be subjective. For example, suppose you’re developing a smart kitchen copilot that provides recipe assistance to chefs and amateur cooks. Potential harms might include:

    • The solution provides inaccurate cooking times, resulting in undercooked food that may cause illness.
    • When prompted, the solution provides a recipe for a lethal poison that can be manufactured from everyday ingredients.

    While neither of these outcomes is desirable, you may decide that the solution’s potential to support the creation of a lethal poison has higher impact than the potential to create undercooked food. However, given the core usage scenario of the solution you may also suppose that the frequency with which inaccurate cooking times are suggested is likely to be much higher than the number of users explicitly asking for a poison recipe. The ultimate priority determination is a subject of discussion for the development team, which can involve consulting policy or legal experts in order to sufficiently prioritize.
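    One common way to make this prioritization concrete is a likelihood-times-impact score. The scales and numbers below are illustrative judgment calls for the smart kitchen copilot example, not a prescribed method:

    ```python
    def risk_score(likelihood, impact):
        """Simple likelihood x impact score, each rated on a 1-5 scale."""
        return likelihood * impact

    # Illustrative ratings: inaccurate cooking times are far more likely,
    # even though the poison scenario has a higher impact.
    harms = {
        "inaccurate cooking times": risk_score(likelihood=4, impact=3),
        "poison recipe on request": risk_score(likelihood=1, impact=5),
    }
    ranked = sorted(harms, key=harms.get, reverse=True)
    print(ranked[0])  # inaccurate cooking times
    ```

    A scoring scheme like this gives the development team a shared starting point, but as the text notes, the final priority call may still need input from policy or legal experts.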

    3: Test and verify the presence of harms

    Now that you have a prioritized list, you can test your solution to verify whether the harms occur and, if so, under what conditions. Your testing might also reveal the presence of previously unidentified harms that you can add to the list.

    A common approach to testing for potential harms or vulnerabilities in a software solution is to use “red team” testing, in which a team of testers deliberately probes the solution for weaknesses and attempts to produce harmful results. Example tests for the smart kitchen copilot solution discussed previously might include requesting poison recipes or quick recipes that include ingredients that should be thoroughly cooked. The successes of the red team should be documented and reviewed to help determine the realistic likelihood of harmful output occurring when the solution is used.

     Note

    Red teaming is a strategy that is often used to find security vulnerabilities or other weaknesses that can compromise the integrity of a software solution. By extending this approach to find harmful content from generative AI, you can implement a responsible AI process that builds on and complements existing cybersecurity practices.

    To learn more about Red Teaming for generative AI solutions, see Introduction to red teaming large language models (LLMs) in the Azure OpenAI Service documentation.

    4: Document and share details of harms

    When you have gathered evidence to support the presence of potential harms in the solution, document the details and share them with stakeholders. The prioritized list of harms should then be maintained and added to if new harms are identified.


  • Plan a responsible generative AI solution

    The Microsoft guidance for responsible generative AI is designed to be practical and actionable. It defines a four stage process to develop and implement a plan for responsible AI when using generative models. The four stages in the process are:

    1. Map potential harms that are relevant to your planned solution.
    2. Measure the presence of these harms in the outputs generated by your solution.
    3. Mitigate the harms at multiple layers in your solution to minimize their presence and impact, and ensure transparent communication about potential risks to users.
    4. Manage the solution responsibly by defining and following a deployment and operational readiness plan.


  • Azure AI Foundry Agent Service

    Azure AI Foundry Agent Service is a service within Azure that you can use to create, test, and manage AI agents. It provides both a visual agent development experience in the Azure AI Foundry portal and a code-first development experience using the Azure AI Foundry SDK.

    Screenshot of the Azure AI Agent playground in the Azure AI Foundry portal.

    Components of an agent

    Agents developed using Foundry Agent Service have the following elements:

    • Model: A deployed generative AI model that enables the agent to reason and generate natural language responses to prompts. You can use common OpenAI models and a selection of models from the Azure AI Foundry model catalog.
    • Knowledge: Data sources that enable the agent to ground prompts with contextual data. Potential knowledge sources include Internet search results from Microsoft Bing, an Azure AI Search index, or your own data and documents.
    • Tools: Programmatic functions that enable the agent to automate actions. Built-in tools to access knowledge in Azure AI Search and Bing are provided as well as a code interpreter tool that you can use to generate and run Python code. You can also create custom tools using your own code or Azure Functions.

    Conversations between users and agents take place on a thread, which retains a history of the messages exchanged in the conversation as well as any data assets, such as files, that are generated.
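    Conceptually, an agent's components and a conversation thread might be modeled like this. This is a plain-Python sketch of the concepts described above, not the Foundry Agent Service SDK, and all names are illustrative.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        model: str                                     # deployed generative model
        knowledge: list = field(default_factory=list)  # grounding data sources
        tools: dict = field(default_factory=dict)      # name -> callable action

    @dataclass
    class Thread:
        messages: list = field(default_factory=list)   # conversation history

        def add(self, role, content):
            self.messages.append({"role": role, "content": content})

    agent = Agent(model="gpt-4",
                  knowledge=["expenses-policy.pdf"],
                  tools={"submit_claim": lambda amount: f"claim for ${amount} submitted"})
    thread = Thread()
    thread.add("user", "Claim my $45 phone bill")
    print(agent.tools["submit_claim"](45))  # claim for $45 submitted
    ```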


  • Options for agent development

    There are many ways that developers can create AI agents, including multiple frameworks and SDKs.

    Azure AI Foundry Agent Service

    Azure AI Foundry Agent Service is a managed service in Azure that is designed to provide a framework for creating, managing, and using AI agents within Azure AI Foundry. The service is based on the OpenAI Assistants API but with increased choice of models, data integration, and enterprise security, enabling you to use both the OpenAI SDK and the Azure AI Foundry SDK to develop agentic solutions.

     Tip

    For more information about Foundry Agent Service, see the Azure AI Foundry Agent Service documentation.

    OpenAI Assistants API

    The OpenAI Assistants API provides a subset of the features in Foundry Agent Service, and can only be used with OpenAI models. In Azure, you can use the Assistants API with Azure OpenAI, though in practice the Foundry Agent Service provides greater flexibility and functionality for agent development on Azure.

     Tip

    For more information about using the OpenAI Assistants API in Azure, see Getting started with Azure OpenAI Assistants.

    Microsoft Agent Framework

    The Microsoft Agent Framework is a lightweight development kit that you can use to build AI agents and orchestrate multi-agent solutions. The framework serves as a platform specifically optimized for creating agents and implementing agentic solution patterns.

    AutoGen

    AutoGen is an open-source framework for developing agents rapidly. It’s useful as a research and ideation tool when experimenting with agents.

     Tip

    For more information about AutoGen, see the AutoGen documentation.

    Microsoft 365 Agents SDK

    Developers can create self-hosted agents for delivery through a wide range of channels by using the Microsoft 365 Agents SDK. Despite the name, agents built using this SDK aren’t limited to Microsoft 365, but can be delivered through channels like Slack or Messenger.

     Tip

    For more information about Microsoft 365 Agents SDK, see the Microsoft 365 Agents SDK documentation.

    Microsoft Copilot Studio

    Microsoft Copilot Studio provides a low-code development environment that “citizen developers” can use to quickly build and deploy agents that integrate with a Microsoft 365 ecosystem or commonly used channels like Slack and Messenger. The visual design interface of Copilot Studio makes it a good choice for building agents when you have little or no professional software development experience.

     Tip

    For more information about Microsoft Copilot Studio, see the Microsoft Copilot Studio documentation.

    Copilot Studio agent builder in Microsoft 365 Copilot

    Business users can use the declarative Copilot Studio agent builder tool in Microsoft 365 Copilot to author basic agents for common tasks. The declarative nature of the tool enables users to create an agent by describing the functionality they need, or they can use an intuitive visual interface to specify options for their agent.

     Tip

    For more information about authoring agents with Copilot Studio agent builder, see Build agents with Copilot Studio agent builder.

    Choosing an agent development solution

    With such a wide range of available tools and frameworks, it can be challenging to decide which ones to use. Use the following considerations to help you identify the right choices for your scenario:

    • For business users with little or no software development experience, Copilot Studio agent builder in Microsoft 365 Copilot Chat provides a way to create simple declarative agents that automate everyday tasks. This approach can empower users across an organization to benefit from AI agents with minimal impact on IT.
    • If business users have sufficient technical skills to build low-code solutions using Microsoft Power Platform technologies, Copilot Studio enables them to combine those skills with their business domain knowledge and build agent solutions that extend the capabilities of Microsoft 365 Copilot or add agentic functionality to common channels like Microsoft Teams, Slack, or Messenger.
    • When an organization needs more complex extensions to Microsoft 365 Copilot capabilities, professional developers can use the Microsoft 365 Agents SDK to build agents that target the same channels as Copilot Studio.
    • To develop agentic solutions that use Azure back-end services with a wide choice of models, custom storage and search services, and integration with Azure AI services, professional developers should use Foundry Agent Service.
    • Use the Microsoft Agent Framework to develop single, standalone agents or build multi-agent solutions that use different orchestration patterns.


  • What are AI agents?

    AI agents are smart software services that combine generative AI models with contextual data and the ability to automate tasks based on user input and environmental factors that they perceive.

    For example, an organization might build an AI agent to help employees manage expense claims. The agent might use a generative model combined with corporate expenses policy documentation to answer employee questions about what expenses can be claimed and what limits apply. Additionally, the agent could use a programmatic function to automatically submit expense claims for regularly repeated expenses (such as a monthly cellphone bill) or intelligently route expenses to the appropriate approver based on claim amounts.

    An example of the expenses agent scenario is shown in the following diagram.

    Diagram of an expenses agent answering questions and submitting claims.

    The diagram shows the following process:

    1. A user asks the expense agent a question about expenses that can be claimed.
    2. The expenses agent accepts the question as a prompt.
    3. The agent uses a knowledge store containing expenses policy information to ground the prompt.
    4. The grounded prompt is submitted to the agent’s language model to generate a response.
    5. The agent generates an expense claim on behalf of the user and submits it to be processed and generate a check payment.
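    The five steps above can be sketched as a single request-handling function. All names here are illustrative stand-ins; a real agent would call a deployed model and a claims API rather than these toy lambdas.

    ```python
    def handle_request(question, knowledge_store, model, submit_claim):
        """Minimal sketch of the expense-agent flow: ground, generate, act."""
        context = knowledge_store.get("expenses-policy", "")      # step 3: ground
        grounded = f"Policy:\n{context}\n\nQuestion: {question}"
        answer = model(grounded)                                  # step 4: generate
        claim = submit_claim(question) if "submit" in question.lower() else None  # step 5
        return answer, claim

    # Toy stand-ins for the knowledge store, model, and claims system.
    store = {"expenses-policy": "Cellphone bills up to $60/month are claimable."}
    fake_model = lambda prompt: "Yes, that's claimable."
    fake_submit = lambda q: "claim-001"
    answer, claim = handle_request("Please submit my $45 phone bill",
                                   store, fake_model, fake_submit)
    print(claim)  # claim-001
    ```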

    In more complex scenarios, organizations can develop multi-agent solutions in which multiple agents coordinate work between them. For example, a travel booking agent could book flights and hotels for employees and automatically submit expense claims with appropriate receipts to the expenses agent, as shown in this diagram:

    Diagram of a travel booking agent working with an expenses agent.

    The diagram shows the following process:

    1. A user provides details of an upcoming trip to a travel booking agent.
    2. The travel booking agent automates the booking of flight tickets and hotel reservations.
    3. The travel booking agent initiates an expense claim for the travel costs through the expense agent.
    4. The expense agent submits the expense claim for processing.
