Blog

  • Prepare for guided project

    You’ll be using the .NET Editor as your code development environment. You’ll be writing code that uses string and numeric variables, performs calculations, then formats and displays the results to a console.

    Project overview

    You’re developing a Student GPA Calculator that will help calculate students’ overall Grade Point Average. The parameters for your application are:

    • You’re given the student’s name and class information.
    • Each class has a name, the student’s grade, and the number of credit hours for that class.
    • Your application needs to perform basic math operations to calculate the GPA for the given student.
    • Your application needs to output/display the student’s name, class information, and GPA.

    To calculate the GPA:

    • Multiply the grade value for a course by the number of credit hours for that course.
    • Do this for each course, then add these results together.
    • Divide the resulting sum by the total number of credit hours.

    You’re provided with the following sample of a student’s course information and GPA:

    Output

    Student: Sophia Johnson
    
    Course          Grade   Credit Hours	
    English 101         4       3
    Algebra 101         3       3
    Biology 101         3       4
    Computer Science I  3       4
    Psychology 101      4       3
    
    Final GPA:          3.35
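
    One possible implementation of the calculation described above, as a minimal sketch. The grade values are assumptions read off the sample table (the setup code in the next section provides only course names and credit hours):

```csharp
// Sketch of the GPA calculation; grade values are taken from the sample table.
string studentName = "Sophia Johnson";

int[] grades  = { 4, 3, 3, 3, 4 };   // English, Algebra, Biology, CS I, Psychology
int[] credits = { 3, 3, 4, 4, 3 };

int totalPoints  = 0;   // sum of grade value × credit hours
int totalCredits = 0;

for (int i = 0; i < grades.Length; i++)
{
    totalPoints  += grades[i] * credits[i];
    totalCredits += credits[i];
}

decimal gpa = (decimal)totalPoints / totalCredits;

Console.WriteLine($"Student: {studentName}");
Console.WriteLine($"Final GPA: {gpa:F2}");   // 57 / 17 rounds to 3.35
```

    Casting to `decimal` before dividing avoids integer division, which would truncate the GPA to 3.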
    

    Setup

    Use the following steps to prepare for the Guided project exercises:

    1. Open the .NET Editor coding environment.
    2. Copy and paste the following code into the .NET Editor. These values represent the student’s name and course details.

       C#

       string studentName = "Sophia Johnson";
       string course1Name = "English 101";
       string course2Name = "Algebra 101";
       string course3Name = "Biology 101";
       string course4Name = "Computer Science I";
       string course5Name = "Psychology 101";
       int course1Credit = 3;
       int course2Credit = 3;
       int course3Credit = 4;
       int course4Credit = 4;
       int course5Credit = 3;

    Now you’re ready to begin the Guided project exercises. Good luck!


  • Complete the challenge to convert Fahrenheit to Celsius

    In this challenge, you’ll write code that will use a formula to convert a temperature from degrees Fahrenheit to Celsius. You’ll print the result in a formatted message to the user.

    Challenge: Calculate Celsius given the current temperature in Fahrenheit

    1. Select all of the code in the .NET Editor, and press Delete or Backspace to delete it.
    2. Enter the following code in the .NET Editor:

       C#

       int fahrenheit = 94;
    3. To convert temperatures in degrees Fahrenheit to Celsius, first subtract 32, then multiply by five ninths (5 / 9).
    4. Display the result of the temperature conversion in a formatted message. Combine the variables with literal strings passed into a series of Console.WriteLine() commands to form the complete message.
    5. When you’re finished, the message should resemble the following output:

       Output

       The temperature is 34.444444444444444444444444447 Celsius.
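
    A sketch of how the finished challenge might look. It uses decimal math, because the integer expression 5 / 9 in C# evaluates to 0:

```csharp
int fahrenheit = 94;

// Subtract 32, then multiply by five ninths. The decimal literals (5m / 9m)
// avoid integer division, which would truncate 5 / 9 to 0.
decimal celsius = (fahrenheit - 32) * (5m / 9m);

Console.WriteLine("The temperature is " + celsius + " Celsius.");
```

    The long string of digits in the sample output comes from `decimal` carrying roughly 28 significant digits through the division.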


  • Monitor and optimize over time

    What was important yesterday might not be important today. As you learn more from running workloads in production, expect changes. Your setup, business needs, workflows, and even your team might shift. You might need to tweak how you build and release software. External factors might also change, like the cloud platform, its resources, and your agreements.

    Keep an eye on how changes affect your costs. Check in regularly to see if your return on investment (ROI) is trending in the right direction, and adjust your goals or requirements if needed.

    Example scenario

    Contoso Air provides a baggage tracking solution for airlines. The workload is hosted in Azure and runs on Azure Kubernetes Service (AKS) with Azure Cosmos DB for its database and uses Azure Event Hubs for messaging. The workload is deployed in both the West US and East US regions.

    Track and monitor your spending

    Use a cost tracking system to regularly review how much you’re spending on resources, data, and support. If you have underused resources, think about shutting them down, replacing them, or reworking them to be more efficient.

    Understanding where your money goes is the first step to controlling it. By tagging resources, classifying expenses, and setting up alerts, you can track spending across teams, services, and environments.

    This visibility helps you catch unexpected charges early, support showback or chargeback models, and make smarter decisions about where to cut or invest.

    Contoso’s challenge

    • The workload team has always stayed under budget, so reducing costs hasn’t been a focus.
    • But next year, they’re planning to boost the workload’s reliability, which means higher Azure costs. That could push them over budget, so they’re thinking about asking for a bigger budget to cover it.

    Applying the approach and outcomes

    • Before the team asks for more budget, they decide to take a closer look at their current Azure and support costs to see if there’s any room to save. They dig into the cost breakdowns by resource, resource group, and tags by using their cost tracking system. They find unexpected spending.
    • The team finds some virtual machines (VMs) still running that were used for an old build system that they don’t need anymore. There’s also old data sitting in Azure Storage that could be moved to a cheaper tier. On top of that, they’re paying for a support contract that includes consulting hours, but they haven’t been using them.
    • The team optimizes their Azure costs by deleting the unused VMs and moving the old data to Archive storage. They begin working more closely with their cloud provider to make good use of their consulting services.
    • They add a recurring task to their backlog to regularly review and optimize their workload costs going forward.

    Tune your workload continuously

    Continuously adjust architecture design decisions, resources, code, and workflows based on ROI data.

    Cloud environments evolve, and so should your architecture. Review your metrics, performance, billing, and feature usage regularly. You might find small tweaks that save money and make things run smoother. Even small adjustments can add up to big savings over time.

    Contoso’s challenge

    • Since the team has stayed under budget historically, they haven’t looked at other ways to do things. Instead, most of their planning focused on building new features.
    • But after finding waste during their first cost review, they decided to take a closer look at the rest of their setup to find more ways to optimize.

    Applying the approach and outcomes

    • The team realizes that they’re putting too many resources into low-priority flows. They can scale back on the throughput without disrupting performance. Instead of over-preparing for peak times, they’ll switch to a queue-based load leveling system.
    • They also notice that their compute platform now includes a new feature in their chosen SKU that replaces some of the authentication code. Using this feature means less code to maintain and test.

    Optimize your cloud environment continuously

    Make it a habit to regularly check for unused resources or old data in your cloud setup and remove them. Over time, these components that were once useful can stick around and quietly accrue costs. Keep your environment optimized to help keep things efficient and save money.

    Shutting down resources that you’re not using and deleting data that you don’t need frees up budget for more important work.

    Contoso’s challenge

    • Over the past year, the team created several temporary environments for testing new features and running performance experiments. Many of these environments were never cleaned up.
    • They discovered multiple Event Hubs namespaces and Azure Cosmos DB containers that haven’t received any traffic in months but are still incurring storage and throughput costs.
    • Old baggage tracking data from previous airline partners is still stored in hot-access tiers, even though it’s no longer needed for operations or compliance.
    • The team lacks a regular process for identifying and removing unused resources, so clutter continues to build up unnoticed.

    Applying the approach and outcomes

    • The team sets up a monthly cleanup routine that includes tagging resources with expiration dates and reviewing usage metrics to flag idle services.
    • They decommission unused AKS node pools, delete inactive Event Hubs, and consolidate Azure Cosmos DB containers where possible.
    • For historical baggage data, they implement lifecycle policies to automatically archive or delete data based on age and access patterns.
    • They also review their resource SKUs and downgrade services that are over-provisioned.
    • These actions help them reduce unnecessary spend, improve operational efficiency, and keep their cloud environment clean and manageable.
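
    As an illustration of the lifecycle approach, an Azure Storage lifecycle management rule along these lines could tier aging baggage data to Archive and later delete it. The container prefix and day thresholds here are hypothetical:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "archive-old-baggage-data",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "baggage-history/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```

    A policy like this runs automatically, so old data stops accruing hot-tier costs without anyone having to remember to move it.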


  • Design for rate optimization

    You don’t always need to redesign or renegotiate to save money. Sometimes, you can make better use of what you already have. If you don’t optimize existing resources and operations, you could be wasting money without seeing any real benefit.

    Example scenario

    Contoso’s business intelligence (BI) team hosts a suite of GraphQL APIs so that different departments can access data without touching the databases directly. Over time, they’ve added versioning and now run everything through a single Azure API Management gateway on the Consumption tier.

    Three Azure Kubernetes Service (AKS) clusters are behind the API Management instances:

    • One runs a Windows node pool for APIs written in .NET 4.5.
    • One runs a Linux node pool for APIs written in Java Spring.
    • One runs a Linux node pool for APIs written in .NET Core. They inherited this cluster from a prior team.

    These clusters are only used for the APIs and are now all managed by the BI team. It’s not the cleanest setup, but it works, so they’ve left it alone.

    The BI team is a cost center in the business, so they’re looking for ways to optimize their rates to drive down operating costs.

    Combine infrastructure where it makes sense

    Try to run things in the same place, whether it’s resources, workloads, or teams. Use services that help you pack more into less space. Consider any trade-offs, especially around security.

    When you pack more utility into fewer systems, you use less hardware and spend less on managing it all. That means lower costs and less complexity.

    Contoso’s challenge

    • Contoso’s team followed the Microsoft AKS baseline architecture. They run three clusters that each have three system nodes, so nine nodes total.
    • They apply patches and updates to all clusters three times every month.

    Applying the approach and outcomes

    • After the team does testing, they decide to combine all the APIs into a single cluster with three user node pools, achieving the same performance and OS characteristics as their original clusters.
    • They also consolidate to four nodes for their system node pool, saving the costs of five virtual machines.
    • Now they only have one cluster to patch and update, which saves even more time.
    • Next, they’re looking at merging two Linux node pools into one to make things even simpler.

    Take advantage of reservations and other infrastructure discounts

    Optimize by committing and prepurchasing to take advantage of discounts offered on resource types that aren’t expected to change over time and for which costs and utilization are predictable. Also, work with your licensing team to influence future purchase agreement programs and renewals.

    Microsoft offers reduced rates for predictable and long-term commitment to specific resources and resource categories. Resources cost less during the usage period and can be amortized over the period.

    By keeping your licensing team aware of the current and predicted investment by resource, you can help them rightsize commitments when your organization signs the agreement. In some cases, these projections and commitments could influence your organization’s price sheet, which benefits your workload’s cost and also other teams that use the same technology.

    Contoso’s challenge

    • Now that the team has consolidated onto one cluster, removing some of the excess compute and operational burden they previously absorbed, they’re interested in finding additional measures to lower the cost of the cluster.
    • Because the BI team is happy with the AKS platform, they plan on continuing to use it for the foreseeable future, and likely will even grow its usage.

    Applying the approach and outcomes

    • Because AKS is built on top of Azure Virtual Machine Scale Sets, the team looks into Azure reservations. They know the expected SKUs and scale units they need for the user nodes.
    • They purchase a three-year reservation that covers the system node pool and the minimum instance count of nodes per user node pool.
    • With this purchase, the team knows they’re getting the best deal on their compute needs while allowing the workload to grow over time.

    Use fixed-price billing when practical

    Switch to fixed-price billing instead of consumption-based billing for a resource when its utilization is high and predictable and a comparable SKU or billing option is available.

    When utilization is high and predictable, the fixed-price model usually costs less and often supports more features. Using it could increase your ROI.

    Contoso’s challenge

    • The API Management instances are all deployed as Consumption tier SKUs currently. After evaluating the APIs’ usage patterns, they understand that the APIs are used globally and sometimes quite heavily. The team decides to analyze the cost differences between the current billing model and a fixed-price model.

    Applying the approach and outcomes

    • After performing the cost analysis, the team finds that migrating from Consumption to Standard tier will be a bit less expensive overall given the current usage patterns. As the services grow over the next year, the cost differences will likely become more pronounced. Even though the fixed-pricing model doesn’t reflect the elasticity characteristics of the requests, sometimes prepurchased billing models are the right choice.
    • As an added bonus, using the Standard tier allows the use of a private endpoint for inbound connections, which the team has been eager to implement for the workload.
    • In this case, switching SKUs made sense for both utilization purposes and for the added benefit of the additional network segmentation that’s possible with a private endpoint implementation.


  • Design for usage optimization

    Different services come with different features and price points. After you pick a plan, don’t let those features go to waste. Find ways to use them fully and get your money’s worth. Also, keep an eye on your billing models. It’s smart to check if there’s a better billing model that fits how you’re actually using the service.

    Example scenario

    Contoso University hosts a commercial off-the-shelf (COTS) system that helps faculty manage courses and lets students register. It’s connected to a cloud-based education management system that they plan to fully switch to in a few years. For now, they want to optimize costs on the custom integration parts.

    The technology solution of the COTS offering is generally treated like a black box, except for its database, which runs on Azure Database for MySQL. The custom integration is an Azure durable function that runs fanned out on a Standard Azure App Service plan that used to host the university’s website, but doesn’t anymore. The durable function is a Python app that uses Azure Storage. It syncs data every night from the MySQL database to the cloud-based API.

    Use the full value of your resources

    Buy only what you need, and use everything that you’re paying for.

    Some resource SKUs come with built-in features for performance, security, or reliability. If you’re paying for them, make sure you’re using them. And if you don’t need those features, pick a simpler SKU to save money.

    Contoso’s challenge

    • The durable function runs on a Standard App Service plan that was originally sized for a public website, but that website has since been retired.
    • The team never re-evaluated the SKU, so they’re still paying for features and capacity that they don’t use.
    • They’re unsure which features are actually needed for the integration workload.

    Applying the approach and outcomes

    • The team reviews the current App Service plan and concludes that the integration doesn’t require the same level of scalability or performance and can be supported by a lower-tier configuration.
    • They move the function to a lower-tier plan that still supports durable functions but costs much less.
    • They also check their MySQL SKU and confirm that it’s rightsized for the current workload.
    • These changes help them reduce costs without affecting performance or reliability.

    Optimize your high availability design

    As part of your recovery plan, prioritize active-active or active-only deployments over active-passive models if you’ve already paid for the resources.

    If your design defaults to using active-passive models, you might have idle resources that could otherwise be used. Converting to active-active might enable you to meet your load leveling and scale bursting requirements without overspending. If you can meet your recovery targets with an active-only model, the costs of those resources can be removed completely.

    Contoso’s challenge

    • The COTS application uses Azure Database for MySQL Flexible Server configured for same-zone high availability, which provides a standby server in the same availability zone as the primary server. They also have enabled automatic backups.
    • The workload’s recovery point objective (RPO) is relatively long at 12 hours, and the recovery time objective (RTO) is three hours during the school day.
    • Based on previous recovery tests, the team knows that they can meet their RPO and RTO targets through automatic failover to the standby server. They have also tested recovering the database from a backup and they can meet the targets in this scenario.

    Applying the approach and outcomes

    • The workload team reevaluates the benefit of the high availability design against its cost, which is twice that of a single instance.
    • The team tests building a new instance and recovering the database from backup, and they’re satisfied that they can still meet their recovery targets, so they decide to eliminate the standby instance.
    • The team updates the disaster recovery plan to reflect the new recovery strategy and realizes the cost savings through the new configuration.


  • Design with a cost-efficiency mindset

    Every architectural decision affects your budget, such as whether you build or buy, what tools you use, or how you license and train. It’s important to weigh those options and make trade-offs that still meet your app’s needs without overspending.

    Example scenario

    Contoso Manufacturing runs a custom-built warehouse management system (WMS) that handles its four warehouses across South America. They want to update and move the WMS to the cloud. They’re deciding between a lift-and-shift move of the current solution or a green field build with modern cloud tools. Leadership wants to keep costs under control, so the team needs a plan that maintains cost efficiency.

    The WMS solution is a .NET application that runs on Internet Information Services (IIS) and uses SQL Server for its databases.

    Understand the full cost of your design

    Measure the total cost incurred by technology and automation choices, taking into account the impact on return on investment (ROI). The design must work within the acceptable boundaries for all functional and nonfunctional requirements. The design must also be flexible to accommodate predicted evolution. Factor in the cost of acquisition, training, and change management.

    Implementing a balanced approach that takes ROI into account prevents over-engineering, which might increase costs.

    Contoso’s challenge

    • The engineering team at Contoso is excited to move their warehouse system to the cloud, just like other teams have done.
    • They know the current app has some technical debt, so they’re planning to rewrite much of the application code and switch to newer cloud-native tools.
    • The engineering team wants to redesign everything into microservices and run it on Azure Kubernetes Service (AKS), which is a new and exciting platform for them.

    Applying the approach and outcomes

    • The team is excited about doing a significant redesign during the cloud move, but they know that they need to maintain the workload’s ROI. So they must stick with tools that they already know and avoid major rewrites that require extra engineering team training.
    • The workload team takes a practical approach to designing the system. They want it to be cost-effective, meet expectations, and avoid overcomplicating things. To keep the ROI in check and make the migration smooth, they decide to go with an equivalent solution in the cloud, such as Azure App Service.
    • They establish a cost baseline that accounts for infrastructure, licensing, and operational costs, as well as less obvious factors like training for new platforms, rewriting legacy code, and managing change across teams. They gain a clearer picture of what’s feasible within their budget, which confirms their decision of App Service as the more familiar, lower-risk path.
    • During the migration, the team plans to clean up some of the technical debt that makes sense to tackle now. That way, after everything’s running on Azure, they’ll be in a better spot to keep improving the platform while still keeping the ROI in mind when making those choices.

    Refine the design

    Fine-tune the design by prioritizing services that can reduce the overall cost, don’t need additional investment, or don’t have a significant impact on functionality. Prioritization should account for the business model and technology choices that bring high ROI.

    You can explore cheaper options that might enable resource flexibility or dynamic scaling, or you might justify the use of existing investments. The prioritization parameters might factor in costs that are required for critical workloads, runtime, and operations, and other costs that might help the team work more efficiently.

    Contoso’s challenge

    • The existing workload is hosted on a hyperconverged infrastructure (HCI) appliance, and the team’s cost center is charged back for compute, network, and storage costs.
    • The workload has deployed the preproduction and production environments on Windows virtual machines.
    • GitHub Actions jobs run on self-hosted runners.

    Applying the approach and outcomes

    • After evaluating several cloud-native options, the team decides that moving the web components to App Service would provide Windows IIS application compatibility without significant changes and wouldn’t require significant training.
    • The team decides to continue using GitHub Actions with self-hosted runners, but they’ll migrate to a virtual machine scale set with the ability to scale to zero nodes when they aren’t being used.

    Design your architecture to support cost guardrails

    Set up cost limits in your architecture to keep spending within a safe range, and ensure that your cloud environment costs are kept under those limits.

    Enforcing limits helps avoid surprise charges and ensures that you only use what you actually budget for.

    Contoso’s challenge

    • The current system doesn’t have cost guardrails, but since it rarely changes, no one’s pushed to add them.
    • The HCI environment owners have set a resource cap, so the workload can’t use more compute or storage than allowed.
    • The team’s worried that moving to the cloud could lead to unexpected costs, and they’re not sure how to avoid that.

    Applying the approach and outcomes

    • The team learns how to use Microsoft Cost Management solutions.
    • They plan to set scale limits for the App Service plans.
    • They plan to set up a deny policy to block certain expensive virtual machine SKUs from being used.
    • They plan to add automation to save on storage. Older or less-used data will automatically move to cheaper storage tiers like cold or archive. This kind of automation wasn’t possible in their old HCI environment.


  • Develop cost-management discipline

    Help your team get comfortable thinking about budgets, spending, and tracking costs. Cost optimization happens at different levels of the organization. So it’s important to understand how your workload fits into the bigger picture and supports company goals and FinOps practices. Having visibility into how resources are organized and how financial policies are applied helps you manage your workload in a consistent, efficient way.

    Example scenario

    Contoso organizes and hosts trade shows. They want to improve how they sell tickets and decide to build a mobile app in-house. The following scenarios walk through how they go from idea to launch, with a focus on making smart cost decisions along the way. The web app is written in .NET, hosted on Azure App Service, and uses Azure SQL Database for its database.

    Develop a cost model

    Before you can track spending properly, you must build a basic cost model.

    A cost model gives you a clearer picture of what things might cost, like infrastructure, support, and setup. It also helps you identify what’s driving those costs early on to estimate how changes in usage could affect your budget and revenue over time.

    Contoso’s challenge

    • Contoso wants to build a mobile app to handle ticket sales for their trade shows, but they’re not sure what it’ll cost, especially because demand can spike.
    • They plan to start small and grow, but without a cost model, it’s tough to get funding or plan ahead.

    Applying the approach and outcomes

    • The team maps out different cost scenarios based on the resources they’d need and how usage might grow. They explore a few setups that could handle different traffic levels to get a sense of what their Azure costs might be now and later on.
    • They combine rough estimates for infrastructure, team costs, and expected revenue to build a starting model.
    • This model helps them predict costs over time as usage increases and gives them a tool that they can keep refining as they make more decisions.
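
    A starting cost model like this can be as simple as a few lines of code that combine the main cost drivers. A rough sketch, where every rate, count, and size is a hypothetical placeholder rather than a real Azure price:

```csharp
// Illustrative cost model only; every number here is a made-up placeholder.
decimal appServiceHourlyRate = 0.10m;   // per App Service instance, per hour
decimal sqlDatabaseMonthly   = 150.00m; // flat monthly estimate
decimal storagePerGbMonthly  = 0.02m;
const int hoursPerMonth = 730;

decimal MonthlyCost(int instances, decimal storageGb) =>
    instances * appServiceHourlyRate * hoursPerMonth
    + sqlDatabaseMonthly
    + storageGb * storagePerGbMonthly;

// Compare a typical month against a peak month (for example, a large trade show).
Console.WriteLine($"Typical month: {MonthlyCost(2, 50)}");
Console.WriteLine($"Peak month:    {MonthlyCost(6, 80)}");
```

    Refining the model then means swapping the placeholders for actual price-sheet figures and adding cost drivers as they’re discovered.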

    Set a realistic budget

    Make sure your budget covers everything that you have to include, like key features, support, training, and room to grow.

    After you set a budget, you can set spending limits and get alerts if you’re about to go over budget for a specific resource or the whole project.

    Contoso’s challenge

    • In this scenario, the app is in the design phase and Contoso picked out the basic resources that they need.
    • Contoso needs to figure out their budget for the mobile ticketing workload.
    • Without a solid budget, they risk running out of money, wasting it on things that they don’t need yet, delaying the project timeline, or even putting the entire workload at risk.

    Applying the approach and outcomes

    • As the team refines their cost model, they come up with a confident budget that they can share with stakeholders.
    • This budget gives their architect a clear financial target to design around. As they learn more about the implementation and the operations it requires, the workload team expects to renegotiate the budget a bit, so they leave a small buffer.
    • The goal is to stay flexible but stick to the budget as closely as possible.

    Encourage upstream communication

    Encourage upstream communication from architects to application owners.

    When your organization makes budget adjustments, real-world learnings from production feedback are just as important as the numbers.

    Contoso’s challenge

    • Contoso’s mobile ticketing app is live and working well.
    • After reviewing how it’s being used, the team realizes that it could be more cost-efficient.
    • Since project management and finance seem happy with the results, they’re unsure if it’s worth bringing up.

    Applying the approach and outcomes

    • The team is encouraged to treat the budget like it’s their own and speak up to product management when they see a better way to meet the app’s needs without sacrificing security, reliability, or performance.
    • The workload team shares their ideas with stakeholders, and they talk about the pros and cons of making changes.
    • The changes are approved, and the savings follow.


  • Safe and responsible AI for the public sector

    In the video, Elizabeth Emanuel, Senior Corporate Counsel for Worldwide Public Sector at Microsoft, explains the importance of meeting legal compliance when public sector organizations adopt generative AI policies. As Elizabeth says: “Public sector organizations have a high bar of legal compliance that must be met to ensure the use of AI reflects the values and priorities required for public trust.”

    Public sector organizations should create and adhere to conscientious AI strategies and integrate these approaches into guiding principles, operational practices, tools, and governance.

    This process might involve:

    • Developing new or adopting existing policies and guidelines
    • Providing training for staff to ensure they’re aware of the considerations associated with AI
    • Ensuring that data used to train AI models is representative and assessed for bias
    • Establishing governance bodies to subject sensitive use cases to particularly high scrutiny, and using tooling and telemetry to ensure AI systems function as intended without causing unintended harm
    • Ensuring accountability for both the development and operation of AI capabilities and AI-enabled systems

    One tool that helps public sector organizations implement AI responsibly is the Azure AI Content Safety Studio. Content Safety Studio uses AI to create safer online spaces by classifying harmful content into four categories:

    • Hate
    • Sexual
    • Self-harm
    • Violence

    The AI models detect these types of content and assign a severity score. Based on the severity score, content is surfaced and actions are assigned.

    Public sector organizations using Content Safety Studio can prioritize what content moderators review with end goals of:

    • Managing and analyzing user-generated content
    • Ensuring compliance with guidelines
    • Maintaining a safe online environment

    Responsible AI should be ingrained as standard practice—not an afterthought—for public sector organizations. By doing so, they can use AI effectively to enhance services and ultimately help society.


  • Accelerate discovery with generative AI

    Generative AI can benefit public sector organizations by helping them quickly understand and simulate complex situations and processes, especially in the following areas:

    • Helping public sector agencies overcome cybersecurity challenges.
    • Performing predictive modeling, forecasting, and simulation.
    • Advancing scientific discoveries.

    Let’s explore how public sector agencies can use generative AI to accelerate the discovery of cyber threats and improve their security posture.

    Improve security posture

    Public sector organizations are frequent targets for cyberattacks because of the amount of sensitive and classified information many of them handle daily. A strong cybersecurity posture is critical to national security. Generative AI can help agencies overcome some of the many security challenges they face.

    In the following video, Sara Nagy, Senior Director of Customer Engagement at Microsoft, explains four ways in which public sector organizations can use generative AI to help improve their security posture.

    https://learn-video.azurefd.net/vod/player?id=2075b5ae-5748-4e23-b5ba-5576ad643481&locale=en-us&embedUrl=%2Ftraining%2Fmodules%2Fenhance-public-sector-services-generative-ai%2Faccelerate-discovery-generative-ai

    When public sector organizations adopt this comprehensive approach to threat management, they can swiftly adapt to evolving security challenges. They also empower security professionals of all expertise levels with valuable insights that can help them perform their roles better.


  • Augment cognition with generative AI

    Generative AI can benefit public sector organizations by increasing comprehension and learning and augmenting cognition, especially in the following areas:

    • Helping fraud investigators find evidence by extracting insights from data
    • Performing multimodal image analysis by gathering insights quickly from open-source intelligence with AI processing
    • Creating knowledge hubs to organize repositories, surface insights, and empower teams to find information more efficiently

    Augmented cognition through AI provides a digital sidekick that helps employees think better and handle complex tasks easily.

    Fraud investigations

    Fraud investigators already use sophisticated tools and lead in AI adoption. However, they face considerable challenges because today’s models have limitations, such as the inability to consider unstructured, non-numerical data and a lack of adaptability due to the need for frequent manual updates.

    In the following video, Sara Nagy, Senior Director of Customer Engagement at Microsoft, explains how generative AI offers several unique capabilities that complement traditional solutions for public sector fraud investigators.

    https://learn-video.azurefd.net/vod/player?id=14861f33-76ee-4b11-83e5-0e3c192aa4d7&locale=en-us&embedUrl=%2Ftraining%2Fmodules%2Fenhance-public-sector-services-generative-ai%2Faugment-cognition-generative-ai

    Multimodal image analysis

    Generative AI can help public sector organizations unlock value through analysis of images from sources like security camera footage, optical sensors, handheld devices, and other image-capturing technology.

    Understanding complex visual data can be difficult due to the following potential situations:

    • Images from a single modality might not provide sufficient information for accurate analysis
    • Images acquired under different conditions (for example, lighting, resolution, or viewpoint) can introduce variability
    • A single imaging modality might not be able to capture certain features or aspects of interest
    • Images in different formats or from different modalities can require unique and costly processing methods
    To learn more, explore these resources:

    • Explore the tools needed to support this scenario: Multimodal image analysis with Azure OpenAI Service (PDF)
    • Reference architecture: Multimodal image analysis (PDF)

    Create knowledge hubs

    Public sector employees need to be able to find relevant internal resources and information easily. When they can’t find the information they’re looking for, it can be frustrating. This frustration in turn places an added burden on internal staff, like HR, to respond to requests that could be self-serviced.

    Other challenges employees experience might include:

    • Inconsistent document management and organization practices
    • Lack of integration between different information repositories or the use of legacy technologies
    • Limited training for employees on available resources and processes
    • Discrepancies in document formats, naming conventions, and indexing

    Generative AI can help organize knowledge hubs by letting users search for information using natural language and then providing quick responses and access to data. This capability helps increase satisfaction in the workplace and frees up employees’ time with a repository of easily accessible information.
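The retrieval step behind such a knowledge hub can be sketched in miniature. A production system would use embeddings and a generative model to answer in natural language; this minimal sketch uses simple word-overlap scoring, and the document titles and query are illustrative assumptions.

```python
# Minimal sketch of ranking knowledge-hub documents against a
# natural-language query by word overlap. Illustrative only: a real
# hub would use embeddings and a generative model for answers.

def search(query: str, documents: dict[str, str], top_n: int = 3):
    """Return titles of the documents sharing the most words with the query."""
    query_words = set(query.lower().split())
    scored = []
    for title, body in documents.items():
        overlap = len(query_words & set(body.lower().split()))
        if overlap:
            scored.append((overlap, title))
    scored.sort(reverse=True)
    return [title for _, title in scored[:top_n]]

docs = {
    "Leave policy": "how to request annual leave and sick leave",
    "Expense guide": "submitting travel expenses for reimbursement",
}
results = search("how do I request leave", docs)  # -> ["Leave policy"]
```

Even this crude scoring illustrates the self-service benefit described above: an employee phrases a question naturally and is routed to the right document without filing a request with internal staff.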
