
  • Establish AI-related roles and responsibilities

    Enabling AI in your organization is a collective responsibility

    Everyone has a role to play in AI transformation, not just IT. It’s important to empower people from all functions across your company to actively contribute ideas about AI applications, and to foster collaboration between business and technical teams during planning, design, and implementation. After deployment, teams across the technical and operational sides of the business need to stay involved in maintaining AI solutions over time:

    • Measuring business performance and ROI from the AI solution.
    • Monitoring model performance and accuracy.
    • Acting on insights gained from an AI solution.
    • Addressing issues that arise and deciding how to improve the solution over time.
    • Collecting and evaluating feedback from AI users (whether they’re customers or employees).
    Diagram that shows that AI requires multidisciplinary skills: domain understanding, IT skills, and AI skills.

    The senior executive leadership team is ultimately responsible for owning the overall AI strategy and investment decisions, creating an AI-ready culture, managing change, and setting responsible AI policies.

    As for the other leaders across an organization, there’s no single model to follow, but different roles can play a part. Your organization needs to determine a model that’s suited to your strategy and objectives, the teams within your business, and your AI maturity.

    Line of business leader


    This person is a business executive responsible for operations of a particular function, line of business, or process within an organization.

    • Source ideas from all employees: People from every department and level should feel free to contribute ideas, ask questions, and make suggestions related to AI. We’ve found that ideas for our most impactful applications of AI have come from employees within business functions, not from outside or above.
    • Identify new business models: The real value of AI lies in business transformation: driving new business models, enabling innovative services, creating new revenue streams, and more.
    • Create optional communities for exchanging ideas: They provide opportunities for IT and business roles to connect on an ongoing basis. You can implement this measure virtually through tools such as Yammer, or in-person at networking events or lunch-and-learn sessions.
    • Train business experts to become Agile Product Owners: A Product Owner is a member of the Agile team responsible for defining the features of the application and streamlining execution. Including this role as part or all of a business expert’s responsibilities allows them to dedicate time and effort to AI initiatives.

    Chief Digital Officer


    The Chief Digital Officer (CDO) is a change agent who oversees the transformation of traditional operations using digital processes. Their goal is to generate new business opportunities, revenue streams, and customer services.

    • Cultivate a culture of data sharing across the company: Most organizations generate, store, and use data in a siloed manner. While each department may have a good view of their own data, they may lack other information that could be relevant to their operations. Sharing data is key to efficiently using AI.
    • Create your AI manifesto: This is the ‘north star’ that clearly outlines the organization’s vision for AI and digital transformation more broadly. Its goal isn’t only to solidify the company’s strategy, but to inspire everyone across the organization and help them understand what the transformation means for them. The CDO needs to work with other members of the senior executive leadership team to create the document and message it to the company.
    • Identify catalyst projects for quick wins: Kick-start AI transformation by identifying work that can immediately benefit from AI, that is, H1 initiatives. Then, showcase those projects to prove their value and build momentum for the later H2 and H3 initiatives.
    • Roll out an education program on data management best practices: As more people outside of IT become involved in using or creating AI models, it’s important to make sure everyone understands data management best practices. Data needs to be cleaned, consolidated, formatted, and managed so that it’s easily consumable by AI and free of bias.
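
    As a rough illustration of these practices, the sketch below uses pandas to deduplicate, standardize, and drop incomplete records. The column names and cleaning rules are hypothetical and would need to match your own data estate.

```python
# Minimal data-preparation sketch (hypothetical columns and rules).
import pandas as pd

def prepare_customer_data(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean and consolidate a hypothetical customer table so it's easier for AI to consume."""
    df = raw.copy()

    # Standardize formatting so the same customer isn't counted twice.
    df["email"] = df["email"].str.strip().str.lower()
    df["country"] = df["country"].str.strip().str.title()

    # Remove exact duplicates and records missing the fields a model would need.
    df = df.drop_duplicates(subset=["customer_id"])
    df = df.dropna(subset=["customer_id", "email"])
    return df

# Example usage with a tiny in-memory table.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "email": [" A@Contoso.com", "a@contoso.com ", None, "b@contoso.com"],
    "country": ["spain", "spain", "France", "france"],
})
print(prepare_customer_data(raw))
```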

    Human Resources leader


    A Human Resources (HR) director makes fundamental contributions to an organization’s culture and people development. Their wide-ranging tasks include implementing cultural development, creating internal training programs, and hiring according to the needs of the business.

    • Foster a “learning culture”: Consider how to encourage a culture championed by leadership that embraces challenges and acknowledges failure as a valuable part of continual learning and innovation.
    • Design a “digital leadership” strategy: Make a plan to help line of business leaders and the senior executive leadership team build their own AI literacy and lead teams through AI adoption. Keep in mind that any AI strategy should comply with responsible AI principles.
    • Create a hiring plan for new roles such as data scientists: While upskilling your employees is the long-term goal, in the short term you may need to hire some new roles specifically for AI initiatives. New roles that may be required include data scientists, software engineers, and DevOps managers.
    • Create a skills plan for roles impacted by AI: Creating an AI-ready culture requires a sustained commitment from leadership to educate and upskill employees on both the technical and business sides.
      • On the technical side, employees need core skills in building and operationalizing AI applications. It can be helpful to partner with other companies to get your teams up to speed, but AI solutions are never static. They require constant adjustments to exploit new data, new methods, and new opportunities by people who also have an intimate understanding of the business.
      • On the business side, it’s important to train people to adopt new processes when an AI-based system changes their day-to-day workflow. Training includes teaching them how to interpret and act on AI predictions and recommendations using sound human judgment. You should manage that change thoughtfully.

    IT leader


    While the Chief Digital Officer is charged with creating and implementing the overall digital strategy, an IT director oversees the day-to-day technology operations.

    • Launch Agile working initiatives between business and IT: Implementing Agile processes between business and IT teams can help keep those teams aligned around a common goal. Implementation requires a cultural shift to facilitate collaboration and reduce turf wars. Tools such as Microsoft Teams and Skype can support this ongoing collaboration.
    • Create a “dark data” remediation plan: Dark data is unstructured, untagged, and siloed data that organizations fail to analyze. It isn’t classified, protected, or governed. Across industries, companies stand to benefit greatly if they can bring dark data into the light. To do so, they need a plan to remove data siloes, extract structured information from unstructured content, and clean out unnecessary data.
    • Set up agile cross-functional delivery teams and projects: Cross-functional delivery teams are crucial to running successful AI projects. People with intimate knowledge of and control over business goals and processes should be a central part of planning and maintaining AI solutions. Data scientists working in isolation might create models that lack the context, purpose, or value that would make them effective.
    • Scale MLOps across the company: Managing the entire machine learning lifecycle at scale is complicated. Organizations need an approach that brings the agility of DevOps to the machine learning lifecycle. We call this approach MLOps: the practice of collaboration between data scientists, AI engineers, app developers, and other IT teams to manage the end-to-end machine learning lifecycle. Learn more about MLOps in the corresponding units of the module “Leverage AI tools and resources for your business.”
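
    As a simplified illustration of what MLOps automates, the following sketch (not Microsoft’s tooling; the dataset, quality gate, and file-based “registry” are stand-ins) walks through the basic lifecycle steps of training a model, evaluating it against an agreed measure, and registering a versioned artifact.

```python
# Minimal MLOps-style lifecycle sketch: train, evaluate, and register a versioned model.
# Illustrative only; real MLOps platforms add pipelines, tracking, and deployment.
import json
from datetime import datetime, timezone

import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

ACCURACY_GATE = 0.90  # hypothetical quality gate a model must pass before registration

# 1. Train on a public dataset (a stand-in for your own prepared business data).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# 2. Evaluate against the agreed performance measure.
accuracy = accuracy_score(y_test, model.predict(X_test))

# 3. Register (here: save a versioned artifact plus metadata) only if the gate is met.
if accuracy >= ACCURACY_GATE:
    version = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    joblib.dump(model, f"model-{version}.joblib")
    with open(f"model-{version}.json", "w") as f:
        json.dump({"version": version, "test_accuracy": accuracy}, f)
    print(f"Registered model {version} (accuracy {accuracy:.3f})")
else:
    print(f"Model rejected: accuracy {accuracy:.3f} is below the gate of {ACCURACY_GATE}")
```

    In practice, an MLOps platform layers pipelines, experiment tracking, automated deployment, and monitoring on top of these steps.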

    The function of business workers isn’t just to deliver insights to data scientists. AI must help them work better and faster. In the next unit, let’s see how this goal can be achieved with no-code tools that don’t require data science expertise or mediation.


  • Apply a horizon-based framework

    Map initiatives to a prioritization grid

    Start with a matrix with four quadrants that organizes planned initiatives by strategic impact on one axis and business model impact on the other.

    The matrix’s horizontal axis represents a spectrum of “tactical” to “strategic” initiatives. “Tactical” initiatives are confined to a single team or use case. “Strategic” initiatives represent larger investments that might affect the entire organization. The matrix’s vertical axis represents a spectrum of business models. Existing business model initiatives address competitive and disruptive threats, improve operations, or empower employees. New business model initiatives create new value propositions and revenue streams.

    As you map initiatives, it’s helpful to involve the Chief Financial Officer (CFO) office and other stakeholders to ensure you’ve made the right assumptions around the opportunity valuation.

    Let’s try filling in the prioritization grid using the earlier manufacturing example. You might place automation of quality control in the lower left quadrant. It’s an initiative that digitizes and optimizes an existing business model without requiring systemic changes.

    Scenarios that fall below the middle line help the organization survive more than thrive. They might address competitive and disruptive threats, improve operations, or empower employees in the organization. Scenarios above the middle line help companies create new value propositions, revenue streams, or business models.

    Once you’re done classifying your initiatives on the grid, you can map the quadrants to horizons. The quadrant that an initiative falls into determines which horizon it belongs to. The initiatives in quadrants one and four belong to Horizon 2. The initiatives in quadrant three belong to Horizon 1. The initiatives in quadrant two belong to Horizon 3.
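
    For illustration, here’s a minimal sketch of that mapping. The boolean flags and example placements are one possible reading of the grid and this module’s manufacturing examples, not part of the framework itself.

```python
# Sketch of the quadrant-to-horizon mapping described above (illustrative placements).

def horizon(strategic: bool, new_business_model: bool) -> str:
    """Map an initiative's position on the prioritization grid to a horizon.

    strategic          -- False = tactical (single team or use case), True = strategic (org-wide)
    new_business_model -- False = existing business model, True = new business model
    """
    if not strategic and not new_business_model:
        return "Horizon 1"  # bottom left: optimize the existing business
    if strategic and new_business_model:
        return "Horizon 3"  # top right: transformational new business models
    return "Horizon 2"      # the two mixed quadrants: grow market position

# Hypothetical placements based on the manufacturing examples in this module.
initiatives = {
    "Automated quality control": (False, False),
    "Predictive maintenance as a customer service": (True, False),
    "Electronics-as-a-service offering": (True, True),
}
for name, (strategic, new_model) in initiatives.items():
    print(f"{name}: {horizon(strategic, new_model)}")
```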

    Diagram that shows a filled in prioritization grid.

    Prioritize investments based on horizons

    We recommend prioritizing initiatives in phases: start with foundational initiatives in the bottom left of the prioritization framework and move toward transformational initiatives in the top right.

    Having mapped the initiatives to their horizons, you tackle them in order: Horizon 1 initiatives first, then Horizon 2 initiatives, and finally Horizon 3 initiatives.

    We recommend this approach because it’s helpful to grow capabilities and get buy-in before you move to more complex projects. Begin by forming technical teams that can prepare data appropriately and familiarize themselves with AI models. Starting with foundational initiatives also helps establish trust across the business and manage expectations related to AI initiatives. The success and value you’re able to demonstrate in early initiatives pave the way for the more transformational projects.

    Another reason to start at the bottom left of the prioritization framework is that the technology used to support H1 initiatives is typically more accessible than the technology needed for advanced use cases. There are countless out-of-the-box AI models you can apply to common use cases. These applications cost less, and their effect on the business is easier to estimate. As you build maturity with these accessible models, you can experiment with more complex AI initiatives and hone your objectives.

    Diagram that shows the prioritization framework. It moves from incremental to aspirational AI initiatives.

    Horizon 2 and Horizon 3 initiatives require more sophisticated data science capabilities and may produce unintended or unexpected outcomes. These initiatives often require businesses to work with partners to create a custom model that can’t be bought off the shelf. These solutions involve the most resources, time, and risk, but they offer the greatest reward. Achieving a lasting competitive advantage requires solutions that aren’t easily duplicated.

    Define clear value drivers and KPIs for your AI investments

    Once you’ve chosen AI initiatives, it’s important to identify value drivers and key performance indicators (KPIs) for each project. The following framework of value drivers provides a useful way to think about any investment, including AI initiatives.

    Value | Sample category | Definition | AI example
    Financial drivers | Sales | The revenue earned from products or services. | Use targeted marketing to improve accuracy in classifying prospects.
    Financial drivers | Cost management | Process of planning and controlling the budget of a business. In addition to employee time and effort, the costs of AI models include cloud compute, which varies depending on the model’s workload. | Improve prediction models for scheduling equipment maintenance to improve sustainability.
    Financial drivers | Capital productivity | Measure of how physical capital is used in providing goods and services. | Enhance employee productivity and resource allocation with insight into operations.
    Quality measures | Quality | The degree to which products or services meet customer or business expectations. | Improve product quality with automated inspection processes.
    Quality measures | Cycle times | The time it takes to complete a process. | Accelerate product inspections with image recognition.
    Quality measures | Satisfaction (customer and/or employee) | How happy customers are with a company’s products or services (which contributes to market share, competitive differentiation, and more). | Improve customer engagement with personalized discounts and product bundles.

    As you invest in initiatives, it’s important to develop market and financial models to help balance potential risk and return. Consider factors such as the total addressable market (TAM), net present value (NPV), and internal rate of return (IRR). Work with the CFO office and other key stakeholders to ensure the financial models make sense within the context of the business. These metrics can help secure their buy-in and ensure support throughout the process.
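
    As a hedged example of the kind of financial model the CFO office might expect, the sketch below computes NPV and IRR for an initiative with purely illustrative cash flows.

```python
# Simple NPV and IRR helpers for sizing an AI initiative (figures are illustrative only).

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of a series of cash flows, where cash_flows[0] occurs today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float], low: float = -0.99, high: float = 10.0) -> float:
    """Internal rate of return found by bisection (assumes NPV changes sign once on the interval)."""
    for _ in range(200):
        mid = (low + high) / 2
        if npv(low, cash_flows) * npv(mid, cash_flows) <= 0:
            high = mid
        else:
            low = mid
    return (low + high) / 2

# Hypothetical initiative: 500k upfront investment, growing returns over four years.
cash_flows = [-500_000, 120_000, 180_000, 220_000, 260_000]
print(f"NPV at a 10% discount rate: {npv(0.10, cash_flows):,.0f}")
print(f"IRR: {irr(cash_flows):.1%}")
```

    A positive NPV at your discount rate, and an IRR above your hurdle rate, are the usual signals that an initiative is worth pursuing.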

    Moving forward, we advise putting systemic processes in place to manage and evaluate value throughout the project lifecycle. We recommend taking an agile approach that happens in stages—after you invest in an initiative, evaluate the initial results. Then you can determine whether to continue, adjust your approach, or take another path. Continue to evaluate value at major milestones throughout the project.

     Tip

    Take a moment to come up with some potential example investments for each of the three horizons.


  • Evaluate and prioritize AI investments

    Adopting AI throughout an organization requires a serious investment. However, investing in AI projects requires a different perspective than most other investments. If you use AI to improve or automate an existing process, it’s possible to measure return on investment (ROI) in the straightforward, traditional way. But a few characteristics of AI initiatives make it difficult to estimate their costs and benefits.

    First, most AI models require upfront investment before it’s even possible to measure effectiveness. It’s hard to predict the accuracy of the model and its business impact until you’ve prepared data and completed model training and testing. Additionally, it’s hard to predict the amount of long-term maintenance a model needs. Individual models improve over time in ways that are difficult to calculate in advance.

    With AI initiatives, you need to think like a venture capitalist. That means being willing to invest and take risks amid uncertainties. But you don’t have to guess. Instead, you can use a framework to help prioritize AI investments.

    What is Microsoft’s horizon-based framework?

    At Microsoft, we use a horizon-based framework to evaluate and prioritize AI investments. The horizon framework is a way to break development initiatives into phases called “horizons”. AI initiatives fall into three horizons, ranging from improving core business functions to creating brand-new revenue streams. The risk and uncertainty of specific applications depend on a company’s level of AI maturity, size, business objectives, and more.

    Diagram that shows the horizon framework, increasing both risk and uncertainty and disruptive potential from H1 to H3.

    Horizon 1: Running (operate and optimize the core business)

    Not every AI application involves revolutionary changes. In fact, using AI to improve or automate existing processes is becoming essential to remaining competitive. Horizon 1 (H1) represents AI initiatives that optimize core business functions.

    For example, perhaps you manufacture electronic components. While you might manually inspect quality for 100 parts per hour, an AI model with image recognition capabilities could inspect 1,000 parts per hour.

    Horizon 2: Growing (improve market position)

    Horizon 2 (H2) initiatives take advantage of emerging opportunities. These initiatives might create new services or new customer experiences.

    For example, a manufacturer of electronics might use IoT to collect operational data and AI to suggest optimal times for maintenance. These initiatives facilitate a brand-new customer experience and help the manufacturer differentiate from competitors.

    Horizon 3: Transforming (change market position)

    Horizon 3 (H3) involves disruptive and innovative new business models. These are revolutionary applications that might cross industry boundaries or even create new customer needs.

    For example, the same electronics manufacturer could sell “electronics-as-a-service,” using AI models to predict which electronic devices work best for a customer’s current system and needs. Ultimately, the company is selling a personalized service rather than a single product, creating new revenue streams and opportunities.

    Next, let’s take a look at how to use a prioritization grid to apply a horizon-based framework.


  • Discover the path to AI success

    Now that you’ve learned the basics of an AI-centric organization, it’s important to understand that AI adoption is a journey. Through its collaboration and discussions with business leaders, Microsoft has gained insights into how organizations can achieve AI success.

     Note

    For this purpose, Microsoft has developed a leader’s guide to build a foundation for AI success.

    This model is based on five pillars that drive organizations to AI success:

    • Business strategy.
    • Technology strategy.
    • AI strategy and experience.
    • Organization and culture.
    • AI governance.

    In the following video, Jessica Hawk, Corporate Vice President of Azure Data, AI, and Digital Applications and Innovation Product Marketing, explains this model and its five pillars in detail.

    https://learn-video.azurefd.net/vod/player?id=e9c94a64-1dd4-4bc7-ad37-2c892d5fcded&locale=en-us&embedUrl=%2Ftraining%2Fmodules%2Fcreate-business-value%2F4-discover-ai-success

    The whitepaper includes a model of the stages of AI success: a five-tiered chart that helps organizations evaluate their AI maturity and take AI to the next level.

    1. Exploring stage: Companies at this initial stage of AI adoption are just starting their AI journey. They’re still learning about AI and experimenting with it in some parts of the organization.
    2. Planning stage: Organizations at this stage are actively assessing, defining, and planning an AI strategy across the company.
    3. Formalizing stage: At this point, companies are formalizing, socializing, and executing on AI strategy across the organization. These AI initiatives take place in multiple business units. AI is starting to generate value.
    4. Scaling stage: Organizations are now in a position to think bigger. AI initiatives deliver both incremental and new value across the company.
    5. Realizing stage: At this final stage, organizations achieve consistent AI value across the company and in multiple business units.
    Photograph showing the stages of AI success: exploring, planning, formalizing, scaling, and realizing.

    However, enormous AI acceleration has taken place in the last few years. Breakthroughs in generative AI and prebuilt models, such as the large language models (LLMs) offered by Azure AI Studio, are disrupting the field. This new context has two major implications:

    • Need to be up to date: Now, even mature companies need to reinvent themselves and adopt new waves of AI to avoid losing their competitive edge. Their AI strategy must reflect and leverage the impact brought by recent technologies.
    • Mainstream AI: Generative AI is changing the rules of AI adoption by empowering business users at an unprecedented level. It might be easier than ever to implement AI in business. Many companies are working hard to rank higher in the maturity assessment model.


  • Discover the characteristics that foster an AI-ready culture

    A successful AI strategy must consider cultural issues as well as business issues. Becoming an AI-ready organization requires a fundamental transformation in how you do things, how employees relate to each other, what skills they have, and what processes and principles guide your behaviors. This transformation goes to the core of an organization’s culture, and it’s vital for organizations to tackle it with a holistic approach. Leaders should champion this cultural change so that everyone in the organization embraces and adopts AI.

    Fostering an AI-ready culture requires:

    • Being a data-driven organization.
    • Empowering people to participate in the AI transformation, and creating an inclusive environment that allows cross-functional, multidisciplinary collaboration.
    • Creating a responsible approach to AI that addresses the challenging questions AI presents.

    Of course, this is only possible with strong leadership that drives change by both adopting the changes this transformation will require and actively supporting people throughout. Below we share our perspective on the changes you need to make to achieve an AI-ready culture.

    Data-driven


    Any good AI system relies on having the best and most complete data and being able to reason over your entire data estate. In other words, it’s a matter of integrity and access.

    Access

    Due to data ownership or storage issues, most organizations generate, organize, and use data in a siloed manner. While each department may have a good view of the data coming from their own processes, they may lack other information that could be relevant to their operations.

    For instance, a sales department might not have a complete view of a customer, because they’re missing pieces of data, like e-commerce activity and payment status, which are controlled by other departments. In this case, a seller may make the mistake of trying to sell a customer an insurance policy that they already purchased through an online channel.

    By sharing data across the organization, the sum becomes greater than the parts. It’s no longer each piece of data that matters, but what that data adds up to: a unified view of the customer. With that unified view, you can make better decisions, act more effectively, and provide a better customer experience. Your data estate must be accessible to be useful, whether it’s on-premises, in the cloud, or on the edge.

    Integrity

    The quality of the data is also key. In this example, if the customer data was riddled with errors, like inaccurate contact information, irrelevant data, or duplication, it wouldn’t matter that the data had been unified; the seller could still make significant mistakes in interacting with the customer.

    Just as quality of data is key to creating next-level experiences for customers, it’s also key to successful AI. An AI model is only as good and complete as the data it can operate on and learn from. So, it’s of paramount importance to work in a way that ensures your data is as complete and rigorous as possible.

    In summary, becoming data-driven means adopting a mindset of data sharing and rigor that shapes how you work, relate, and ultimately collaborate. This mindset enables you to realize the value of AI and better confront the challenges that AI brings.

    Empowered


    Fostering an AI-ready culture means empowering people to be part of the AI transformation. Organizations should provide the following opportunities to achieve this goal:

    • Enablement: Space, resources, guidance, security, and support are needed to improve what people do with AI.
    • Time for learning: Organizations should help people get the knowledge and skills they need.
    • Room for experimentation: During this process, you should encourage new ideas and continuous improvement. This experimentation must allow room for errors, as well as celebration and acknowledgment of success.

    It also means creating an inclusive environment, one that is predicated on the willingness and ability of employees to work in cross-functional teams that cut across organizational boundaries.

    Furthermore, it means making those who best understand the business a central part of your transformation process. Data scientists working in isolation often create models that lack the business knowledge, purpose, or value that would make them an effective AI resource. Similarly, businesspeople working in isolation lack the technical knowledge to understand what can be done from a data science perspective. A multidisciplinary approach is important.

    By enabling cross-functional teams that include both data scientists and the business employees closest to the business need, you can create powerful and effective AI solutions. An example of this is our hugely successful compliance predictive analytics tools, which were inspired and developed by employees working on our finance teams. They were successful only because they were created with the insights of those closest to the business need. This example illustrates how powerful it is to create an inclusive, cross-organizational collaborative approach.

    Responsible


    The third key element of an AI-ready culture is fostering a responsible approach to AI. As AI continues to evolve, it has the potential to drive considerable changes to our lives, raising complex and challenging questions about what future we want to see.

    As a Corporate Vice President of Strategic Missions and Technologies at Microsoft says, the question very often isn’t what AI can do, but what AI should do. Organizations need to ask themselves: How do we design, build, and use AI systems to create a positive impact on individuals and society? How can we ensure that AI systems treat everyone fairly? How can we best prepare the workforce for the new AI era?

    These questions demand that organizations think about their AI principles and how to uphold them throughout the company. Ensuring responsible AI practices requires specific planning, including an AI governance model. In this way, you can deliver transparent, explainable, and ethical AI. The module Embrace responsible AI principles and practices provides a more detailed discussion of the implications of responsible AI for business.


  • Create business value with an AI strategy

    There’s excitement stirring around AI. It’s now clear that AI technologies deliver substantial value to organizations and should be embraced to keep a competitive edge. However, the complexity underpinning AI may feel intimidating. Any organization needs a solid plan for AI adoption and scaling to fully benefit from AI’s potential. You should consider AI as a tool to reach your business goals and incorporate it into the corporate strategy.

    At Microsoft, we recommend using a holistic framework for AI strategy. This framework applies to all organizations and provides a sensible approach to AI implementation. The AI strategy framework covers three elements: the external environment that gives you context, the value proposition that you offer to customers, and the execution capabilities of your organization.

    External environment

    Your starting point should be to understand the external industry environment. Today, that means assessing how AI is affecting your sector. This technology is shifting overall buying behavior. AI is enabling and empowering new competitors. It’s disrupting current business processes and opening opportunities for new business models. Governments are taking action to deliver new regulations on AI.

    During the last decade, we’ve seen the disruptive potential of AI across industries. Now, a new generation of AI models is taking this power to the next level. Generative AI is capable of delivering content and insights with unparalleled results, and this technology changes how we work. Business leaders are already strategizing to implement generative AI to boost productivity. However, keep in mind that AI works best as a copilot, that is, as a guide to help you achieve better results. AI amplifies your expertise and skills.

    Value proposition

    What do you want to offer your customers? You must consider the benefits and functionalities that your AI-powered products and services will deliver to your clients. There may be opportunities to improve the customer experience by improving a service or by adding new features. AI may help you be more efficient, allowing you to deliver your solution at a more competitive price. Perhaps it’s time to embrace new business lines opened up by AI. When writing your value proposition, be realistic and take into account the costs of production and delivery, since they have a direct impact on the customer experience. The overall goal is to decide how to meet external challenges and leverage key opportunities.

    Organization and execution

    The most powerful, disruptive value proposition will amount to nothing if you’re not ready to deliver it. You must be sure that your organization has the capabilities and resources to carry out your AI strategy plan. Your goals will likely require deep organizational changes so everyone in the company can fulfill their new role. There needs to be alignment between people and processes to empower employees with the appropriate AI-related competencies. This task involves growing an AI-ready culture.

    Next, let’s focus on this third element, organization and execution. Let’s explore how to prepare your organization to embrace AI and become an AI-ready company.


  • Design a system for AI governance

    Each organization has its own guiding principles, but ultimately these principles need to be part of a larger responsible AI strategy to be effective. This strategy should encompass how your organization brings these principles to life both within your organization and beyond.

    We recommend establishing a governance system that is tailored to your organization’s unique characteristics, culture, guiding principles, and level of engagement with AI. The tasks of this governance body should include designing responsible AI policies and measures, monitoring that they’re followed, and ensuring compliance.

    To help your organization get started, we have provided an overview of three common governance approaches: hiring a Chief Ethics Officer, establishing an ethics office, and forming an ethics committee. The first approach is centralized, and the others are decentralized. All of them have their benefits, but we recommend combining them in a hybrid approach. A governance system that reports to the board of directors and has financial support, human resources, and authority is more likely to create real change across an organization.

    Chief Ethics Officer


    Often organizations choose to consolidate their ethics initiatives by appointing a Chief Ethics Officer. This option has the advantage of centralized decision-making, so it enables organizations to quickly develop policies around ethics while ensuring there’s accountability for each decision. Hiring this public-facing role can also be an effective way to showcase a company’s commitment to engaging with AI and other technology in a responsible and trustworthy manner.

    However, a Chief Ethics Officer alone may struggle to implement measures across an organization without the support of an ethics office. This drawback leads us to the next option.

    Ethics office


    The second governance approach focuses on empowering employees across the organization. It involves forming a dedicated ethics team from different levels of the organization that is solely focused on ensuring the ethical principles are being followed by all employees. The ethics office can be independent or part of a broader risk, compliance, or legal team. If it’s independent, it can be established without a leading role, but companies often choose a Chief Ethics Officer to head the office.

    The key advantage of ethics offices is their ability to implement the policies at scale since they have dedicated team members working at all levels of the company. Ethics offices also prove adept at building a culture of integrity within an organization.

    Ethics committee


    The last approach brings together a diverse array of outside experts and senior leaders from within the organization to address AI ethics. Ethics committees may even incorporate user groups, ethicists, or psychologists. Generally, they don’t have members dedicated solely to ethics.

    This form of governance provides an organization with perspectives from people with a wide range of diverse backgrounds and expertise, unbiased opinions from external members, and buy-in from senior leaders across the company.

    Next, let’s discuss best practices for AI governance, depending on the ownership of the AI model and the role involved.


  • Identify guiding principles for responsible AI

    In the last unit, we discussed some of the societal implications of AI. We touched on the responsibility of businesses, governments, NGOs, and academic researchers to anticipate and mitigate unintended consequences of AI technology. As organizations consider these responsibilities, more are creating internal policies and practices to guide their AI efforts.

    At Microsoft, we’ve recognized six principles that we believe should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day.

    Fairness


    AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications.
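
    One simple, hedged way to start checking for this kind of disparity is to compare outcome rates across groups of similarly situated people. The sketch below uses a hypothetical loan-approval example; a real fairness assessment needs domain expertise and dedicated tooling such as the Responsible AI Dashboard mentioned in the list below.

```python
# Minimal fairness check sketch: compare approval rates across groups for a
# hypothetical loan-approval model (one simple signal, not a full fairness assessment).
from collections import defaultdict

# (group, model_decision) pairs; in practice these come from your model's predictions.
predictions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in predictions:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rate by group:", rates)

# A large gap between groups with similar qualifications is a signal to investigate
# the data and the model before deployment.
gap = max(rates.values()) - min(rates.values())
print(f"Approval-rate gap: {gap:.2f}")
```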

    To ensure fairness in your AI system, you should:

    • Understand the scope, spirit, and potential uses of the AI system by asking questions such as, how is the system intended to work? Who is the system designed to work for? Will the system work for everyone equally? How can it harm others?
    • Attract a diverse pool of talent. Ensure the design team reflects the world in which we live by including team members that have different backgrounds, experiences, education, and perspectives.
    • Identify bias in datasets by evaluating where the data came from, understanding how it was organized, and testing to ensure it’s representative. Bias can be introduced at every stage in creation, from collection to modeling to operation. The Responsible AI Dashboard, available in the Resources section, includes a feature to help with this task.
    • Identify bias in machine learning algorithms by applying tools and techniques that improve the transparency and intelligibility of models. Users should actively identify and remove bias in machine learning algorithms.
    • Leverage human review and domain expertise. Train employees to understand the meaning and implications of AI results, especially when AI is used to inform consequential decisions about people. Decisions that use AI should always be paired with human review. Include relevant subject matter experts in the design process and in deployment decisions. An example would be including a consumer credit subject matter expert for a credit scoring AI system. You should use AI as a copilot, that is, an assisting tool that helps you do your job better and faster but requires some degree of supervising.
    • Research and employ best practices, analytical techniques, and tools from other institutions and enterprises to help detect, prevent, and address bias in AI systems.

    Reliability and safety


    To build trust, it’s critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. It’s also important to be able to verify that these systems are behaving as intended under actual operating conditions. How they behave and the variety of conditions they can handle reliably and safely largely reflects the range of situations and circumstances that developers anticipate during design and testing.
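
    As a minimal sketch of what such ongoing verification can look like (the baseline, tolerance, and feedback source are hypothetical), you might periodically compare live accuracy against the performance measure established during validation and alert a human when it degrades.

```python
# Sketch of a recurring reliability check: compare live accuracy against an
# established performance measure and flag degradation for human review.
# The threshold and the source of labeled feedback are hypothetical.

BASELINE_ACCURACY = 0.92   # measured during validation
ALERT_TOLERANCE = 0.05     # how far live accuracy may fall before someone is alerted

def check_model_health(predictions: list[int], actuals: list[int]) -> None:
    """Raise an alert when accuracy on recent labeled feedback drops below the baseline."""
    if not predictions:
        return
    correct = sum(p == a for p, a in zip(predictions, actuals))
    live_accuracy = correct / len(predictions)
    if live_accuracy < BASELINE_ACCURACY - ALERT_TOLERANCE:
        # In production this would page an on-call owner or open an incident.
        print(f"ALERT: live accuracy {live_accuracy:.2f} is below the agreed measure.")
    else:
        print(f"OK: live accuracy {live_accuracy:.2f}")

# Example: feedback from users who corrected the system's output.
check_model_health(predictions=[1, 0, 1, 1, 0, 1, 1, 0], actuals=[1, 0, 0, 1, 0, 1, 0, 0])
```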

    To ensure reliability and safety in your AI system, you should:

    • Develop processes for auditing AI systems to evaluate the quality and suitability of data and models, monitor ongoing performance, and verify that systems are behaving as intended based on established performance measures.
    • Provide detailed explanation of system operation including design specifications, information about training data, training failures that occurred and potential inadequacies with training data, and the inferences and significant predictions generated.
    • Design for unintended circumstances such as accidental system interactions, the introduction of malicious data, or cyberattacks.
    • Involve domain experts in the design and implementation processes, especially when using AI to help make consequential decisions about people.
    • Conduct rigorous testing during AI system development and deployment to ensure that systems can respond safely to unanticipated circumstances, don’t have unexpected performance failures, and don’t evolve in unexpected ways. AI systems involved in high-stakes scenarios that affect human safety or large populations should be tested both in lab and real-world scenarios.
    • Evaluate when and how an AI system should seek human input for impactful decisions or during critical situations. Consider how an AI system should transfer control to a human in a manner that is meaningful and intelligible. Design AI systems to ensure humans have the necessary level of input on highly impactful decisions.
    • Develop a robust feedback mechanism for users to report performance issues so that you can resolve them quickly.

    Privacy and security


    As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people.

    To ensure privacy and security in your AI system, you should:

    • Comply with relevant data protection, privacy, and transparency laws by investing resources in developing compliance technologies and processes or working with a technology leader during the development of AI systems. Develop processes to continually check that the AI systems are satisfying all aspects of these laws.
    • Design AI systems to maintain the integrity of personal data so that they can only use personal data during the time it’s required and for the defined purposes that have been shared with customers. Delete inadvertently collected personal data or data that is no longer relevant to the defined purpose.
    • Protect AI systems from bad actors by designing AI systems in accordance with secure development and operations foundations, using role-based access, and protecting personal and confidential data that is transferred to third parties. Design AI systems to identify abnormal behaviors and to prevent manipulation and malicious attacks.
    • Design AI systems with appropriate controls for customers to make choices about how and why their data is collected and used.
    • Ensure your AI system maintains anonymity by taking into account how the system removes personal identification from data.
    • Conduct privacy and security reviews for all AI systems.
    • Research and implement industry best practices for tracking relevant information about customer data, accessing and using that data, and auditing access and use.

    Inclusiveness


    At Microsoft, we firmly believe everyone should benefit from intelligent technology, meaning it must incorporate and address a broad range of human needs and experiences. For the 1 billion people with disabilities around the world, AI technologies can be a game-changer. AI can improve access to education, government services, employment, information, and a wide range of other opportunities. Intelligent solutions such as real-time speech to text transcription, visual recognition services, and predictive text functionality are already empowering people with hearing, visual, and other impairments.

    Microsoft inclusive design principles:

    • Recognize exclusion
    • Solve for one, extend to many
    • Learn from diversity

    To ensure inclusiveness in your AI system, you should:

    • Comply with laws regarding accessibility and inclusiveness that mandate the procurement of accessible technology.
    • Use the Inclusive 101 Guidebook, available in the resources section of this module, to help system developers understand and address potential barriers in a product environment that could unintentionally exclude people.
    • Have people with disabilities test your systems to help you figure out whether the system can be used as intended by the broadest possible audience.
    • Consider commonly used accessibility standards to help ensure your system is accessible for people of all abilities.

    Transparency


    Underlying the preceding values are two foundational principles that are essential for ensuring the effectiveness of the rest: transparency and accountability. It’s critical that people understand how AI systems come to conclusions when they’re used to inform decisions that have an effect on people’s lives. For example, a bank might use an AI system to decide whether a person is creditworthy, or a company might use an AI system to determine the most qualified candidates to hire.

    A crucial part of transparency is what we refer to as intelligibility, or the useful explanation of the behavior of AI systems and their components. Improving intelligibility requires that stakeholders comprehend how and why AI systems function so that they can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes. We also believe that people who use AI systems should be honest and forthcoming about when, why, and how they choose to deploy them.
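
    One hedged way to make this concrete is to favor simpler models whose behavior can be printed and reviewed. The sketch below trains a shallow decision tree on a public dataset (chosen only for illustration) and exports its rules as plain text that stakeholders can inspect.

```python
# Intelligibility sketch: a shallow decision tree whose rules can be printed and
# reviewed by stakeholders (dataset and depth chosen only for illustration).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# export_text turns the fitted tree into plain-language rules that a reviewer
# can inspect to understand how the system reaches its conclusions.
print(export_text(model, feature_names=list(data.feature_names)))
```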

    To ensure transparency in your AI system, you should:

    • Share key characteristics of datasets to help developers understand if a specific dataset is appropriate for their use case.
    • Improve model intelligibility by applying simpler models and generating intelligible explanations of the model’s behavior. For this task, you can use the Responsible AI Dashboard, available in the Resources section.
    • Train employees on how to interpret AI outputs and ensure that they remain accountable for making consequential decisions based on the results.

    Accountability


    The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren’t the final authority on any decision that impacts people’s lives and that humans maintain meaningful control over otherwise highly autonomous AI systems.

    To ensure accountability in your AI system, you should:

    • Set up internal review boards to provide oversight and guidance on the responsible development and deployment of AI systems. They can also help with tasks like defining best practices for documenting and testing AI systems during development or providing guidance for sensitive cases.
    • Ensure your employees are trained to use and maintain the solution in a responsible and ethical manner and understand when the solution may require extra technical support.
    • Keep humans with requisite expertise in the loop by reporting to them and involving them in decisions about model execution. When automation of decisions is required, ensure they’re able to inspect, identify, and resolve challenges with model output and execution.
    • Put in place a clear system of accountability and governance to conduct remediation or correction activities if models are seen as behaving in an unfair or potentially harmful manner.

    We recognize that every individual, company, and region has their own beliefs and standards that should be reflected in their AI journey. We share our perspective with you as you consider developing your own guiding principles.


  • Prepare for the implications of responsible AI

    AI is the defining technology of our time. It’s already enabling faster and more profound progress in nearly every field of human endeavor and helping to address some of society’s most daunting challenges. For example, AI can help people with visual disabilities understand images by generating descriptive text for images. In another example, AI can help farmers produce enough food for the growing global population.

    At Microsoft, we believe that the computational intelligence of AI should be used to amplify the innate creativity and ingenuity of humans. Our vision for AI is to empower every developer to innovate, empower organizations to transform industries, and empower people to transform society.

    Societal implications of AI

    As with all great technological innovations in the past, the use of AI technology has broad impacts on society, raising complex and challenging questions about the future we want to see. AI has implications on decision-making across industries, data security and privacy, and the skills people need to succeed in the workplace. As we look to this future, we must ask ourselves:

    • How do we design, build, and use AI systems that create a positive impact on individuals and society?
    • How can we best prepare workers for the effects of AI?
    • How can we attain the benefits of AI while respecting privacy?

    The importance of a responsible approach to AI

    It’s important to recognize that as new intelligent technology emerges and proliferates throughout society, its benefits come with unintended and unforeseen consequences. Some of these consequences have significant ethical ramifications and the potential to cause serious harm. While organizations can’t predict the future, it’s our responsibility to make a concerted effort to anticipate and mitigate the unintended consequences of the technology we release into the world through deliberate planning and continual oversight.

    Threats

    Each breakthrough in AI technologies brings a new reminder of our shared responsibility. For example, in 2016, Microsoft released a chatbot on X called Tay, which could learn from interactions with X users. The goal was to enable the chatbot to better replicate human communication and personality traits. However, within 24 hours, users realized that the chatbot could learn from bigoted rhetoric, and turned the chatbot into a vehicle for hate speech. This experience is one example of why we must consider human threats when designing AI systems.

    Novel threats require a constant evolution in our approach to responsible AI. For example, because generative AI enables people to create or edit videos, images, or audio files so credibly that they look real, media authenticity is harder to verify. In response, Microsoft is teaming with other technology and news stakeholders to develop technical standards to address deepfake-related manipulation.


  • When to use Blazor

    Blazor is a fully featured web UI framework designed to handle the needs of most modern web apps. But whether Blazor is the right framework for you depends on many factors.

    You should consider using Blazor for web development if:

    • You’re looking for a highly productive full stack web development solution.
    • You need to deliver web experiences quickly without the need for a separate frontend development team.
    • You’re already using .NET, and you want to apply your existing .NET skills and resources on the web.
    • You need a high-performance and highly scalable backend to power your web app.

    Blazor might not be a good fit if:

    • You need to fully optimize download size and load time of client-side assets.
    • You need to integrate heavily with a different frontend framework ecosystem.
    • You need to support older web browsers that don’t support the modern web platform.
