
  • Discover the characteristics that foster an AI-ready culture

    A successful AI strategy must consider cultural issues as well as business issues. Becoming an AI-ready organization requires a fundamental transformation in how you do things, how employees relate to each other, what skills they have, and what processes and principles guide your behaviors. This transformation goes to the core of an organization’s culture, and it’s vital for organizations to tackle such transformation with a holistic approach. Leaders must champion this cultural change so that everyone in the organization can embrace and adopt AI.

    Fostering an AI-ready culture requires:

    • Being a data-driven organization.
    • Empowering people to participate in the AI transformation, and creating an inclusive environment that allows cross-functional, multidisciplinary collaboration.
    • Creating a responsible approach to AI that addresses the challenging questions AI presents.

    Of course, this is only possible with strong leadership that drives change by both adopting the changes this transformation will require and actively supporting people throughout. Below we share our perspective on the changes you need to make to achieve an AI-ready culture.

    Data-driven

    Photograph representing data sharing across your organization and the adoption of rigorous data practices.

    Any good AI system relies on having the best and most complete data and being able to reason over your entire data estate. In other words, it comes down to two things: access and integrity.

    Access

    Due to data ownership or storage issues, most organizations generate, organize, and use data in a siloed manner. While each department may have a good view of the data coming from their own processes, they may lack other information that could be relevant to their operations.

    For instance, a sales department might not have a complete view of a customer, because they’re missing pieces of data, like e-commerce activity and payment status, which are controlled by other departments. In this case, a seller may make the mistake of trying to sell a customer an insurance policy that they already purchased through an online channel.

    By sharing data across the organization, the sum becomes greater than the parts. It’s no longer each piece of data that matters, but what that data adds up to: a unified view of the customer. With that unified view, you can make better decisions, act more effectively, and provide a better customer experience. Your data estate must be accessible to be useful, whether it’s on-premises, in the cloud, or on the edge.

    Integrity

    The quality of the data is also key. In this example, if the customer data was riddled with errors, like inaccurate contact information, irrelevant data, or duplication, it wouldn’t matter that the data had been unified; the seller could still make significant mistakes in interacting with the customer.

    Just as quality of data is key to creating next-level experiences for customers, it’s also key to successful AI. An AI model is only as good and complete as the data it can operate on and learn from. So, it’s of paramount importance to work in a way that ensures your data is as complete and rigorous as possible.

    In summary, becoming data-driven means acquiring a mindset of data sharing and rigorousness that drives how you work and relate, and ultimately how you collaborate. This enables you to realize the value of AI and better confront the challenges that AI brings.

    Empowered

    Photograph showing three construction workers representing empowerment and inclusivity: providing resources, a collaborative culture, and focusing on business needs.

    Fostering an AI-ready culture means empowering people to be part of the AI transformation. Organizations should provide the following opportunities to achieve this goal:

    • Enablement: People need space, resources, guidance, security, and support to improve what they do with AI.
    • Time for learning: Organizations should help people acquire the necessary knowledge and skills.
    • Room for experimentation: During this process, you should encourage new ideas and continuous improvement. This experimentation must allow room for errors, as well as celebration and acknowledgment of success.

    It also means creating an inclusive environment, one that is predicated on the willingness and ability of employees to work in cross-functional teams that cut across organizational boundaries.

    Furthermore, it means making those who best understand the business a central part of your transformation process. Data scientists working in isolation often create models that lack the business knowledge, purpose, or value that would make them an effective AI resource. Similarly, business people working in isolation lack the technical knowledge to understand what can be done from a data science perspective. A multidisciplinary approach is important.

    By enabling cross-functional teams that include both data scientists and the business employees closest to the business need, you can create powerful and effective AI solutions. An example of this is our hugely successful compliance predictive analytics tools, which were inspired and developed by employees working on our finance teams. They were successful only because they were created with the insights of those closest to the business need. This example illustrates how powerful it is to create an inclusive, cross-organizational collaborative approach.

    Responsible

    Photograph showing person stamping a paper to represent a culture of review and responsible AI.

    The third key element of an AI-ready culture is fostering a responsible approach to AI. As AI continues to evolve, it has the potential to drive considerable changes to our lives, raising complex and challenging questions about what future we want to see.

    As a Corporate Vice President of Strategic Missions and Technologies at Microsoft says, the question is often not what AI can do, but what AI should do. Organizations need to ask themselves: How do we design, build, and use AI systems to create a positive impact on individuals and society? How can we ensure that AI systems treat everyone fairly? How can we best prepare the workforce for the new AI era?

    These questions demand that organizations think about their AI principles and how to uphold them throughout the company. Ensuring responsible AI practices requires specific planning, including an AI governance model. In this way, you can deliver transparent, explainable, and ethical AI. The module Embrace responsible AI principles and practices provides a more detailed discussion of the implications of responsible AI for business.


  • Create business value with an AI strategy

    There’s excitement stirring around AI. It’s now clear that AI technologies drive substantial value to organizations and should be embraced to keep a competitive edge. However, the complexity underpinning AI may feel intimidating. Any organization needs a solid plan for AI adoption and scaling to fully benefit from AI’s potential. You should consider AI as a tool to reach your business goals and incorporate it into the corporate strategy.

    At Microsoft, we recommend using a holistic framework for AI strategy. This framework applies to all organizations, and provides a sensible approach to AI implementation. This AI strategy framework covers three elements: the external environment that gives you context, the value proposition that you offer to customers, and the executive capabilities of your organization.

    External environment

    Your starting point should be to understand the external industry environment. Right now, it involves measuring how AI is impacting your sector. This technology is shifting overall buying behavior. AI is leading and empowering new competitors. It’s disrupting current business processes and opening opportunities for new business models. Governments are taking action to deliver new regulations on AI.

    During the last decade, we’ve seen the disruptive potential of AI across industries. Now, a new generation of AI models is taking this power to the next level. Generative AI is capable of delivering content and insights with unparalleled results, and this technology changes how we work. Business leaders are already strategizing to implement generative AI to boost productivity. However, keep in mind that AI works best as a copilot, that is, as a guide to help you achieve better results. AI amplifies your expertise and skills.

    Value proposition

    What do you want to offer your customers? You must consider the benefits and functionalities that your AI-powered products and services will deliver to your clients. There may be opportunities to improve the customer experience by improving a service or by adding new features. AI may also make you more efficient, allowing you to deliver your solution at a more competitive price. Perhaps it’s time to embrace new business lines opened up by AI. When writing your value proposition, be realistic and take into account the costs of production and delivery, since they have a direct impact on the customer experience. The overall goal is to decide how to meet external challenges and leverage key opportunities.

    Organization and execution

    The most powerful, disruptive value proposition will amount to nothing if you’re not ready to deliver on it. You must be sure that your organization has the capabilities and resources to embrace your AI strategy plan. Your goals will likely require deep organizational changes so everyone in the company can fulfill their new role. So, there needs to be alignment between people and processes to empower employees with the adequate AI-related competencies. This task involves growing an AI-ready culture.

    Next, let’s focus on this third element, organization and execution. Let’s explore how to prepare your organization to embrace AI and become an AI-ready company.


  • Design a system for AI governance

    Each organization has their own guiding principles, but ultimately these principles need to be part of a larger responsible AI strategy to be effective. This strategy should encompass how your organization brings these principles to life both within your organization and beyond.

    We recommend establishing a governance system that is tailored to your organization’s unique characteristics, culture, guiding principles, and level of engagement with AI. Its tasks should include designing responsible AI policies and measures, monitoring that they’re followed, and ensuring compliance.

    To help your organization get started, we have provided an overview of three common governance approaches: hiring a Chief Ethics Officer, establishing an ethics office, and forming an ethics committee. The first approach is centralized, and the others are decentralized. All of them have their benefits, but we recommend combining them in a hybrid approach. A governance system that reports to the board of directors and has financial support, human resources, and authority is more likely to create real change across an organization.

    Chief Ethics Officer

    Photograph showing woman who is a Chief Ethics Officer.

    Often organizations choose to consolidate their ethics initiatives by appointing a Chief Ethics Officer. This option has the advantage of centralized decision-making, so it enables organizations to quickly develop policies around ethics while ensuring there’s accountability for each decision. Hiring this public-facing role can also be an effective way to showcase a company’s commitment to engage with AI and other technology in a responsible and trustworthy manner.

    However, a Chief Ethics Officer alone may struggle to implement measures across an organization without the support of an ethics office. This drawback leads us to the next option.

    Ethics office

    Photograph showing people holding discussion in a team meeting.

    The second governance approach focuses on empowering employees across the organization. It involves forming a dedicated ethics team, drawn from different levels of the organization, that is solely focused on ensuring the ethical principles are followed by all employees. The ethics office can be independent or part of a broader risk, compliance, or legal team. If it’s independent, it can operate without a designated leader, but companies often choose a Chief Ethics Officer to head the office.

    The key advantage of ethics offices is their ability to implement the policies at scale since they have dedicated team members working at all levels of the company. Ethics offices also prove adept at building a culture of integrity within an organization.

    Ethics committee

    Photograph showing people in a virtual video meeting in a conference room.

    The last approach brings together a diverse array of outside experts and senior leaders from within the organization to address AI ethics. Ethics committees may even incorporate user groups, ethicists, or psychologists. Generally, they don’t have members dedicated solely to ethics.

    This form of governance provides an organization with perspectives from people with a wide range of diverse backgrounds and expertise, unbiased opinions from external members, and buy-in from senior leaders across the company.

    Next, let’s discuss best practices for AI governance, depending on the ownership of the AI model and the role involved.


  • Identify guiding principles for responsible AI

    In the last unit, we discussed some of the societal implications of AI. We touched on the responsibility of businesses, governments, NGOs, and academic researchers to anticipate and mitigate unintended consequences of AI technology. As organizations consider these responsibilities, more are creating internal policies and practices to guide their AI efforts.

    At Microsoft, we’ve recognized six principles that we believe should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day.

    Fairness

    Icon representing fairness.

    AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone with similar symptoms, financial circumstances, or professional qualifications.

    To ensure fairness in your AI system, you should:

    • Understand the scope, spirit, and potential uses of the AI system by asking questions such as, how is the system intended to work? Who is the system designed to work for? Will the system work for everyone equally? How can it harm others?
    • Attract a diverse pool of talent. Ensure the design team reflects the world in which we live by including team members that have different backgrounds, experiences, education, and perspectives.
    • Identify bias in datasets by evaluating where the data came from, understanding how it was organized, and testing to ensure it’s representative. Bias can be introduced at every stage of creation, from collection to modeling to operation. The Responsible AI Dashboard, available in the Resources section, includes a feature to help with this task.
    • Identify bias in machine learning algorithms by applying tools and techniques that improve the transparency and intelligibility of models. Users should actively identify and remove bias in machine learning algorithms.
    • Leverage human review and domain expertise. Train employees to understand the meaning and implications of AI results, especially when AI is used to inform consequential decisions about people. Decisions that use AI should always be paired with human review. Include relevant subject matter experts in the design process and in deployment decisions. An example would be including a consumer credit subject matter expert for a credit scoring AI system. You should use AI as a copilot, that is, an assisting tool that helps you do your job better and faster but requires some degree of supervising.
    • Research and employ best practices, analytical techniques, and tools from other institutions and enterprises to help detect, prevent, and address bias in AI systems.

    Reliability and safety

    Icon representing reliability.

    To build trust, it’s critical that AI systems operate reliably, safely, and consistently under normal circumstances and in unexpected conditions. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. It’s also important to be able to verify that these systems are behaving as intended under actual operating conditions. How they behave and the variety of conditions they can handle reliably and safely largely reflects the range of situations and circumstances that developers anticipate during design and testing.

    To ensure reliability and safety in your AI system, you should:

    • Develop processes for auditing AI systems to evaluate the quality and suitability of data and models, monitor ongoing performance, and verify that systems are behaving as intended based on established performance measures.
    • Provide detailed explanation of system operation including design specifications, information about training data, training failures that occurred and potential inadequacies with training data, and the inferences and significant predictions generated.
    • Design for unintended circumstances such as accidental system interactions, the introduction of malicious data, or cyberattacks.
    • Involve domain experts in the design and implementation processes, especially when using AI to help make consequential decisions about people.
    • Conduct rigorous testing during AI system development and deployment to ensure that systems can respond safely to unanticipated circumstances, don’t have unexpected performance failures, and don’t evolve in unexpected ways. AI systems involved in high-stakes scenarios that affect human safety or large populations should be tested both in lab and real-world scenarios.
    • Evaluate when and how an AI system should seek human input for impactful decisions or during critical situations. Consider how an AI system should transfer control to a human in a manner that is meaningful and intelligible. Design AI systems to ensure humans have the necessary level of input on highly impactful decisions.
    • Develop a robust feedback mechanism for users to report performance issues so that you can resolve them quickly.

    Privacy and security

    Icon representing privacy.

    As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people.

    To ensure privacy and security in your AI system, you should:

    • Comply with relevant data protection, privacy, and transparency laws by investing resources in developing compliance technologies and processes or working with a technology leader during the development of AI systems. Develop processes to continually check that the AI systems are satisfying all aspects of these laws.
    • Design AI systems to maintain the integrity of personal data so that they can only use personal data during the time it’s required and for the defined purposes that have been shared with customers. Delete inadvertently collected personal data or data that is no longer relevant to the defined purpose.
    • Protect AI systems from bad actors by designing AI systems in accordance with secure development and operations foundations, using role-based access, and protecting personal and confidential data that is transferred to third parties. Design AI systems to identify abnormal behaviors and to prevent manipulation and malicious attacks.
    • Design AI systems with appropriate controls for customers to make choices about how and why their data is collected and used.
    • Ensure your AI system maintains anonymity by taking into account how the system removes personal identification from data.
    • Conduct privacy and security reviews for all AI systems.
    • Research and implement industry best practices for tracking relevant information about customer data, accessing and using that data, and auditing access and use.

    Inclusiveness

    Icon representing inclusiveness.

    At Microsoft, we firmly believe everyone should benefit from intelligent technology, meaning it must incorporate and address a broad range of human needs and experiences. For the 1 billion people with disabilities around the world, AI technologies can be a game-changer. AI can improve access to education, government services, employment, information, and a wide range of other opportunities. Intelligent solutions such as real-time speech to text transcription, visual recognition services, and predictive text functionality are already empowering people with hearing, visual, and other impairments.

    Microsoft inclusive design principles:

    • Recognize exclusion
    • Solve for one, extend to many
    • Learn from diversity

    To ensure inclusiveness in your AI system, you should:

    • Comply with laws regarding accessibility and inclusiveness that mandate the procurement of accessible technology.
    • Use the Inclusive 101 Guidebook, available in the resources section of this module, to help system developers understand and address potential barriers in a product environment that could unintentionally exclude people.
    • Have people with disabilities test your systems to help you figure out whether the system can be used as intended by the broadest possible audience.
    • Consider commonly used accessibility standards to help ensure your system is accessible for people of all abilities.

    Transparency

    Icon representing transparency.

    Underlying the preceding values are two foundational principles that are essential for ensuring the effectiveness of the rest: transparency and accountability. It’s critical that people understand how AI systems come to conclusions when they’re used to inform decisions that have an effect on people’s lives. For example, a bank might use an AI system to decide whether a person is creditworthy, or a company might use an AI system to determine the most qualified candidates to hire.

    A crucial part of transparency is what we refer to as intelligibility, or the useful explanation of the behavior of AI systems and their components. Improving intelligibility requires that stakeholders comprehend how and why they function so that they can identify potential performance issues, safety and privacy concerns, biases, exclusionary practices, or unintended outcomes. We also believe that people who use AI systems should be honest and forthcoming about when, why, and how they choose to deploy them.

    To ensure transparency in your AI system, you should:

    • Share key characteristics of datasets to help developers understand if a specific dataset is appropriate for their use case.
    • Improve model intelligibility by applying simpler models and generating intelligible explanations of the model’s behavior. For this task, you can use the Responsible AI Dashboard, available in the Resources section.
    • Train employees on how to interpret AI outputs and ensure that they remain accountable for making consequential decisions based on the results.

    Accountability

    Icon representing accountability.

    The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren’t the final authority on any decision that impacts people’s lives and that humans maintain meaningful control over otherwise highly autonomous AI systems.

    To ensure accountability in your AI system, you should:

    • Set up internal review boards to provide oversight and guidance on the responsible development and deployment of AI systems. They can also help with tasks like defining best practices for documenting and testing AI systems during development or providing guidance for sensitive cases.
    • Ensure your employees are trained to use and maintain the solution in a responsible and ethical manner and understand when the solution may require extra technical support.
    • Keep humans with requisite expertise in the loop by reporting to them and involving them in decisions about model execution. When automation of decisions is required, ensure they’re able to inspect, identify, and resolve challenges with model output and execution.
    • Put in place a clear system of accountability and governance to conduct remediation or correction activities if models are seen as behaving in an unfair or potentially harmful manner.

    We recognize that every individual, company, and region has their own beliefs and standards that should be reflected in their AI journey. We share our perspective with you as you consider developing your own guiding principles.


  • Prepare for the implications of responsible AI

    AI is the defining technology of our time. It’s already enabling faster and more profound progress in nearly every field of human endeavor and helping to address some of society’s most daunting challenges. For example, AI can help people with visual disabilities understand images by generating descriptive text for images. In another example, AI can help farmers produce enough food for the growing global population.

    At Microsoft, we believe that the computational intelligence of AI should be used to amplify the innate creativity and ingenuity of humans. Our vision for AI is to empower every developer to innovate, empower organizations to transform industries, and empower people to transform society.

    Societal implications of AI

    As with all great technological innovations in the past, the use of AI technology has broad impacts on society, raising complex and challenging questions about the future we want to see. AI has implications on decision-making across industries, data security and privacy, and the skills people need to succeed in the workplace. As we look to this future, we must ask ourselves:

    • How do we design, build, and use AI systems that create a positive impact on individuals and society?
    • How can we best prepare workers for the effects of AI?
    • How can we attain the benefits of AI while respecting privacy?

    The importance of a responsible approach to AI

    It’s important to recognize that as new intelligent technology emerges and proliferates throughout society, with its benefits come unintended and unforeseen consequences. Some of these consequences have significant ethical ramifications and the potential to cause serious harm. While organizations can’t predict the future, it’s our responsibility to make a concerted effort to anticipate and mitigate the unintended consequences of the technology we release into the world through deliberate planning and continual oversight.

    Threats

    Each breakthrough in AI technologies brings a new reminder of our shared responsibility. For example, in 2016, Microsoft released a chatbot on X called Tay, which could learn from interactions with X users. The goal was to enable the chatbot to better replicate human communication and personality traits. However, within 24 hours, users realized that the chatbot could learn from bigoted rhetoric, and turned the chatbot into a vehicle for hate speech. This experience is one example of why we must consider human threats when designing AI systems.

    Novel threats require a constant evolution in our approach to responsible AI. For example, because generative AI enables people to create or edit videos, images, or audio files so credibly that they look real, media authenticity is harder to verify. In response, Microsoft is teaming with other technology and news stakeholders to develop technical standards to address deepfake-related manipulation.


  • When to use Blazor

    Blazor is a fully featured web UI framework designed to handle the needs of most modern web apps. But whether Blazor is the right framework for you depends on many factors.

    You should consider using Blazor for web development if:

    • You’re looking for a highly productive full stack web development solution.
    • You need to deliver web experiences quickly without the need for a separate frontend development team.
    • You’re already using .NET, and you want to apply your existing .NET skills and resources on the web.
    • You need a high-performance and highly scalable backend to power your web app.

    Blazor might not be a good fit if:

    • You need to fully optimize download size and load time of client-side assets.
    • You need to integrate heavily with a different frontend framework ecosystem.
    • You need to support older web browsers that don’t support the modern web platform.


  • How Blazor works

    Blazor provides many features to help you get started and deliver your next web app project fast. Let’s take a tour of the core capabilities of Blazor to help you decide whether you should use Blazor for your next great web app.

    Blazor components

    Blazor apps are built from components. A Blazor component is a reusable piece of web UI. A Blazor component encapsulates both its rendering and UI event handling logic. Blazor includes various built-in components for form handling, user input validation, displaying large data sets, authentication, and authorization. Developers can also build and share their own custom components, and many prebuilt Blazor components are available from the Blazor ecosystem.
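
    As a minimal sketch of what a reusable component looks like, here's a hypothetical Alert.razor component; the component name, Title parameter, and CSS classes are illustrative, not taken from any particular app:

    ```razor
    <div class="alert alert-info">
        <h4>@Title</h4>
        @ChildContent
    </div>

    @code {
        // Value supplied by the parent component as an attribute.
        [Parameter]
        public string? Title { get; set; }

        // Lets the parent place arbitrary markup inside the component.
        [Parameter]
        public RenderFragment? ChildContent { get; set; }
    }
    ```

    A parent component could then render it as <Alert Title="Heads up">Components are reusable.</Alert>.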

    Use standard web technologies

    You author Blazor components using Razor syntax, a convenient mixture of HTML, CSS, and C#. A Razor file contains plain HTML and then C# to define any rendering logic, like for conditionals, control flow, and expression evaluation. Razor files are then compiled into C# classes that encapsulate the component’s rendering logic. Because Blazor components authored in Razor are just C# classes, you can call arbitrary .NET code from your components.
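
    To illustrate mixing HTML with C# rendering logic, the following sketch (with made-up data) uses an @if conditional and a @foreach loop in a Razor file:

    ```razor
    @if (items.Count == 0)
    {
        <p><em>No items yet.</em></p>
    }
    else
    {
        <ul>
            @foreach (var item in items)
            {
                <li>@item</li>
            }
        </ul>
    }

    @code {
        // Illustrative data; in a real app this might come from a service.
        private List<string> items = new() { "First", "Second" };
    }
    ```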

    UI event handling and data binding

    Interactive Blazor components can handle standard web UI interactions using C# event handlers. Components can update their state in response to UI events and adjust their rendering accordingly. Blazor also includes support for two-way data binding to UI elements as a way to keep component state in sync with UI elements.

    The following example is a simple Blazor counter component implemented in Razor. Most of the content is HTML, while the @code block contains C#. Every time the button is pressed, the IncrementCount C# method is invoked, which increments the currentCount field, and then the component renders the updated value:

    razor

    <h1>Counter</h1>
    
    <p role="status">Current count: @currentCount</p>
    
    <button class="btn btn-primary" @onclick="IncrementCount">Click me</button>
    
    @code {
        private int currentCount = 0;
    
        private void IncrementCount()
        {
            currentCount++;
        }
    }
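The two-way data binding mentioned earlier uses the @bind directive. A minimal sketch (the name field and greeting are illustrative):

    razor

    <input @bind="name" />
    <p>Hello, @name!</p>

    @code {
        private string name = "";
    }

Editing the input updates the name field, and changing name in code updates the input, keeping component state and UI in sync.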

    https://lernix.com.my/about

  • What is Blazor?

    Blazor is a modern frontend web framework based on HTML, CSS, and C# that helps you build web apps faster. With Blazor, you build web apps using reusable components that can be run from both the client and the server so that you can deliver great web experiences. Blazor is part of .NET, a developer platform for building anything. .NET is free, open-source, and runs cross-platform.

    Some of the benefits of using Blazor include:

    • Build web UI fast with reusable components: Blazor’s flexible component model makes it easy to build reusable components that you can use to assemble apps quickly.
    • Add rich interactivity in C#: Handle arbitrary UI events from the browser and implement component logic all in C#, a modern type-safe language that is easy to learn and highly versatile.
    • One development stack: Build your entire web app from the frontend to the backend using a single development stack and share code for common logic on the client and server.
    • Efficient diff-based rendering: As components render, Blazor carefully tracks what parts of the DOM changed, so that UI updates are fast and efficient.
    • Server and client-side rendering: Render components from both the server and the client to implement various web app architectures and deliver the best possible web app experience.
    • Progressively enhanced server rendering: Use built-in support for enhanced navigation & form handling and streaming rendering to progressively enhance the user experience of server rendered web apps.
    • Interop with JavaScript: Use the ecosystem of JavaScript libraries and browser APIs from your C# code.
    • Integrate with existing apps: Integrate Blazor components with existing MVC, Razor Pages, or JavaScript-based apps.
    • Great tooling: Use Visual Studio or Visual Studio Code to get started in seconds and stay productive with great code editing support.
    • Web, mobile, and desktop: Blazor components can also be used to build native mobile & desktop apps using a hybrid of native and web, called Blazor Hybrid.
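As a sketch of the JavaScript interop bullet above, a component can inject the IJSRuntime service and call into the browser; the call to the standard alert function here is illustrative:

    razor

    @inject IJSRuntime JS

    <button @onclick="ShowAlert">Alert</button>

    @code {
        private async Task ShowAlert()
        {
            // Invoke the browser's built-in alert function from C#.
            await JS.InvokeVoidAsync("alert", "Hello from Blazor!");
        }
    }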

    https://lernix.com.my/affiliation

  • Understand tag helpers and page handlers

    In the previous unit, you created a Razor Page that displays a list of pizzas. You used the @ symbol to switch contexts between HTML and C#. In this unit, you’ll learn about tag helpers. Tag helpers are a special kind of HTML element that can contain C# code. You’ll also learn about page handlers. Page handlers are methods that handle browser requests. You’ll use page handlers in the next unit to add and delete pizzas.

    Tag helpers

    Tag helpers are used to address the inefficiencies of context switching between HTML and C#. Most of ASP.NET Core’s built-in tag helpers extend standard HTML elements. Tag helpers provide extra server-side attributes for HTML elements, making the elements more robust.

    There are four tag helpers you should know for this project: Partial, Label, Input, and Validation Summary.

    Partial Tag Helper

    CSHTML

    <partial name="_ValidationScriptsPartial" />
    

    This injects the contents of the _ValidationScriptsPartial.cshtml file into a page. The _ValidationScriptsPartial.cshtml file contains JavaScript that’s used to validate form input, so it needs to be included on every page that contains a form.

    Label tag helper

    CSHTML

    <label asp-for="Foo.Id" class="control-label"></label>
    

    This extends the standard HTML <label> element. Like many tag helpers, it uses an asp-for attribute. The attribute accepts a property from the PageModel. In this case, the name of the PageModel's Foo.Id property (specifically, the string "Id") will be rendered as the content for an HTML <label> element.

    Input tag helper

    CSHTML

    <input asp-for="Foo.Id" class="form-control" />
    

    Similar to the previous example, this extends the standard HTML <input> element. It also uses an asp-for attribute to specify a PageModel property. In this case, the value of the Foo.Id property will be rendered as the value attribute for an HTML <input> element.

    Validation Summary Tag Helper

    CSHTML

    <div asp-validation-summary="All"></div>
    

    The Validation Summary Tag Helper displays a consolidated summary of the validation messages for the model. The All value passed to asp-validation-summary indicates that messages for both individual properties and the model itself should be displayed.

     Note

    Things like validation rules and property display names are defined in the PageModel class. We’ll point out where to find them in the code in the next unit.

    Page handlers

    The PageModel class defines page handlers for HTTP requests and data used to render the page. In the previous exercise, the PizzaListModel class handled the HTTP GET request by setting the value of the PizzaList property to the value of _service.GetPizzas().

    Common handlers include OnGet for page initialization and OnPost for form submissions. To handle an HTTP POST, a page handler might verify the user-submitted data, present the input form page again if invalid, or send the valid data to a service or database for persistence.
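As a sketch of these handlers, a PageModel for the pizza list might look like the following. The PizzaService type, its AddPizza method, and the NewPizza bound property are assumed names for illustration; GetPizzas comes from the previous exercise:

    C#

    public class PizzaListModel : PageModel
    {
        private readonly PizzaService _service;

        public PizzaListModel(PizzaService service) => _service = service;

        public IList<Pizza> PizzaList { get; set; }

        [BindProperty]
        public Pizza NewPizza { get; set; }

        // Handles HTTP GET: load the data used to render the page.
        public void OnGet() => PizzaList = _service.GetPizzas();

        // Handles HTTP POST: redisplay the form if invalid,
        // otherwise persist the new pizza and redirect.
        public IActionResult OnPost()
        {
            if (!ModelState.IsValid)
            {
                return Page();
            }

            _service.AddPizza(NewPizza);
            return RedirectToPage();
        }
    }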

    In the next unit, you’ll add a form to create new pizzas using several tag helpers. You’ll also add page handlers to handle the form submission and deletion of pizzas.

    https://lernix.com.my/rooms

  • Understand when and why to use Razor Pages

    In this unit, you’ll learn when and why to use Razor Pages for your ASP.NET Core app.

    The benefits of Razor Pages

    Razor Pages is a server-side, page-centric programming model for building web UIs with ASP.NET Core. Benefits include:

    • Easy setup for dynamic web apps using HTML, CSS, and C#.
    • Organized files by feature for easier maintenance.
    • Combines markup with server-side C# code using Razor syntax.

    Razor Pages utilize Razor for embedding server-based code into webpages. Razor syntax combines HTML and C# to define the dynamic rendering logic. This means you can use C# variables and methods within your HTML markup to generate dynamic web content on the server at runtime. It’s important to understand that Razor Pages are not a replacement for HTML, CSS, or JavaScript, but rather combine these technologies to create dynamic web content.
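For instance, a Razor page can interleave C# directly in its markup; the @page directive marks the file as a routable Razor Page, and the time-of-day logic below is illustrative:

    CSHTML

    @page
    @model IndexModel

    <h1>Welcome</h1>

    @if (DateTime.Now.Hour < 12)
    {
        <p>Good morning!</p>
    }
    else
    {
        <p>Good afternoon!</p>
    }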

    Separation of concerns

    Razor Pages enforces separation of concerns with a C# PageModel class, which encapsulates data properties and logic operations scoped to its Razor page and defines page handlers for HTTP requests. The PageModel class is a partial class generated by the ASP.NET Core project template. It lives in the Pages folder and is named after the Razor page. For example, the PageModel class for the Index.cshtml Razor page is named IndexModel and is defined in Index.cshtml.cs.

    When to use Razor Pages

    Use Razor Pages in your ASP.NET Core app when you:

    • Want to generate dynamic web UI.
    • Prefer a page-focused approach.
    • Want to reduce duplication with partial views.

    Razor Pages simplifies ASP.NET Core page organization by keeping related pages and their logic together in their own namespace and directory.

    https://lernix.com.my/careers