  • Eventstream sources and destinations

    Once you create an eventstream in Fabric, you can connect it to a wide range of data sources, optionally transform the data, and route the processed data to multiple destinations. In this unit, we’ll review eventstream sources and destinations.

    Eventstream sources

    You can stream data from Microsoft sources and also ingest data from non-Microsoft platforms, including:

    • Microsoft sources, like Azure Event Hubs, Azure IoT Hubs, Azure Service Bus, Change Data Capture (CDC) feeds in database services, and others.
    • Azure events, like Azure Blob Storage events.
    • Fabric events, such as changes to items in a Fabric workspace, data changes in OneLake data stores, and events associated with Fabric jobs.
    • External sources, such as Apache Kafka, Google Cloud Pub/Sub, and MQTT (Message Queuing Telemetry Transport).

    Configure eventstream sources

    After you create an eventstream, you can add data sources using the eventstream canvas. You can either create a new source or connect to an existing source from the Real-Time Hub:

    Screenshot showing how to configure sources in the eventstream canvas.

     Tip

    To see all supported sources, see Add and manage eventstream sources.

    Eventstream destinations

    Streaming data requires immediate processing and storage to retain its value. Destinations in an eventstream serve as endpoints where your processed data becomes available for queries, reports, dashboards, alerts, actions, or integration with other systems. You can load the data from your stream into the following destinations:

    • Eventhouse: This destination lets you ingest your real-time event data into an Eventhouse, where you can use Kusto Query Language (KQL) to query and analyze the data.
    • Lakehouse: This destination gives you the ability to transform your real-time events before ingesting them into your lakehouse. Real-time events are converted into Delta Lake format and then stored in designated lakehouse tables.
    • Derived stream: You can think of derived streams as transformed versions of your original data stream that enable content-based routing. Derived streams let you route subsets of data from your default or original stream to different destinations based on the content of data. For example, you could filter IoT sensor data to send high-temperature alerts to Fabric Activator while routing hourly averages to a KQL database.
    • Fabric Activator: Directly connect your real-time event data to an event detection engine that automatically triggers actions when specific patterns or conditions are detected in your streaming data. When data reaches certain thresholds or matches patterns, Activator can send notifications, launch Power Automate workflows, or trigger other automated responses.
    • Custom endpoint: With this destination, you can route your real-time events to a custom endpoint. This destination is useful when you want to direct real-time data to an external system or custom application outside Microsoft Fabric.

    You can attach multiple destinations to an eventstream at the same time; they operate independently, without impacting or colliding with one another.
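
    For example, once events land in an Eventhouse destination, they're immediately available for KQL queries. A minimal sketch, assuming a hypothetical table named SensorReadings with event_time, device_id, and temperature columns:

    kql

    // Hypothetical table and columns, shown for illustration only:
    // average temperature per device over the last five minutes.
    SensorReadings
    | where event_time > ago(5m)
    | summarize avg_temp = avg(temperature) by device_id
    | where avg_temp > 75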

     Tip

    For more information about supported destinations, see Add and manage a destination in an eventstream.

    Configure eventstream destinations

    Eventstream destinations are configured in the eventstream canvas. A destination can be specified after a data source is connected or after optional transformations are applied.

    Screenshot showing how to configure destinations in the eventstream canvas.

    The eventstream canvas in the image shows:

    • Add destination dropdown: for configuring new destinations
    • Three configured destinations: a derived stream, a Fabric Activator, and an Eventhouse
    • Content-based routing: the output of the GroupByStreet transformation is routed to a derived stream, which in turn feeds both an Activator (to check whether there are bikes at every station) and an Eventhouse (to insert bike counts by street into a KQL database)
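
    Activator conditions are configured in Activator's own interface rather than written as queries, but the logic of the "bikes at every station" check, expressed as KQL against the derived stream's data, might look roughly like this sketch (the table and column names are assumptions for illustration):

    kql

    // Stations whose most recent reading shows zero available bikes
    BikesByStreet
    | summarize arg_max(event_time, bike_count) by street
    | where bike_count == 0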

  • Components of Eventstream

    The Eventstream feature in Fabric works by creating a pipeline that ingests events from streaming data sources, processes them through optional transformations, and delivers them to various destinations. Eventstream is the delivery mechanism that carries events from where they happen to where they need to be processed, analyzed, or acted upon.

    You can use the eventstream canvas, a visual editor, to design your pipeline by dragging and dropping different nodes, such as sources, transformations, and destinations. You can also see the event data flowing through the pipeline in real time. You don’t need to write any code or manage any infrastructure to use Eventstream.

    Screenshot of an eventstream.

    This image shows the eventstream canvas. A real-time data source called Bicycles provides city bike rental data, including bike locations, bike station street names, and more. Bicycle-data is an eventstream that ingests data from the Bicycles source. The data is transformed by an operation named GroupByStreet, which sums the number of bikes by bike station street name, and the results are stored in a table in an Eventhouse called Bikes-by-street-table.
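
    Once the summarized data lands in the Eventhouse, it can be queried directly with KQL. A minimal sketch against the Bikes-by-street-table table from the scenario (the bike_count column name is an assumption):

    kql

    // Top ten streets by current bike count; the column name is illustrative
    ['Bikes-by-street-table']
    | sort by bike_count desc
    | take 10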

    The main components of an eventstream are:

    • Sources: Sources are where your event data comes from. You can stream data from Microsoft sources and also ingest data from non-Microsoft platforms.
    • Transformations: You can transform the data as it flows through an eventstream, enabling you to filter, summarize, and reshape it before storing it. Examples of available transformations include SQL code, filter, manage fields, aggregate, group by, expand, and join.
    • Destinations: Destinations are where your transformed event data goes for storage, further processing, alerts, or integration with other systems. You can route the data from your stream to various destinations such as tables in an Eventhouse or lakehouse, custom endpoints, derived streams for more processing, or Fabric Activator to trigger actions.

  • Responsible AI

    Key points to understand about responsible AI include:

    • Fairness: AI models are trained using data, which is generally sourced and selected by humans. There’s a substantial risk that the data selection criteria, or the data itself, reflect unconscious bias that may cause a model to produce discriminatory outputs. AI developers need to take care to minimize bias in training data and to test AI systems for fairness.
    • Reliability and safety: Because AI is based on probabilistic models, it isn’t infallible. AI-powered applications need to take this into account and mitigate risks accordingly.
    • Privacy and security: Models are trained using data, which may include personal information. AI developers have a responsibility to ensure that the training data is kept secure, and that the trained models themselves can’t be used to reveal private personal or organizational details.
    • Inclusiveness: The potential of AI to improve lives and drive success should be open to everyone. AI developers should strive to ensure that their solutions don’t exclude some users.
    • Transparency: AI can sometimes seem like “magic”, but it’s important to make users aware of how the system works and any potential limitations it may have.
    • Accountability: Ultimately, the people and organizations that develop and distribute AI solutions are accountable for their actions. It’s important for organizations developing AI models and applications to define and apply a framework of governance to help ensure that they apply responsible AI principles to their work.

    Responsible AI examples

    Some examples of scenarios where responsible AI practices should be applied include:

    • An AI-powered college admissions system should be tested to ensure it evaluates all applications fairly, taking into account relevant academic criteria but avoiding unfounded discrimination based on irrelevant demographic factors.
    • An AI-powered robotic solution that uses computer vision to detect objects should avoid unintentional harm or damage. One way to accomplish this goal is to use probability values to determine “confidence” in object identification before interacting with physical objects, and avoid any action if the confidence level is below a specific threshold.
    • A facial identification system used in an airport or other secure area should delete personal images that are used for temporary access as soon as they’re no longer required. Additionally, safeguards should prevent the images from being made accessible to operators or users who have no need to view them.
    • A web-based chatbot that offers speech-based interaction should also generate text captions to avoid making the system unusable for users with a hearing impairment.
    • A bank that uses an AI-based loan-approval application should disclose the use of AI, and describe features of the data on which it was trained (without revealing confidential information).

  • Generative AI

    Key points to understand about generative AI include:

    • Generative AI is a branch of AI that enables software applications to generate new content: often natural language dialogs, but also images, video, code, and other formats.
    • The ability to generate content is based on a language model, which has been trained with huge volumes of data – often documents from the Internet or other public sources of information.
    • Generative AI models encapsulate semantic relationships between language elements (that’s a fancy way of saying that the models “know” how words relate to one another), and that’s what enables them to generate a meaningful sequence of text.
    • There are large language models (LLMs) and small language models (SLMs) – the difference is based on the volume of data and the number of variables in the model. LLMs are very powerful and generalize well, but can be more costly to train and use. SLMs tend to work well in scenarios that are more focused on specific topic areas, and usually cost less.

    Generative AI scenarios

    Common uses of generative AI include:

    • Implementing chatbots and AI agents that assist human users.
    • Creating new documents or other content (often as a starting point for further iterative development).
    • Automated translation of text between languages.
    • Summarizing or explaining complex documents.

  • Extract data and insights

    Key points to understand about using AI to extract data and insights include:

    • The basis for most document analysis solutions is a computer vision technology called optical character recognition (OCR).
    • While an OCR model can identify the location of text in an image, more advanced models can also interpret individual values in the document – and so extract specific fields.
    • While most data extraction models have historically focused on extracting fields from text-based forms, more advanced models that can extract information from audio recordings, images, and videos are becoming more readily available.

    Data and insight extraction scenarios

    Common uses of AI to extract data and insights include:

    • Automated processing of forms and other documents in a business process – for example, processing an expense claim.
    • Large-scale digitization of data from paper forms. For example, scanning and archiving census records.
    • Indexing documents for search.
    • Identifying key points and follow-up actions from meeting transcripts or recordings.

  • Natural language processing

    Key points to understand about natural language processing (NLP) include:

    • NLP capabilities are based on models that are trained to do particular types of text analysis.
    • While many natural language processing scenarios are handled by generative AI models today, there are many common text analytics use cases where simpler NLP language models can be more cost-effective.
    • Common NLP tasks include:
      • Entity extraction – identifying mentions of entities like people, places, and organizations in a document.
      • Text classification – assigning a document to a specific category.
      • Sentiment analysis – determining whether a body of text is positive, negative, or neutral and inferring opinions.
      • Language detection – identifying the language in which text is written.

     Note

    In this module, we’ve used the term natural language processing (NLP) to describe AI capabilities that derive meaning from “ordinary” human language. You might also see this area of AI referred to as natural language understanding (NLU).

    Natural language processing scenarios

    Common uses of NLP technologies include:

    • Analyzing documents or transcripts of calls and meetings to determine key subjects and identify specific mentions of people, places, organizations, products, or other entities.
    • Analyzing social media posts, product reviews, or articles to evaluate sentiment and opinion.
    • Implementing chatbots that can answer frequently asked questions or orchestrate predictable conversational dialogs that don’t require the complexity of generative AI.

  • Speech

    Key points to understand about speech include:

    • Speech recognition is the ability of AI to “hear” and interpret speech. Usually this capability takes the form of speech-to-text (where the audio signal for the speech is transcribed into text).
    • Speech synthesis is the ability of AI to vocalize words as spoken language. Usually this capability takes the form of text-to-speech in which information in text format is converted into an audible signal.
    • AI speech technology is evolving rapidly to handle challenges like ignoring background noise, detecting interruptions, and generating increasingly expressive and human-like voices.

    AI speech scenarios

    Common uses of AI speech technologies include:

    • Personal AI assistants in phones, computers, or household devices with which you interact by talking.
    • Automated transcription of calls or meetings.
    • Automating audio descriptions of video or text.
    • Automated speech translation between languages.

  • Computer vision

    Key points to understand about computer vision include:

    • Computer vision is accomplished by using large numbers of images to train a model.
    • Image classification is a form of computer vision in which a model is trained with images that are labeled with the main subject of the image (in other words, what it’s an image of) so that it can analyze unlabeled images and predict the most appropriate label – identifying the subject of the image.
    • Object detection is a form of computer vision in which the model is trained to identify the location of specific objects in an image.
    • There are more advanced forms of computer vision – for example, semantic segmentation is an advanced form of object detection where, rather than indicate an object’s location by drawing a box around it, the model can identify the individual pixels in the image that belong to a particular object.
    • You can combine computer vision and language models to create a multi-modal model that combines computer vision and generative AI capabilities.

    Computer vision scenarios

    Common uses of computer vision include:

    • Auto-captioning or tag-generation for photographs.
    • Visual search.
    • Monitoring stock levels or identifying items for checkout in retail scenarios.
    • Security video monitoring.
    • Authentication through facial recognition.
    • Robotics and self-driving vehicles.

  • Materialized views and stored functions

    Now that you understand basic KQL querying and optimization techniques, let’s explore materialized views and stored functions in eventhouses.

    Understand materialized views

    Materialized views are precomputed aggregations that solve a common performance challenge in KQL databases. KQL databases in eventhouses often contain millions or billions of rows from streaming data sources like IoT sensors, application logs, and other events. Running aggregation queries across these large datasets can take significant time and computing resources.

    Materialized views store precomputed aggregation results and automatically update them as new data arrives. Instead of recalculating metrics from all historical data every time you query, the materialized view maintains the results and only processes the new data to update the aggregations. This provides instant results for dashboards and reports, even when working with massive datasets.

    How automatic updates work

    A materialized view consists of two parts that work together to provide always-current results:

    • A materialized part: Precomputed aggregation results from data that has already been processed
    • A delta: New data that has arrived since the last background update

    When you query a materialized view, the system automatically combines both parts at query time to give you fresh, up-to-date results. This means materialized views always return current data, regardless of when the background materialization process last ran. Meanwhile, a background process periodically moves data from the delta part into the materialized part, keeping the precomputed results current. This approach provides the speed of precomputed results with the freshness of real-time data.
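
    If you can tolerate results that are only as fresh as the last materialization cycle, KQL also provides a materialized_view() function that reads just the materialized part and skips the delta, trading freshness for speed. A minimal sketch, using the TripsByVendor view created in the following section:

    kql

    // Reads only the precomputed part; results may lag behind the
    // latest events by one background materialization interval.
    materialized_view('TripsByVendor')
    | summarize vendor_revenue = sum(total_revenue) by vendor_id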

    Create materialized views

    A materialized view encapsulates a KQL summarize statement that automatically updates as new data arrives. Here’s an example that tracks trip metrics by vendor and day:

    kql

    .create materialized-view TripsByVendor on table TaxiTrips
    {
        TaxiTrips
        | summarize trips = count(), avg_fare = avg(fare_amount), total_revenue = sum(fare_amount)
        by vendor_id, pickup_date = format_datetime(pickup_datetime, "yyyy-MM-dd")
    }
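
    After creating a view, you can verify that it exists and is materializing as expected. A quick check, assuming you have permission to run management commands:

    kql

    // Shows the view's properties, such as its source table, query,
    // and health state
    .show materialized-view TripsByVendor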
    

    Query materialized views

    Once created, materialized views can be queried like regular tables:

    kql

    TripsByVendor
    | where todatetime(pickup_date) >= ago(7d)
    | project pickup_date, vendor_id, trips, avg_fare, total_revenue
    | sort by pickup_date desc, total_revenue desc
    

    Understand stored functions

    KQL includes the ability to encapsulate a query as a function, making it easier to repeat common queries. You can also specify parameters for a function, so you can repeat the same query with variable values.

    Stored functions are useful in eventhouses where you have streaming data and multiple people writing queries. Instead of writing the same filtering or transformation logic repeatedly, you can define it once as a function and reuse it across different queries. Functions also help ensure that calculations are performed consistently when different team members need to apply the same logic to the data.

    Create a function

    kql

    .create-or-alter function trips_by_min_passenger_count(num_passengers:long)
    {
        TaxiTrips
        | where passenger_count >= num_passengers 
        | project trip_id, pickup_datetime
    }
    

    To call the function, use it like a table. In this example, the trips_by_min_passenger_count function is used to find 10 trips with at least three passengers:

    kql

    trips_by_min_passenger_count(3)
    | take 10
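
    You can also give a function a docstring and default parameter values when you create it, which helps teammates discover and use it. A sketch extending the example above (the docstring and folder values are illustrative):

    kql

    // The 'with' properties document the function; num_passengers now
    // defaults to 1 when the caller omits it.
    .create-or-alter function with (docstring = "Trips with at least num_passengers riders", folder = "TaxiQueries")
    trips_by_min_passenger_count(num_passengers:long = 1)
    {
        TaxiTrips
        | where passenger_count >= num_passengers
        | project trip_id, pickup_datetime
    }

    With the default in place, calling trips_by_min_passenger_count() with no argument returns trips with at least one passenger.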
