  • Create delta tables

    When you create a table in a Microsoft Fabric lakehouse, a delta table is defined in the metastore for the lakehouse and the data for the table is stored in the underlying Parquet files for the table.

    With most interactive tools in the Microsoft Fabric environment, the details of mapping the table definition in the metastore to the underlying files are abstracted. However, when working with Apache Spark in a lakehouse, you have greater control of the creation and management of delta tables.

    Creating a delta table from a dataframe

    One of the easiest ways to create a delta table in Spark is to save a dataframe in the delta format. For example, the following PySpark code loads a dataframe with data from an existing file, and then saves that dataframe as a delta table:

    Python

    # Load a file into a dataframe
    df = spark.read.load('Files/mydata.csv', format='csv', header=True)
    
    # Save the dataframe as a delta table
    df.write.format("delta").saveAsTable("mytable")
    

    The code specifies that the table should be saved in delta format with a specified table name. The data for the table is saved in Parquet files (regardless of the format of the source file you loaded into the dataframe) in the Tables storage area in the lakehouse, along with a _delta_log folder containing the transaction logs for the table. The table is listed in the Tables folder for the lakehouse in the Data explorer pane.
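
    For example, you can confirm that the table was created by querying it by name from the same notebook. This is a minimal check based on the mytable example above:

    Python

    # Query the new delta table by name
    spark.sql("SELECT * FROM mytable LIMIT 10").show()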

    Managed vs external tables

    In the previous example, the dataframe was saved as a managed table; meaning that the table definition in the metastore and the underlying data files are both managed by the Spark runtime for the Fabric lakehouse. Deleting the table will also delete the underlying files from the Tables storage location for the lakehouse.

    You can also create tables as external tables, in which the relational table definition in the metastore is mapped to an alternative file storage location. For example, the following code creates an external table for which the data is stored in a folder in the Files storage location for the lakehouse:

    Python

    df.write.format("delta").saveAsTable("myexternaltable", path="Files/myexternaltable")
    

    In this example, the table definition is created in the metastore (so the table is listed in the Tables user interface for the lakehouse), but the Parquet data files and JSON log files for the table are stored in the Files storage location (and will be shown in the Files node in the Lakehouse explorer pane).

    You can also specify a fully qualified path for a storage location, like this:

    Python

    df.write.format("delta").saveAsTable("myexternaltable", path="abfss://my_store_url..../myexternaltable")
    

    Deleting an external table from the lakehouse metastore doesn’t delete the associated data files.
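
    As a minimal sketch of this behavior (using the myexternaltable example above), you could drop the table and then confirm that the delta files are still readable:

    Python

    # Dropping an external table removes only its metastore definition
    spark.sql("DROP TABLE myexternaltable")

    # The underlying delta files remain and can still be read directly
    df = spark.read.format("delta").load("Files/myexternaltable")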

    Creating table metadata

    While it’s common to create a table from existing data in a dataframe, there are often scenarios where you want to create a table definition in the metastore that will be populated with data in other ways. There are multiple ways you can accomplish this goal.

    Use the DeltaTableBuilder API

    The DeltaTableBuilder API enables you to write Spark code to create a table based on your specifications. For example, the following code creates a table with a specified name and columns.

    Python

    from delta.tables import *
    
    DeltaTable.create(spark) \
      .tableName("products") \
      .addColumn("Productid", "INT") \
      .addColumn("ProductName", "STRING") \
      .addColumn("Category", "STRING") \
      .addColumn("Price", "FLOAT") \
      .execute()
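
    Once the empty table exists, it can be populated in other ways, for example with a Spark SQL INSERT statement run from PySpark. The values here are purely illustrative:

    Python

    # Insert an illustrative row into the empty products table created above
    spark.sql("INSERT INTO products VALUES (1, 'Widget', 'Gadgets', 2.99)")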
    

    Use Spark SQL

    You can also create delta tables by using the Spark SQL CREATE TABLE statement, as shown in this example:

    SQL

    %%sql
    
    CREATE TABLE salesorders
    (
        Orderid INT NOT NULL,
        OrderDate TIMESTAMP NOT NULL,
        CustomerName STRING,
        SalesTotal FLOAT NOT NULL
    )
    USING DELTA
    

    The previous example creates a managed table. You can also create an external table by specifying a LOCATION parameter, as shown here:

    SQL

    %%sql
    
    CREATE TABLE MyExternalTable
    USING DELTA
    LOCATION 'Files/mydata'
    

    When creating an external table, the schema of the table is determined by the Parquet files containing the data in the specified location. This approach can be useful when you want to create a table definition that references data that has already been saved in delta format, or based on a folder where you expect to ingest data in delta format.
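
    For example, a minimal sketch that combines the two steps described in this unit: save a dataframe in delta format to a folder, then overlay an external table definition on the existing files (the folder name matches the earlier example):

    Python

    # Save a dataframe in delta format to a folder (no table definition yet)
    df.write.format("delta").save("Files/mydata")

    # Create an external table definition over the existing delta files
    spark.sql("CREATE TABLE MyExternalTable USING DELTA LOCATION 'Files/mydata'")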

    Saving data in delta format

    So far you’ve seen how to save a dataframe as a delta table (creating both the table schema definition in the metastore and the data files in delta format) and how to create the table definition (which creates the table schema in the metastore without saving any data files). A third possibility is to save data in delta format without creating a table definition in the metastore. This approach can be useful when you want to persist the results of data transformations performed in Spark in a file format over which you can later “overlay” a table definition, or that you can process directly by using the Delta Lake API.

    For example, the following PySpark code saves a dataframe to a new folder location in delta format:

    Python

    delta_path = "Files/mydatatable"
    df.write.format("delta").save(delta_path)
    

    Delta files are saved in Parquet format in the specified path and include a _delta_log folder containing transaction log files. Transaction logs record any changes to the data, such as updates made to external tables or through the Delta Lake API.
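
    You can later read the data in that folder back into a dataframe without any table definition, for example:

    Python

    # Read the delta folder directly into a dataframe
    df = spark.read.format("delta").load(delta_path)
    df.show()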

    You can replace the contents of an existing folder with the data in a dataframe by using the overwrite mode, as shown here:

    Python

    new_df.write.format("delta").mode("overwrite").save(delta_path)
    

    You can also add rows from a dataframe to an existing folder by using the append mode:

    Python

    new_rows_df.write.format("delta").mode("append").save(delta_path)
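
    Because each overwrite and append is recorded in the transaction log, you can inspect the history of the folder with the Delta Lake API. A minimal sketch, reusing the delta_path variable from the examples above:

    Python

    from delta.tables import DeltaTable

    # Show the transaction history of the delta folder (one row per operation)
    delta_table = DeltaTable.forPath(spark, delta_path)
    delta_table.history().show()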


  • Understand Delta Lake

    Delta Lake is an open-source storage layer that adds relational database semantics to Spark-based data lake processing. Tables in Microsoft Fabric lakehouses are Delta tables, which is signified by the triangular Delta (Δ) icon on tables in the lakehouse user interface.

    Screenshot of the salesorders table viewed in the Lakehouse explorer in Microsoft Fabric.

    Delta tables are schema abstractions over data files that are stored in Delta format. For each table, the lakehouse stores a folder containing Parquet data files and a _delta_log folder in which transaction details are logged in JSON format.

    Screenshot of the files view of the parquet files in the salesorders table viewed through Lakehouse explorer.

    The benefits of using Delta tables include:

    • Relational tables that support querying and data modification. With Apache Spark, you can store data in Delta tables that support CRUD (create, read, update, and delete) operations. In other words, you can select, insert, update, and delete rows of data in the same way you would in a relational database system.
    • Support for ACID transactions. Relational databases are designed to support transactional data modifications that provide atomicity (transactions complete as a single unit of work), consistency (transactions leave the database in a consistent state), isolation (in-process transactions can’t interfere with one another), and durability (when a transaction completes, the changes it made are persisted). Delta Lake brings this same transactional support to Spark by implementing a transaction log and enforcing serializable isolation for concurrent operations.
    • Data versioning and time travel. Because all transactions are logged in the transaction log, you can track multiple versions of each table row and even use the time travel feature to retrieve a previous version of a row in a query (see the example after this list).
    • Support for batch and streaming data. While most relational databases include tables that store static data, Spark includes native support for streaming data through the Spark Structured Streaming API. Delta Lake tables can be used as both sinks (destinations) and sources for streaming data.
    • Standard formats and interoperability. The underlying data for Delta tables is stored in Parquet format, which is commonly used in data lake ingestion pipelines. Additionally, you can use the SQL analytics endpoint for the Microsoft Fabric lakehouse to query Delta tables in SQL.
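
    As an example of the time travel capability noted in the list above, the following minimal PySpark sketch reads earlier versions of a delta table’s data. The path, version number, and timestamp are illustrative:

    Python

    # Read a specific earlier version of the data by version number
    df_v0 = spark.read.format("delta").option("versionAsOf", 0).load("Files/mydatatable")

    # ...or read the data as it existed at a point in time (illustrative timestamp)
    df_old = spark.read.format("delta").option("timestampAsOf", "2024-01-01").load("Files/mydatatable")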


  • Apply granular permissions

    When the permissions provided by workspace roles or item permissions are insufficient, more granular permissions, such as table and row-level security and file and folder access, can be set through the:

    • SQL analytics endpoint
    • OneLake data access roles (preview)
    • Warehouse
    • Semantic model

    Configure data access through the SQL analytics endpoint in a lakehouse

    Data in a lakehouse can be read through the SQL analytics endpoint. Each lakehouse has an autogenerated SQL analytics endpoint that can be used to transition between the lake view of the lakehouse and the SQL view of the lakehouse. The lake view supports data engineering and Apache Spark, while the SQL view of the same lakehouse allows you to create views, functions, and stored procedures, and to apply SQL security and object-level permissions.

    Data in a Fabric lakehouse is stored with the following folder structure:

    • /Files
    • /Tables

    View the SQL analytics endpoint view of the lakehouse

    The SQL analytics endpoint is used to read data in the /Tables folder of the lakehouse using T-SQL.

    Screenshot of SQL analytics endpoint view.

    Apply granular permissions to the lakehouse using T-SQL

    Using the SQL analytics endpoint, granular T-SQL permissions can be applied to SQL objects using Data Control Language (DCL) commands such as GRANT, REVOKE, and DENY.

    Row-level security, column-level security, and dynamic data masking can also be applied using the SQL analytics endpoint.

    Configure data access through the lake view of the lakehouse

    The lake view of the lakehouse is used to read data in the /Tables and /Files folders of the lakehouse.

    Screenshot of files in lakehouse.

    Use OneLake data access roles to secure data

    Workspace and item permissions provide coarse-grained access to data in a lakehouse. To further refine data access, folders in the lake view of the lakehouse can be secured using OneLake data access roles (preview). You can create custom roles within a lakehouse and grant read permissions only to specific folders in OneLake. Folder security is inherited by all subfolders. To create a custom OneLake data access role:

    1. Select Manage OneLake data access (preview) from the menu in the lake view of the lakehouse. 
    2. In the New Role window, create a new role name and select the folders to grant access to.
    3. Once the role is created, assign a user or group to the role and select the permissions to assign.

     Tip

    For more information on how OneLake RBAC permissions are evaluated with workspace and item permissions, see: How OneLake RBAC permissions are evaluated with Fabric permissions

    Configure granular warehouse permissions

    Granular permissions can be applied to warehouses using the SQL analytics endpoint, similar to the way the endpoint is used for the lakehouse. The same permissions can be applied: GRANT, REVOKE, and DENY and row-level security, column-level security, and dynamic data masking.

    Screenshot of warehouse granular permissions.

    Configure Semantic model permissions

    A user’s role in a workspace implicitly grants them permission on the semantic models in the workspace. Semantic models allow for security to be defined using DAX, and more granular permission can be applied using row-level security (RLS). To learn more, see the Microsoft Fabric documentation on managing RLS and semantic model permissions.


  • Configure workspace and item permissions

    Workspaces are environments where users can collaborate to create groups of items. Items are the resources you can work with in Fabric such as lakehouses, warehouses, and reports. Workspace roles are preconfigured sets of permissions that let you manage what users can do and access in a Fabric workspace.

    Item permissions control access to individual Fabric items within a workspace. Item permissions let you either adjust the permissions set by a workspace role or give a user access to one or more items within a workspace without adding the user to a workspace role.

    Let’s consider some scenarios where you would need to configure data access using workspace roles and item permissions.

    Understand workspace roles

    Suppose you work at a health care company as the Fabric security admin. You need to set up access for a new data engineer. The data engineer needs the ability to:

    • Create Fabric items in an existing workspace
    • Read all data in an existing lakehouse that’s in the same workspace where they can create Fabric items

    Workspace roles control what users can do and access within a Fabric workspace. There are four workspace roles and they apply to all items within a workspace. Workspace roles can be assigned to individuals, security groups, Microsoft 365 groups, and distribution lists. Users can be assigned to the following roles:

    • Admin – Can view, modify, share, and manage all content and data in the workspace, and manage permissions.
    • Member – Can view, modify, and share all content and data in the workspace.
    • Contributor – Can view and modify all content and data in the workspace.
    • Viewer – Can view all content and data in the workspace, but can’t modify it.

     Tip

    For a full list of the permissions associated with workspace roles, see: Roles in workspaces

    To meet the access requirements for the new data engineer, you can assign them the workspace Contributor role. This gives them access to modify content in the workspace, including creating Fabric items like lakehouses. The Contributor role also allows them to read data in the existing lakehouse.

    Assign workspace roles

    Users can be added to workspace roles by using the Manage access button within a workspace. Add a user by entering the user’s name and selecting the workspace role to assign them in the Add people dialog.

    Screenshot of clicking the manage access button.

    Configure item permissions

    Item permissions control access to individual Fabric items within a workspace. Item permissions can be used to give a user access to one or more items within a workspace without adding the user to a workspace role, or can be used together with workspace roles.

    Suppose that after a few months of having Contributor access on a workspace, a data engineer no longer needs to create Fabric items and now only needs to view a single lakehouse and read data in it.

    Since the engineer no longer needs to view all items in the workspace, the Contributor workspace role can be removed, and item permissions on the lakehouse can be configured so that the engineer can see only the lakehouse metadata and data and nothing else in the workspace. This item access configuration helps you adhere to the principle of least privilege, where the engineer only has access to what’s needed to perform their job duties.

    An item can be shared and item permissions can be configured by selecting the ellipsis (…) next to a Fabric item in a workspace and then selecting Manage permissions.

    Screenshot of configuring item permissions.

    In the Grant people access window that appears after selecting Manage permissions, if you add the user and don’t select any of the checkboxes under Additional permissions, the user will have read access to the lakehouse metadata. The user won’t have access to the underlying data in the lakehouse. To grant the engineer the ability to read data and not just metadata, Read all SQL endpoint data or Read all Apache Spark can be selected.

    Screenshot of grant people lakehouse read all access.


  • Understand the Fabric security model

    Data access in organizations is often restricted by users’ responsibilities and roles, and by an organization’s Fabric deployment patterns and data architecture. Fabric has a flexible, multi-layer security model that allows you to configure security to accommodate different data access requirements. Having the ability to control permissions at different layers means you can adhere to the principle of least privilege, restricting user permissions to only what’s needed to perform job tasks.

    Fabric has three security levels and they’re evaluated sequentially to determine whether a user has data access. The order of evaluation for access is:

    1. Microsoft Entra ID authentication: checks if the user can authenticate to the Azure identity and access management service, Microsoft Entra ID.
    2. Fabric access: checks if the user can access Fabric.
    3. Data security: checks if the user can perform the action they’ve requested on a table or file.

    The third level, data security, has several building blocks that can be configured individually or together to align with different access requirements. The primary access controls in Fabric are:

    • Workspace roles
    • Item permissions
    • Compute or granular permissions
    • OneLake data access controls (preview)

    It’s helpful to envision these building blocks in a hierarchy to understand how access controls can be applied individually or together.

    Screenshot of Fabric access control hierarchy.

    A workspace in Fabric enables you to distribute ownership and access policies using workspace roles. Within a workspace, you can create Fabric data items like lakehouses, data warehouses, and semantic models. Item permissions can be inherited from a workspace role or set individually by sharing an item. When workspace roles provide too much access, items can be shared using item permissions to ensure proper security.

    Within each data item, granular engine permissions such as Read, ReadData, or ReadAll can be applied.

    Compute or granular permissions can be applied within a specific compute engine in Fabric, like the SQL analytics endpoint or semantic model.

    Fabric data items store their data in OneLake. Access to data in the lakehouse can be restricted to specific files or folders using the role-based access control (RBAC) feature called OneLake data access controls (preview).


  • Take action with Microsoft Fabric Activator

    When monitoring surfaces changing data, anomalies, or critical events, alerts are generated or actions are triggered. Real-time data analytics is commonly based on the ingestion and processing of a data stream that consists of a perpetual series of data, typically related to specific point-in-time events. Real-Time Intelligence in Fabric contains a tool called Activator that can be used to trigger actions on streaming data. For example, a stream of data from an environmental IoT weather sensor might be used to trigger emails to sailors when wind thresholds are met. When certain conditions or logic are met, an action is taken, such as alerting users, executing Fabric job items like a pipeline, or kicking off Power Automate workflows. The logic can be a defined threshold, a pattern like events happening repeatedly over a time period, or the results of logic defined by a Kusto Query Language (KQL) query.

    What is Activator

    Activator is a technology in Microsoft Fabric that enables automated processing of events that trigger actions. For example, you can use Activator to notify you by email when a value in an eventstream deviates from a specific range or to run a notebook to perform some Spark-based data processing logic when a real-time dashboard is updated.

    Screenshot of an Activator alert in Microsoft Fabric.

    Understand Activator key concepts

    Activator operates based on four core concepts: Events, Objects, Properties, and Rules (illustrated conceptually in the sketch after the following list).

    • Events - Each record in a stream of data represents an event that has occurred at a specific point in time.
    • Objects - The data in an event record can be used to represent an object, such as a sales order, a sensor, or some other business entity.
    • Properties – The fields in the event data can be mapped to properties of the business object, representing some aspect of its state. For example, a total_amount field might represent a sales order total, or a temperature field might represent the temperature measured by an environmental sensor.
    • Rules – The key to using Activator to automate actions based on events is to define rules that set conditions under which an action is triggered based on the property values of objects referenced in events. For example, you might define a rule that sends an email to a maintenance manager if the temperature measured by a sensor exceeds a specific threshold.
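
    Activator rules are configured in the Fabric user interface rather than in code, but the following Python sketch is a conceptual illustration of the event-object-property-rule flow. All names and threshold values here are hypothetical:

    Python

    # Conceptual illustration only: Activator rules are defined in the Fabric UI.
    # Each record in the stream is an event; sensor_id identifies the object and
    # temperature is one of its properties. All values here are hypothetical.
    events = [
        {"sensor_id": "s1", "temperature": 71.5},
        {"sensor_id": "s2", "temperature": 104.2},
    ]

    TEMPERATURE_THRESHOLD = 100.0  # the rule's condition on a property value

    for event in events:
        if event["temperature"] > TEMPERATURE_THRESHOLD:
            # The rule's action: in Activator, this might be an email or Teams alert
            print(f"Alert: sensor {event['sensor_id']} exceeded the threshold")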

    Use cases for Activator

    Activator can help you in various scenarios, such as dynamic inventory management, real-time customer engagement, and effective resource allocation in cloud environments. It’s a potent tool for any circumstance that requires real-time data analysis and actions.

    Use Activator to:

    • Initiate marketing actions when product sales drop.
    • Send notifications when temperature changes could affect perishable goods.
    • Flag real-time issues affecting the user experience on apps and websites.
    • Trigger alerts when a shipment hasn’t been updated within an expected time frame.
    • Send alerts when a customer’s account balance crosses a certain threshold.
    • Respond to anomalies or failures in data processing workflows immediately.
    • Run ads when same-store sales decline.
    • Alert store managers to move food from failing grocery store freezers before it spoils.


  • Use Microsoft Fabric Monitor Hub

    Visualization tools make monitoring easier. They help you identify trends or anomalies. Monitor hub is the monitoring visualization tool in Microsoft Fabric. Monitor hub collects and aggregates data from selected Fabric items and processes and presents Fabric activity data in a common interface, so you can view the status of multiple different data integration, transformation, movement, and analysis activities in Fabric in one place rather than monitoring each separately.

    Activities displayed in the Monitor hub

    Some of the activities you can see monitoring metadata for in the Microsoft Fabric Monitor hub include:

    • Data pipeline execution history
    • Dataflow executions
    • Datamart and semantic model refreshes
    • Spark job and notebook execution history and job details

    View the Monitor Hub

    The Monitor hub can be opened by selecting Monitor from the Fabric navigation pane.

    Screenshot of the Microsoft Fabric Monitor hub interface.

    View Fabric activity detail

    Each activity in Monitor hub can be selected, and several actions can be performed for the selected activity. Actions vary by activity and include options such as opening the activity, retrying it, or viewing activity details and historical runs. To view this information, select the ellipsis that appears when you hover over an activity.

    Screenshot of the Microsoft Fabric Monitor hub details interface.

    When you select View detail, the screen that appears is customized for the activity you select and provides clarity about what happened during the activity. You can view metadata such as:

    • Activity status
    • Start and end time
    • Duration


  • Understand monitoring

    Monitoring is the process of collecting system data and metrics that determine if a system is healthy and operating as expected. Monitoring exposes errors that occurred and when they happened. To investigate issues and remediate errors, historical data is analyzed to get a picture of the health of a system or process.

    Monitoring Fabric activities

    In Fabric, you schedule activities and jobs that perform tasks like data movement and transformation. Activities have dependencies on one another. You need to make sure that data arrives in its expected location on time and that system errors or delays don’t affect users or downstream activities. End-to-end processes need to be managed to ensure they’re reliable, performant, and resilient. One aspect of this monitoring is identifying and handling long-running operations and errors effectively. By doing this, you can minimize downtime and quickly address any underlying issues.

    The following activities in Fabric allow you to perform tasks that deliver data to users. These activities should be monitored:

    • Data pipeline activity – A data pipeline is a group of activities that together perform a data ingestion task. Pipelines allow you to manage extract, transform, and load (ETL) activities together instead of individually. Monitor the success or failure of jobs and pipeline activities. Look for errors if the pipeline failed. View job history to compare current activity performance to past job execution performance to gain insight into when errors were first introduced into a process.
    • Dataflows – A dataflow is a tool for ingesting, loading, and transforming data using a low-code interface. Dataflows can be run manually or scheduled or run as part of pipeline orchestration. Monitor start and end times, status, duration, and table load activities. To investigate issues, drill down into activities and view information about errors.
    • Semantic model refreshes – A semantic model is a visual representation of a data model that’s ready for reporting and visualization. It contains transformations, calculations, and data relationships. Changes to the data model require the semantic model to be refreshed. Semantic models can be refreshed from data pipelines using the semantic model refresh activity. Monitor for refresh retries to help identify transient issues, before classifying an issue as a failure.
    • Spark jobs, notebooks and lakehouses – Notebooks are an interface for developing Apache Spark jobs. Data can be loaded, or transformed for lakehouses using Spark and notebooks. Monitor Spark job progress, task execution, resource usage, and review Spark logs.
    • Microsoft Fabric Eventstreams – Events are observations about the state of an object, like a timestamp for weather sensors. Eventstreams in Fabric are set up to run perpetually to ingest real-time or streaming events into Fabric and transform them for analytics needs, and then route them to various destinations. Monitor streaming event data, ingestion status, and ingestion performance.

    Monitoring best practices

    Continuously monitor the data ingestion, transformation, and load processes to ensure they’re running smoothly. Monitoring best practices include:

    • Identifying what to monitor and tracking metrics.
    • Collecting and analyzing data on a regular basis to identify normal behavior so you can spot anomalies when they occur.
    • Reviewing logs and metrics regularly to identify and establish parameters for normal system behavior.
    • Taking action to resolve problems when metrics and logs show deviations from normal behavior.
    • Optimizing performance by using monitoring data to identify bottlenecks or performance issues.


  • Evaluate responses

    When educators write prompts for information, a generative AI model doesn’t “know” the answer. Instead, it predicts the most likely response based on its training data. Regardless of the quality of your prompt, the model might generate an incorrect or fabricated response.

    Misinformation can be spread through these fabrications. Copilot Chat aims to base all its responses on reliable sources—but AI-generated responses might be incorrect, and non-Microsoft content on the internet might not always be accurate or reliable. Copilot Chat might sometimes misrepresent the information it finds, and you might see responses that sound convincing but are incomplete, inaccurate, or inappropriate.

    While Copilot Chat works to avoid sharing unexpected offensive content in search results and takes steps to prevent its chat features from engaging on potentially harmful topics, educators might still get unexpected results. Provide feedback or report concerns directly to Microsoft by using the feedback features beneath the response.

    When Copilot Chat provides a response to a prompt, it also provides two key pieces of information: the search terms used to generate the response and the links to content sources. Educators can use these details to inform their evaluation of the response. If the prompt terms don’t represent the intended question, start a new prompt with different wording. If the source links aren’t reliable, ask Copilot Chat to refine the response using specific, more reliable websites that you provide.


  • Work with images

    Based on Bing data, images are one of the most searched categories—second only to general web searches. Historically, search was limited to images that already existed on the web. Microsoft Designer and Visual Search help educators incorporate visual tools into their search and build understanding of new concepts.

    There are two ways to work with images in Copilot Chat. The first way is to ask Copilot Chat to create a new image. The second way is to use Visual Search, which includes adding an image to the prompt.

    Image creation by Microsoft Designer is powered by an advanced version of the DALL·E model from our partners at OpenAI. It creates an image simply by using words to describe the desired picture in over 100 languages.

    Ask Copilot Chat to create a brand-new image with a prompt that begins with “create an image of” or “draw an image of.” Then, finish the prompt with an exact description. Image creation works best with lots of description, so add details like adjectives, locations, and artistic styles to help guide the output. Also consider using point of view and lighting direction.

    Educators can use Copilot Chat to make images for class presentations, newsletters, quizzes, avatars, assignments, and more.

     Tip

    Try one of the following sample prompts in Copilot Chat or write one based on your needs or interests. Visit copilot.cloud.microsoft to begin, then add your prompt to a new topic.

    Create an image of a blue panda bear wearing sunglasses on the beach in digital art format.

    Draw a 3D typography letter B on a green background with shiny chrome texture in a minimalist style.

    Create a cartoon image of an apartment decorated in primary colors with a television, a couch, and a plant in one corner.

    Draw an image of a sunset over the Roman coliseum in a realistic style to use for my course syllabus header.

    When prompting Copilot Chat to generate an image, Microsoft Designer is invoked to create a graphic that matches the prompt description. Educators can use these images in class newsletters, presentations, lessons, and more. Microsoft Designer uses the latest DALL·E 3 model from OpenAI, which delivers a huge leap forward with more beautiful creations and better renderings than DALL·E 2.

    Model digital citizenship for learners by acknowledging the images were created with AI and include the prompt as a teachable moment. Microsoft doesn’t claim ownership of the images created by Microsoft Designer.

    Each image created is original, so images created by the sample prompts might be different with each chat. Regenerate another set of images if the first set isn’t ideal for your purpose.

    These are sample images generated by the previous prompts.

    Screenshots of examples of images generated by Designer inside Copilot.

    With Visual Search in Copilot Chat, educators can input images and ask questions about them. Ask questions about images that are difficult to describe. For example: learn about a landmark you haven’t seen, identify a plant or animal you don’t recognize, and more.

    To use Visual Search in Copilot Chat:

    1. Select the Add an image icon in the text box in Copilot Chat.

       Screenshot showing the Visual Search icon in Microsoft Copilot.

    2. In the Microsoft 365 Copilot app for mobile, you can upload an image file or take a photo to add an image.

       Screenshot showing the Visual Search image selection options in Microsoft Copilot.
    3. Ask a question related to the image.

    Copilot Chat first analyzes the photo to blur faces for privacy, then interprets the image, searches for information about the image, and even provides additional details like a map or link to learn more.

    Sample prompt

    1. Find a photo of an iconic location around the world or type of animal.
    2. Upload the photo to Copilot Chat’s Visual Search and add a prompt like one of the following examples:

       Explain to me where this statue is located. Include a map to the destination.

       Identify this animal. Give additional information about the animal’s habitat, food sources, and lifespan. Organize this information into a list.

    Sample response

    Following is a sample response for the first prompt.

    Screenshot of a sample Visual Search prompt and response in which Copilot identifies the statue.

    Additional protections for images

    Microsoft’s development of AI is guided by its Responsible AI principles to help deploy AI systems responsibly. To curb the potential misuse of image creation tools like Microsoft Designer, Microsoft works together with DALL·E’s developer OpenAI to deliver an experience that encourages responsible use with additional protections. For example, there are controls in place that aim to limit the generation of harmful or unsafe images. When Copilot Chat detects a potentially harmful image that could be generated by a prompt, it blocks the prompt and warns the user. Microsoft also makes it clear that Microsoft Designer’s images are generated by AI.

    When you upload an image, Copilot Chat uses facial blurring and other safety mechanisms before sending the image to the AI model for processing. Facial blurring protects the privacy of people in the image. The face blurring technology relies on context clues to determine where to blur and attempts to blur all faces.
