Swashbuckle is a NuGet package that provides a way to automatically generate Swagger documentation for ASP.NET Web API projects. Swagger is a specification and set of tools that help developers design, build, document, and consume RESTful APIs. With Swashbuckle, you can easily add Swagger documentation to your Web API project by annotating your code with attributes that describe your API endpoints, parameters, and responses. Swashbuckle then uses this information to generate a Swagger JSON file, which can be used to generate interactive API documentation, client SDKs, and more.
There are three main components to Swashbuckle:
Swashbuckle.AspNetCore.Swagger: a Swagger object model and middleware to expose SwaggerDocument objects as JSON endpoints.
Swashbuckle.AspNetCore.SwaggerGen: a Swagger generator that builds SwaggerDocument objects directly from your routes, controllers, and models. It’s typically combined with the Swagger endpoint middleware to automatically expose Swagger JSON.
Swashbuckle.AspNetCore.SwaggerUI: an embedded version of the Swagger UI tool. It interprets Swagger JSON to build a rich, customizable experience for describing the web API functionality. It includes built-in test harnesses for the public methods.
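These three packages are typically wired up together. The following minimal sketch shows one common configuration in an ASP.NET Core Program.cs; your project might register additional options, such as document metadata or XML comments:

C#

var builder = WebApplication.CreateBuilder(args);

// Required so SwaggerGen can discover minimal API endpoints
builder.Services.AddEndpointsApiExplorer();

// Swashbuckle.AspNetCore.SwaggerGen: builds SwaggerDocument objects
// from your routes, controllers, and models
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Swashbuckle.AspNetCore.Swagger: middleware that exposes the generated
// document as a JSON endpoint (/swagger/v1/swagger.json by default)
app.UseSwagger();

// Swashbuckle.AspNetCore.SwaggerUI: serves the interactive UI at /swagger
app.UseSwaggerUI();

app.Run();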
In this unit, you learn how to use the IHttpClientFactory to handle the HTTP client creation and disposal, and to use that client to perform REST operations in an ASP.NET Blazor Web app. The code samples used throughout this unit are based on interacting with an API that enables managing a list of fruit stored in a database. The information in this unit is based on using code-behind files in a Razor app.
The following code represents the data model that is referenced in the code examples:
C#

public class FruitModel
{
    // An id assigned by the database
    public int id { get; set; }

    // The name of the fruit
    public string? name { get; set; }

    // A boolean to indicate if the fruit is in stock
    public bool instock { get; set; }
}
Register IHttpClientFactory in your app
To add IHttpClientFactory to your app, register it by calling AddHttpClient in the Program.cs file. The following code example uses the named client type, sets the base address of the API used in REST operations, and is referenced throughout the rest of this unit.
C#

// Add services to the container.
builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents();

// Add IHttpClientFactory to the container and set the name of the factory
// to "FruitAPI". The base address for API requests is also set.
builder.Services.AddHttpClient("FruitAPI", httpClient =>
{
    httpClient.BaseAddress = new Uri("http://localhost:5050/");
});

var app = builder.Build();
Identify the operation requirements in the API
Before performing operations with an API, you need to identify what the API is expecting:
API endpoint: Identify the endpoint for the operation so you can properly adjust the URI stored in the base address if needed.
Data requirements: Identify if the operation is returning/expecting an enumerable or just a single piece of data.
Note
The code samples throughout the rest of this unit assume each HTTP operation is handled on a separate page in the solution.
Perform a GET operation
A GET operation shouldn’t send a body and is used, as the method name indicates, to retrieve data from a resource. To perform an HTTP GET operation, given an HttpClient and a URI, use the HttpClient.GetAsync method. For example, if you want to create a table on the Blazor app’s home page (Home.razor) to display the results of a GET operation, you need to add the following to the code-behind file (Home.razor.cs):
Use dependency injection to add the IHttpClientFactory to the page model.
Create an instance of the HttpClient.
Perform the GET operation and deserialize the results into your data model.
The following code example shows how to perform a GET operation. Be sure to read the comments in the code.
C#

public partial class Home : ComponentBase
{
    // IHttpClientFactory set using dependency injection
    [Inject]
    public required IHttpClientFactory HttpClientFactory { get; set; }

    [Inject]
    private NavigationManager? NavigationManager { get; set; }

    // Add the data model; an array is expected as a response
    private IEnumerable<FruitModel>? _fruitList;

    // Begin the GET operation when the component is initialized
    protected override async Task OnInitializedAsync()
    {
        // Create the HTTP client using the "FruitAPI" named factory
        var httpClient = HttpClientFactory.CreateClient("FruitAPI");

        // Perform the GET request and store the response. The parameter
        // in GetAsync specifies the endpoint in the API
        using HttpResponseMessage response = await httpClient.GetAsync("/fruits");

        // If the request is successful, deserialize the results into the data model
        if (response.IsSuccessStatusCode)
        {
            using var contentStream = await response.Content.ReadAsStreamAsync();
            _fruitList = await JsonSerializer.DeserializeAsync<IEnumerable<FruitModel>>(contentStream);
        }
        else
        {
            // If the request is unsuccessful, log the error message
            Console.WriteLine($"Failed to load fruit list. Status code: {response.StatusCode}");
        }
    }
}
Perform a POST operation
A POST operation should send a body and is used to add data to a resource. To perform an HTTP POST operation, given an HttpClient and a URI, use the HttpClient.PostAsync method. If you want to use a form to add items to the data on your home page, you need to:
Use dependency injection to add the IHttpClientFactory to the page model.
Bind the data to the form using either the EditForm or EditContext model.
Serialize the data you want to add using the JsonSerializer.Serialize method.
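Following those steps, a sketch of a POST handler in a page’s code-behind might look like the following. The field name _newFruit and the method name Submit are assumptions for illustration; the /fruits endpoint matches the GET example:

C#

// Bound to the form via EditForm; populated by the user
private FruitModel _newFruit = new();

private async Task Submit()
{
    // Create the HTTP client using the "FruitAPI" named factory
    var httpClient = HttpClientFactory.CreateClient("FruitAPI");

    // Serialize the form data and set the content type of the request body
    using StringContent jsonContent = new(
        JsonSerializer.Serialize(_newFruit),
        Encoding.UTF8,
        "application/json");

    // Perform the POST request; the first parameter specifies the API endpoint
    using HttpResponseMessage response =
        await httpClient.PostAsync("/fruits", jsonContent);

    if (response.IsSuccessStatusCode)
    {
        // Return to the home page, where the GET operation reloads the list
        NavigationManager?.NavigateTo("/");
    }
    else
    {
        Console.WriteLine($"Failed to add fruit. Status code: {response.StatusCode}");
    }
}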
The Hypertext Transfer Protocol (or HTTP) is used to request resources from a web server. Many types of resources are available on the web, and HTTP defines a set of request methods for accessing these resources. In .NET Core, those requests are made through an instance of the HttpClient.
There are two options for implementing HttpClient in your app. The recommendation is to choose the implementation based on the client’s lifetime management needs:
Long-lived clients: create a static or singleton instance using the HttpClient class and set PooledConnectionLifetime.
Short-lived clients: use clients created by IHttpClientFactory.
Implement with the HttpClient class
The System.Net.Http.HttpClient class sends HTTP requests and receives HTTP responses from a resource identified by a URI. An HttpClient instance is a collection of settings applied to all requests executed by that instance, and each instance uses its own connection pool, which isolates its requests from others. Beginning with .NET Core 2.1, the SocketsHttpHandler class provides the implementation, making behavior consistent across all platforms.
HttpClient only resolves DNS entries when a connection is created. It doesn’t track time to live (TTL) durations specified by the DNS server. If DNS entries change regularly, the client is unaware of those updates. To solve this issue, you can limit the lifetime of the connection by setting the PooledConnectionLifetime property, so that the DNS lookup is repeated when the connection is replaced.
In the following example, HttpClient is configured to reuse connections for 15 minutes. After the TimeSpan specified by PooledConnectionLifetime elapses, the connection is closed and a new one is created.
C#

var handler = new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(15) // Recreate connections every 15 minutes
};

var sharedClient = new HttpClient(handler);
Implement with IHttpClientFactory
The IHttpClientFactory serves as a factory abstraction that can create HttpClient instances with custom configurations. IHttpClientFactory was introduced in .NET Core 2.1. Common HTTP-based .NET workloads can take advantage of middleware with ease.
When you call any of the AddHttpClient extension methods, you’re adding the IHttpClientFactory and related services to the IServiceCollection. The IHttpClientFactory type offers the following benefits:
Exposes the HttpClient class as a dependency injection-ready type.
Provides a central location for naming and configuring logical HttpClient instances.
Codifies the concept of outgoing middleware via delegating handlers in HttpClient.
Provides extension methods for Polly-based middleware to take advantage of delegating handlers in HttpClient.
Manages the caching and lifetime of underlying HttpClientHandler instances. Automatic management avoids common Domain Name System (DNS) problems that occur when manually managing HttpClient lifetimes.
Adds a configurable logging experience for all requests sent through clients created by the factory.
You should let HttpClientFactory and the framework manage the lifetimes and instantiation of HttpClient instances. The lifetime management helps avoid common issues such as DNS (Domain Name System) problems that can occur when manually managing HttpClient lifetimes.
There are several ways IHttpClientFactory can be used in an app: basic usage, named clients, typed clients, and generated clients.
When you register a service, you must choose a lifetime that matches how the service is used in the app. The lifetime affects how the service behaves when it’s injected into components. So far, you’ve registered services using the AddSingleton method. This method registers a service with a singleton lifetime. There are three built-in lifetimes for services in ASP.NET Core:
Singleton
Scoped
Transient
Singleton lifetime
Services registered with a singleton lifetime are created once when the app starts and are reused for the lifetime of the app. This lifetime is useful for services that are expensive to create or that don’t change often. For example, a service that reads configuration settings from a file can be registered as a singleton.
Use the AddSingleton method to add a singleton service to the service container.
Scoped lifetime
Services registered with a scoped lifetime are created once per configured scope, which ASP.NET Core sets up for each request. A scoped service in ASP.NET Core is typically created when a request is received and disposed of when the request is completed. This lifetime is useful for services that access request-specific data. For example, a service that fetches a customer’s data from a database can be registered as a scoped service.
Use the AddScoped method to add a scoped service to the service container.
Transient lifetime
Services registered with a transient lifetime are created each time they’re requested. This lifetime is useful for lightweight, stateless services. For example, a service that performs a specialized calculation can be registered as a transient service.
Use the AddTransient method to add a transient service to the service container.
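The three registrations differ only in the method used. As a brief sketch (the service types here are hypothetical examples matching the scenarios above):

C#

// Expensive to create, rarely changes: one instance for the app's lifetime
builder.Services.AddSingleton<ConfigurationReader>();

// Accesses request-specific data: one instance per request
builder.Services.AddScoped<CustomerDataService>();

// Lightweight and stateless: a new instance each time it's requested
builder.Services.AddTransient<PriceCalculator>();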
Services that depend on other services
A service can depend on other services, typically by having its dependencies injected through its constructor. When you register a service that depends on another service, you must take service lifetime into account. For example, a singleton service shouldn’t depend on a scoped service, because the scoped service is disposed of when the request is completed, but a singleton lives for the lifetime of the app. Fortunately, ASP.NET Core checks for this misconfiguration by default and reports a scope validation error at startup, so the issue can be quickly identified and addressed.
ASP.NET Core apps often need to access the same services across multiple components. For example, several components might need to access a service that fetches data from a database. ASP.NET Core uses a built-in dependency injection (DI) container to manage the services that an app uses.
Dependency injection and Inversion of Control (IoC)
The dependency injection pattern is a form of Inversion of Control (IoC). In the dependency injection pattern, a component receives its dependencies from external sources rather than creating them itself. This pattern decouples the code from the dependency, which makes code easier to test and maintain.
Consider the following Program.cs file:
C#

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.DependencyInjection;
using MyApp.Services;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSingleton<PersonService>();

var app = builder.Build();

app.MapGet("/", (PersonService personService) =>
{
    return $"Hello, {personService.GetPersonName()}!";
});

app.Run();
And the following PersonService.cs file:
C#

namespace MyApp.Services;

public class PersonService
{
    public string GetPersonName()
    {
        return "John Doe";
    }
}
To understand the code, start with the app.MapGet code. This code maps HTTP GET requests for the root URL (/) to a delegate that returns a greeting message. The delegate’s signature defines a PersonService parameter named personService. When the app runs and a client requests the root URL, the code inside the delegate depends on the PersonService service to get some text to include in the greeting message.
Where does the delegate get the PersonService service? It’s implicitly provided by the service container. The builder.Services.AddSingleton<PersonService>() line tells the service container to create a new instance of the PersonService class when the app starts, and to provide that instance to any component that needs it.
Any component that needs the PersonService service can declare a parameter of type PersonService in its delegate signature. The service container will automatically provide an instance of the PersonService class when the component is created. The delegate doesn’t create the PersonService instance itself, it just uses the instance that the service container provides.
Interfaces and dependency injection
To avoid dependencies on a specific service implementation, you can instead define an interface for the service and register the concrete implementation against that interface. Components then depend only on the interface, so the implementation can be swapped, for example with a test double, without changing the consuming code.
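As a sketch of that approach, the PersonService from earlier could sit behind a hypothetical IPersonService interface:

C#

namespace MyApp.Services;

// Consumers depend on this interface instead of the concrete class
public interface IPersonService
{
    string GetPersonName();
}

public class PersonService : IPersonService
{
    public string GetPersonName()
    {
        return "John Doe";
    }
}

In Program.cs, you would then register the implementation against the interface with builder.Services.AddSingleton<IPersonService, PersonService>() and declare the delegate parameter as IPersonService. Swapping in a different implementation later requires no changes to the consuming code.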
The Azure infrastructure used to run Azure Web Apps in Windows isn’t the same as for Linux apps, and log files aren’t stored in the same locations.
Windows app log files
For Windows apps, file system log files are stored in a virtual drive that is associated with your Web App. This drive is addressable as D:\Home, and includes a LogFiles folder; within this folder are one or more subfolders:
Application – Contains application-generated messages, if File System application logging is enabled.
DetailedErrors – Contains detailed Web server error logs, if Detailed error messages are enabled.
http – Contains IIS-level logs, if Web server logging is enabled.
W3SVC<number> – Contains details of all failed http requests, if Failed request tracing is enabled.
Where storage to a Blob container is enabled, logs are stored in year, month, date, and hour folders, for example:

2019
    01
        10
            08 - log entries for the period 08:00:00 to 08:59:59 on January 10th, 2019
            09 - log entries for the period 09:00:00 to 09:59:59 on January 10th, 2019
Within the hour folder, there are one or more CSV files containing messages saved within that 60-minute period.
Linux app log files
For Linux Web Apps, the Azure tools currently support fewer logging options than for Windows apps. Redirections to STDERR and STDOUT are managed through the underlying Docker container that runs the app, and these messages are stored in Docker log files. To see messages logged by underlying processes, such as Apache, you need to open an SSH connection to the Docker container.
Methods for retrieving log files
How you retrieve log files depends on the type of log file, and on your preferred environment. For file system logs, you can use the Azure CLI or the Kudu console. Kudu is the engine behind many features in Azure App Service related to source control based deployment.
Azure CLI
To download file system log files using the Azure CLI, run the following command, which copies the log files from the app’s file system into a .zip file in Cloud Shell storage.

Azure CLI

az webapp log download --log-file <filename>.zip --resource-group <resource group name> --name <app name>
To download the zipped log files to your local computer, use the file download and upload tool in the Cloud Shell toolbar. Once downloaded, the files are ready for opening in Microsoft Excel, or other apps.
Note
The Azure CLI download includes all app logs, except for failed request traces.
Kudu
There’s a Source Control Management (SCM) service site associated with every Azure Web App. This site runs the Kudu service, and other Site Extensions. It’s Kudu that manages deployment and troubleshooting for Azure Web Apps, including options for viewing and downloading log files. The specific functionality available in Kudu, and how you download logs, depends on the type of Web App. For Windows apps, you can browse to the log file location and then download the logs. For Linux apps, there might be a download link.
One way to access the Kudu console is to navigate to https://<app name>.scm.azurewebsites.net, and then sign in using deployment credentials.
You can also access Kudu from the Azure portal. On the App Service menu, under Development Tools, select Advanced Tools, and then on the Advanced Tools pane, select Go to open a new Kudu Services tab.
To download the log files from Windows apps:
Select Debug Console, and then select CMD.
In the file explorer section, select LogFiles, and for the Application folder, select Download. The logs are downloaded to your computer as Application.zip.
For Linux apps, select the download link on the Environment page.
Azure Storage browser
To access Windows logs saved to an Azure Blob Storage container, you can use the Azure portal. To view and download the contents of the log file container, select Storage accounts from the portal menu. Select your storage account and then select Storage browser. Open the type of storage container (for example, Blob containers), and select the name of the blob container that contains the log file. Inside the container, open the relevant year, month, date, and hour folder, then double-click a CSV file to download it to your computer.
If you have Microsoft Excel on your computer, the log file automatically opens as an Excel worksheet. Otherwise, you can open the file using a text editor, such as Notepad.
Live log streaming is an easy and efficient way to view live logs for troubleshooting purposes. Live log streaming provides a quick view of all the messages sent to the app logs in the file system, without having to go through the process of locating and opening the logs. To use live logging, you connect to the live log service from the command line, and you can then see text being written to the app’s logs in real time.
What logs can be streamed?
The log streaming service adds a redirect from the file system logs, so that you see the same information that is saved to the log files. So, if you enable verbose logging for ASP.NET Windows apps, for example, the live log stream shows all your logged messages.
Typical scenarios for using live logging
Live logging is a useful tool for initial debugging. Real time log messages give you immediate feedback for code or server issues. You can then make a change, redeploy your app, and instantly see the results.
The live log stream connects to a single app instance, so it’s not useful if you have a multi-instance app. Live logging is also of limited use as you scale up your apps. In these scenarios, it’s better to ensure that messages are saved to log files that can be opened and studied offline.
How to use live log streaming
You can enable live log streaming from the command line, in a Cloud Shell session directly from the Azure portal. There are two options: Azure CLI or curl commands.
Azure CLI
To open the log stream, run the following command.
Azure CLI
az webapp log tail --name <app name> --resource-group <resource group name>
To stop viewing live logs, press Ctrl+C.
Curl
To use Curl, you need FTPS credentials. There are two types of FTPS credentials:
Application scope. Azure automatically creates a username/password pair when you deploy a Web app, and each of your apps has its own separate set of credentials.
User scope. You can create your own credentials for use with any Web app. You can manage these credentials in the Azure portal, as long as you already have at least one Web app, or by using Azure CLI commands.
Azure portal UI
To view and copy these details from the Azure portal, in the App Service menu, under Deployment, select Deployment Center, and then select the FTPS credentials tab.
Reset user-level credentials
To create a new set of user-level credentials, run the following command in the Cloud Shell.
Azure CLI
az webapp deployment user set --user-name <name-of-user-to-create> --password <new-password>
Note
Usernames must be globally unique across all of Azure, not just within your own subscription or directory.
After you create a set of credentials, run the following command to open the log stream. You’re then prompted for the password.
Azure provides built-in diagnostics with app logging. App logs are the output of runtime trace statements in app code. For example, you might want to check some logic in your code by adding a trace to show when a particular function is being processed. Or, you might only want to see a logged message when a particular level of error occurs. App logging is intended primarily for preproduction apps and short-term troubleshooting, because excessive logging can carry a performance hit and quickly consume storage. For this reason, logging to the file system is automatically disabled after 12 hours.
App logging has scale limitations, primarily because files are being used to save the logged output. If you have multiple instances of an app, and the same storage is shared across all instances, messages from different instances might be interleaved, making troubleshooting difficult. If each instance has its own log file, then there are multiple logs, again making it difficult to troubleshoot instance-specific issues.
The types of logging available through Azure App Service depend on the code framework of the app, and on whether the app is running on a Windows or Linux app host.
ASP.NET
ASP.NET apps only run on Windows app services. To log information to the app diagnostics log, use the System.Diagnostics.Trace class. There are four trace levels you can use that correlate with the error, warning, information, and verbose logging levels shown in the Azure portal:

Trace.TraceError("Message");       // Writes an error message
Trace.TraceWarning("Message");     // Writes a warning message
Trace.TraceInformation("Message"); // Writes an information message
Trace.WriteLine("Message");        // Writes a verbose message
ASP.NET Core apps
ASP.NET Core apps can run on either Windows or Linux. To log information to Azure app logs, use the logger factory class, and then use one of six log levels:

logger.LogCritical("Message");    // Writes a critical message at log level 5
logger.LogError("Message");       // Writes an error message at log level 4
logger.LogWarning("Message");     // Writes a warning message at log level 3
logger.LogInformation("Message"); // Writes an information message at log level 2
logger.LogDebug("Message");       // Writes a debug message at log level 1
logger.LogTrace("Message");       // Writes a detailed trace message at log level 0
For ASP.NET Core apps on Windows, these messages relate to the filters in the Azure portal in this way:
Levels 4 and 5 are error messages.
Level 3 is a warning message.
Level 2 is an information message.
Levels 0 and 1 are verbose messages.
For ASP.NET Core apps on Linux, only error messages (levels 4 and 5) are logged.
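As a minimal sketch of obtaining and using a logger in an ASP.NET Core minimal API app (the category name FruitApp is illustrative):

C#

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Create a logger from the app's logger factory
var logger = app.Services.GetRequiredService<ILoggerFactory>()
    .CreateLogger("FruitApp");

app.MapGet("/", () =>
{
    // Level 2: appears under the information filter in the Windows portal
    logger.LogInformation("Handling a request to the root URL");
    return "Hello!";
});

app.Run();

In classes such as controllers, an ILogger<T> is more commonly injected through the constructor; the factory approach shown here works in Program.cs as well.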
Node.js apps
For script-based Web apps, such as Node.js apps on Windows or Linux, app logging is enabled using the console object’s methods:

console.error("Message"); // Writes a message to STDERR
console.log("Message");   // Writes a message to STDOUT
Both types of message are written to the Azure app service error level logs.
Logging differences between Windows and Linux hosts
To route messages to log files, Azure Web apps use the Internet Information Services (IIS) Web server. Because Windows-based Web apps are a well-established Azure service, and messaging for ASP.NET apps is tightly integrated with the underlying IIS service, Windows apps benefit from a rich logging infrastructure. For other apps, logging options are limited by the development platform, even when running on a Windows app service.
The Docker image used for the app’s container determines the logging functionality available to Linux-based scripted apps, such as Node.js apps. Basic logging, such as redirections to STDERR or STDOUT, uses the Docker logs. Richer logging functionality depends on the underlying image, and whether it’s running PHP, Perl, Ruby, and so on. Downloading Web application logs equivalent to those IIS provides for Windows apps might require connecting to your container using SSH.
The following table summarizes the logging support for common app environments and hosts.
Scaling out enables you to run more instances of a web app. The pricing tier determines resources available to each instance used by the App Service plan that hosts the web service. Each pricing tier specifies the computing power provided, together with the memory and maximum number of instances that can be created.
If you initially deploy a web app using a relatively cheap pricing tier, you might find the resources are sufficient to start with. But the resources might become too limited if demand for your web service grows, or if you add features that require more power. In this case, you can scale up to a more powerful pricing tier.
In the hotel reservation system, you notice a steady increase in the number of visitors, beyond the variations caused by special offers or events. Your company is adding more features to the web app that require more resources. You’re nearing the scale-out limits of your current App Service plan pricing tier, so you need to scale up to a tier that provides more instances and more powerful hardware.
In this unit, you learn how to scale up the web app to meet the increasing resource requirements.
App Service plan pricing tiers and hardware levels
The different pricing tiers available for App Service plans offer various levels of resources. The Basic, Standard, and Premium tiers are based on A-Series virtual machines that have different amounts of memory and IO capacity. The PremiumV2 and Isolated tiers are based on Dv2-Series virtual machines. Each of these tiers has three hardware levels, roughly corresponding to 1, 2, and 4 CPUs. For detailed information about the pricing tiers and hardware levels, see App Service pricing.
Scale up a web app
You scale an App Service plan up and down by changing the pricing tier and hardware level that it runs on. You can start with the Free tier and scale up as needed according to your requirements. This process is manual. You can also scale down again if you no longer need the resources associated with a particular tier.
Scaling up can cause an interruption in service to client apps running at the time. They might need to disconnect from the service and reconnect if the scale-up occurs during an active call to the web app. New connections might be rejected until scaling finishes. Also, scaling up can cause the outgoing IP addresses for the web app to change. If your web app depends on other services that have firewalls restricting incoming traffic, you need to reconfigure these services.
As with scale-out, you should monitor the performance of your system to ensure that scaling up or down has the desired effect. It’s also important to understand that scale up and scale out can work cooperatively together. If you scale out to the maximum number of instances available for your pricing tier, you must scale up before you can scale out further.
By manually scaling out and back in again, you can respond to expected increases and decreases in traffic. Scaling out has the extra benefit of increasing availability because of the increased number of instances of the web app. A failure of one instance doesn’t make the web app unavailable.
In the hotel reservation system, you can scale out before an anticipated seasonal influx. You can scale back in when the season is over and the number of booking requests is reduced.
In this unit, you learn how to manually scale out a web app and how to scale it back in.
App Service plans and scalability
A web app that runs in Azure typically uses Azure App Service to provide the hosting environment. App Service can arrange for multiple instances of the web app to run. It load balances incoming requests across these instances. Each instance runs on a virtual machine.
An App Service plan defines the resources available to each instance. The App Service plan specifies the operating system (Windows or Linux), the hardware (memory, CPU processing capacity, disk storage, and so on), and the availability of services like automatic backup and restore.
Azure provides a series of well-defined App Service plan tiers. This list summarizes each of these tiers, in increasing order of capacity and cost:
The Free tier provides 1 GB of disk space and support for up to 10 apps, but only a single shared instance and no SLA for availability. Each app has a compute quota of 60 minutes per day. The Free service plan is suitable for app development and testing rather than production deployments.
The Shared tier provides support for more apps (up to 100) also running on a single shared instance. Apps have a compute quota of 240 minutes per day. There’s no availability SLA.
The Basic tier supports an unlimited number of apps and provides more disk space. Apps can be scaled out to three dedicated instances. This tier provides an SLA of 99.95% availability. There are three levels in this tier that offer varying amounts of computing power, memory, and disk storage.
The Standard tier also supports an unlimited number of apps. This tier can scale to 10 dedicated instances and has an availability SLA of 99.95%. Like the Basic tier, this tier has three levels that offer an increasingly powerful set of computing, memory, and disk options.
The Premium tier gives you up to 20 dedicated instances, an availability SLA of 99.95%, and multiple levels of hardware.
The Isolated tier runs in a dedicated Azure virtual network, which gives you network and compute isolation. This tier can scale out to 100 instances and has an availability SLA of 99.95%.