In this article, we will learn how to implement Azure serverless with Blazor WebAssembly. To do that, we will create an app that lists some Frequently Asked Questions (FAQs) on Covid-19.
Here's what we will cover:
The FAQs will be displayed in a card layout with the help of Bootstrap.
The Covid-19 FAQ app is deployed on Azure. See it in action at https://covid19-faq.azurewebsites.net/
In traditional applications such as a 3-tier app, a client requests resources from the server, and the server processes the request and responds with the appropriate data.
However, there are some issues with this architecture. We need a server running continuously. Even if there are no requests, the server is available 24x7, ready to process requests. Maintaining server availability is cost-intensive.
Another problem is scaling. If the traffic is heavy, we need to scale out all the servers, which can be a cumbersome process.
An effective solution to this problem is serverless web architecture. The client makes a request to a file storage account instead of a server. The storage account returns the index.html page along with some code that needs to be rendered on the browser.
Since there is no server to render the page, we rely on the browser to render it. All the logic to draw or update elements runs in the browser. We do not have any server on the back end; we just have a storage account with static assets.
This is a cost-effective solution as we do not have any server, just some static files in a storage account. It is also very easy to scale out for heavy load websites.
Making the browser run all the logic to render the page seems exciting, but it has some limitations.
We do not want the browser to make database calls. We need some part of our code to run on the server side, such as connecting to a database.
This is where Azure Functions come in handy. In a serverless architecture, if we want some code to run on the server side, we use an Azure function.
Azure Functions is an event-driven serverless compute platform. You pay only when execution happens, and functions are easy to scale. Hence, we get both the scaling and the cost benefits with Azure Functions.
To learn more you can refer to the Azure function official docs.
An Azure serverless solution can add value to your product by minimizing the time and resources you spend on infrastructure-related requirements.
You can increase developer productivity, optimize resources and accelerate the time to market with the help of a fully managed, end-to-end Azure serverless solution.
To learn more, see the Azure serverless official doc.
Blazor is a .NET web framework for creating client-side applications using C#/Razor and HTML.
Blazor runs in the browser with the help of WebAssembly. It can simplify the process of creating a single page application (SPA). It also provides a full-stack web development experience using .NET.
Using .NET for developing client-side applications has multiple advantages:
Blazor supports a wide array of features to make web development easier for us. Some of the prominent features of Blazor are:
To learn more about Blazor, please refer to the official Blazor docs.
To get started with the application, we need to fulfill these prerequisites:
While installing VS 2019, please make sure you select the “Azure development” and “ASP.NET and web development” workloads.

You can get the source code from GitHub here.
Log in to the Azure portal, search for “Azure Cosmos DB” in the search bar, and click on the result. On the next screen, click on the Add button.
This will open a “Create Azure Cosmos DB Account” page. You need to fill in the required information to create your database. Refer to the image shown below:
You can fill in the details like this:
Keep the other fields at their default values and click on the “Review + Create” button. On the next screen, review all your configurations and click on the “Create” button. After a few minutes, the Azure Cosmos DB account will be created. Click on “Go to resource” to navigate to the Azure Cosmos DB account page.
On the Azure Cosmos DB account page, click on “Data Explorer” on the left navigation, and then select “New Container”. Refer to the image shown below:
An “Add Container” pane will open. You need to fill in the details to create a new container for your Azure Cosmos DB. Refer to the image shown below:
You can fill in the details as indicated below.
Click on the “OK” button to create the database.
In the Data Explorer, expand the FAQDB database then expand the FAQContainer. Select Items, and then click on New Item on the top. An editor will open on the right side of the page. Refer to the image shown below:
Put the following JSON data in the editor and click on the Save button at the top.
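For example, an item might look like this (the id, question, and answer values shown here are illustrative; the field names are assumed to match the FAQ model used later in this article):

```json
{
    "id": "1",
    "question": "What is Covid-19?",
    "answer": "Covid-19 is an infectious disease caused by the SARS-CoV-2 coronavirus."
}
```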
We have added a question-and-answer pair along with a unique id.
Follow the process described above to insert five more sets of data.
Click on “Keys” on the left navigation and navigate to the “Read-write Keys” tab. The value under PRIMARY CONNECTION STRING is our required connection string. Refer to the image shown below:
Make a note of the PRIMARY CONNECTION STRING value. We will use it later in this article, when we access the Azure Cosmos DB from an Azure function.
Open Visual Studio 2019, click on “Create a new project”. Search “Functions” in the search box. Select the Azure Functions template and click on Next. Refer to the image shown below:
In “Configure your new project” window, enter a Project name as FAQFunctionApp. Click on the Create button. Refer to the image below:
A new “Create a new Azure Function Application settings” window will open. Select “Azure Functions v3 (.NET Core)” from the dropdown at the top. Select the function template as “HTTP trigger”. Set the authorization level to “Anonymous” from the drop-down on the right. Click on the Create button to create the function project and HTTP trigger function.
Refer to the image shown below:
To enable the Azure function App to bind to the Azure Cosmos DB, we need to install the Microsoft.Azure.WebJobs.Extensions.CosmosDB package. Navigate to Tools >> NuGet Package Manager >> Package Manager Console and run the following command:
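The command to run in the Package Manager Console is:

```powershell
Install-Package Microsoft.Azure.WebJobs.Extensions.CosmosDB
```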
Refer to the image shown below.
You can learn more about this package at the NuGet gallery.
The Azure function project contains a default file called Function1.cs. You can safely delete this file as we won’t be using this for our project.
Right-click on the FAQFunctionApp project and select Add >> New Folder. Name the folder as Models. Again, right-click on the Models folder and select Add >> Class to add a new class file. Put the name of your class as FAQ.cs and click Add.
Open FAQ.cs and put the following code inside it.
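A minimal sketch of the class, assuming the JSON documents use lowercase id, question, and answer fields:

```csharp
using Newtonsoft.Json;

namespace FAQFunctionApp.Models
{
    public class FAQ
    {
        // The JsonProperty attributes map the C# properties to the
        // lowercase field names used in the Cosmos DB JSON documents.
        [JsonProperty("id")]
        public string Id { get; set; }

        [JsonProperty("question")]
        public string Question { get; set; }

        [JsonProperty("answer")]
        public string Answer { get; set; }
    }
}
```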
The class has the same structure as the JSON data we have inserted in the Cosmos DB.
Right-click on the FAQFunctionApp project and select Add >> Class. Name your class as CovidFAQ.cs. Put the following code inside it.
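A sketch of the function, reconstructed from the description that follows; the database name (FAQDB), container name (FAQContainer), connection string setting (DBConnectionString), and route (covidFAQ) are taken from elsewhere in this article:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using FAQFunctionApp.Models;

namespace FAQFunctionApp
{
    public static class CovidFAQ
    {
        // Exposed at /api/covidFAQ; the CosmosDB input binding reads
        // the documents from the FAQContainer container of the FAQDB
        // database using the DBConnectionString app setting.
        [FunctionName("covidFAQ")]
        public static IActionResult Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
            [CosmosDB(
                databaseName: "FAQDB",
                collectionName: "FAQContainer",
                ConnectionStringSetting = "DBConnectionString")] IEnumerable<FAQ> questionSet)
        {
            // Return the documents retrieved by the binding as JSON.
            return new OkObjectResult(questionSet);
        }
    }
}
```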
We have created a class CovidFAQ and added an Azure function to it. The attribute FunctionName is used to specify the name of the function. We have used the HttpTrigger attribute which allows the function to be triggered via an HTTP call. The attribute CosmosDB is used to connect to the Azure Cosmos DB. We have defined three parameters for this attribute as described below:
We have decorated the parameter questionSet, which is of type IEnumerable<FAQ> with the CosmosDB attribute. When the app is executed, the parameter questionSet will be populated with the data from Cosmos DB. The function will return the data using a new instance of OkObjectResult.
Remember the Azure Cosmos DB connection string you noted earlier? Now we will configure it for our app. Open the local.settings.json file and add your connection string as shown below:
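The local.settings.json file would then look something like this (replace the placeholder with your own PRIMARY CONNECTION STRING value):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "DBConnectionString": "<your-primary-connection-string>"
  }
}
```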
The local.settings.json file will not be published to Azure when we publish the Azure Function app. Therefore, we need to configure the connection string separately while publishing the app to Azure. We will see this in action later in this article.
Press F5 to execute the function. Copy the URL of your function from the Azure Functions runtime output. Refer to the image shown below:
Open the browser and paste the URL in the browser’s address bar. You can see the output as shown below:
Here you can see the data we have inserted into our Azure Cosmos DB.
We have successfully created the Function app, but it is still running in the localhost. Let’s publish the app to make it available globally.
Right-click on the FAQFunctionApp project and select Publish. Select the Publish target as Azure.
Select the specific target as “Azure Function App (windows)”.
In the next window, click on the “Create a new Azure Function…” button. A new Function App window will open. Refer to the image as shown below:
You can fill in the details as indicated below:
Click on the “Create” button to create the Function App and return to the previous window. Make sure the option “Run from package file” is checked. Click on the Finish button.
Now you are at the Publish page. Click on the “Manage Azure App Service Settings” button.
You will see an “Application Settings” window as shown below:
At this point, we will configure the Remote value for the “DBConnectionString” key. This value is used when the app is deployed on Azure. Since the key for the Local and Remote environments is the same in our case, click on the “Insert value from Local” button to copy the value from the Local field to the Remote field. Click on the OK button.
You are navigated back to the Publish page. We are done with all the configurations. Click on the Publish button to publish your Azure function app. After the app is published, get the site URL value, append /api/covidFAQ to it and open it in the browser. You can see the output as shown below.
This is the same dataset that we got while running the app locally. This proves that our serverless Azure function is deployed and able to access the Azure Cosmos DB successfully.
We will use the Function app in a Blazor UI project. To allow the Blazor app to access the Azure Function, we need to enable CORS for the Azure app service.
Open the Azure portal. Navigate to “All resources”. Here, you can see the App service which we created while publishing the app in the previous section. Click on the resource to navigate to the resource page. Click on CORS on the left navigation. A CORS details pane will open.
Now we have two options here:
We will use the second option for our app. Remove all the previously listed URLs and enter a single “*” wildcard entry. Click on the Save button at the top. Refer to the image shown below:
Open Visual Studio 2019, click on “Create a new project”. Select “Blazor App” and click on the “Next” button. Refer to the image shown below:
On the “Configure your new project” window, put the project name as FAQUIApp and click on the “Create” button as shown in the image below:
On the “Create a new Blazor app” window, select the “Blazor WebAssembly App” template. Click on the Create button to create the project. Refer to the image shown below:
To create a new Razor component, right-click on the Pages folder and select Add >> Razor Component. An “Add New Item” dialog box will open. Put the name of your component as CovidFAQ.razor and click on the Add button. Refer to the image shown below:
Open CovidFAQ.razor and put the following code into it.
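A sketch of the component, following the description below; the banner image path, the route, the icon, and the function app URL are placeholders you would replace with your own values:

```razor
@page "/covidfaq"
@inject HttpClient Http
@using System.Net.Http.Json

<img src="Images/banner.png" alt="Covid-19 FAQ" class="img-fluid" />

@if (questionList == null)
{
    <p><em>Loading...</em></p>
}
else
{
    @foreach (var faq in questionList)
    {
        <div class="card mb-3">
            <div class="card-header font-weight-bold">@faq.Question</div>
            <div class="card-body">@faq.Answer</div>
        </div>
    }
}

@code {
    // Mirrors the FAQ class defined in the Azure function app.
    class FAQ
    {
        public string Id { get; set; }
        public string Question { get; set; }
        public string Answer { get; set; }
    }

    FAQ[] questionList;

    protected override async Task OnInitializedAsync()
    {
        // Replace with the URL of your own deployed function app.
        questionList = await Http.GetFromJsonAsync<FAQ[]>(
            "https://<your-function-app>.azurewebsites.net/api/covidFAQ");
    }
}
```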
In the @code section, we have created a class called FAQ. The structure of this class is the same as that of the FAQ class we created earlier in the Azure function app. Inside the OnInitializedAsync method, we are hitting the API endpoint of our function app. The data returned from the API will be stored in a variable called questionList which is an array of type FAQ.
In the HTML section of the page, we have set a banner image at the top of the page. The image is available in the /wwwroot/Images folder. We will use a foreach loop to iterate over the questionList array and create a bootstrap card to display the question and answer.
The last step is to add the link of our CovidFAQ component in the navigation menu. Open /Shared/NavMenu.razor file and add the following code into it.
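The new entry can be added alongside the existing NavLink items, for example (the icon class and route here are illustrative):

```razor
<li class="nav-item px-3">
    <NavLink class="nav-link" href="covidfaq">
        <span class="oi oi-list-rich" aria-hidden="true"></span> Covid FAQ
    </NavLink>
</li>
```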
Remove the navigation links for Counter and Fetch-data components as they are not required for this application.
Press F5 to launch the app. Click on the Covid FAQ button on the nav menu on the left. You can see all the questions and answers in a beautiful card layout as shown below:
You can also check the live app at https://covid19-faq.azurewebsites.net/
In this article, we learned about serverless and its advantage over the traditional 3-tier web architecture. We also learned how the Azure function fits in serverless web architecture.
To demonstrate the practical implementation of these concepts, we created a Covid-19 FAQ app using Blazor WebAssembly and Azure serverless. The questions and answers are displayed in a card layout using Bootstrap.
We used Azure Cosmos DB as the primary database to store the questions and answers for our FAQ app. An Azure function is used to fetch data from the Cosmos DB. We deployed the function app on Azure to make it available globally via an API endpoint.
If you like the article, please share it with your friends. You can also connect with me on Twitter and LinkedIn.
Originally published at https://ankitsharmablogs.com/
To create an Azure file share, you need to answer three questions about how you will use it:
What are the performance requirements for your Azure file share?
Azure Files offers standard file shares, which are hosted on hard disk-based (HDD-based) hardware, and premium file shares, which are hosted on solid-state disk-based (SSD-based) hardware.
What size file share do you need?
Standard file shares can span up to 100 TiB; however, this feature is not enabled by default. If you need a file share that is larger than 5 TiB, you will need to enable the large file share feature for your storage account. Premium file shares can span up to 100 TiB without any special setting; however, premium file shares are provisioned rather than pay-as-you-go like standard file shares. This means that provisioning a file share much larger than what you need will increase the total cost of storage.
What are your redundancy requirements for your Azure file share?
Standard file shares offer locally redundant (LRS), zone-redundant (ZRS), geo-redundant (GRS), or geo-zone-redundant (GZRS) storage; however, the large file share feature is only supported on locally redundant and zone-redundant file shares. Premium file shares do not support any form of geo-redundancy.
Premium file shares are available with local redundancy in most regions that offer storage accounts and with zone redundancy in a smaller subset of regions. To find out if premium file shares are currently available in your region, see the products available by region page for Azure. For information about regions that support ZRS, see Azure Storage redundancy.
For more information on these three choices, see Planning for an Azure Files deployment.
Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares.
Azure supports multiple types of storage accounts for different storage scenarios customers may have, but there are two main types of storage accounts for Azure Files. Which storage account type you need to create depends on whether you want to create a standard file share or a premium file share:
General purpose version 2 (GPv2) storage accounts: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables.
FileStorage storage accounts: FileStorage storage accounts allow you to deploy Azure file shares on premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage account.
To create a storage account via the Azure portal, select + Create a resource from the dashboard. In the resulting Azure Marketplace search window, search for storage account and select the resulting search result. This will lead to an overview page for storage accounts; select Create to proceed with the storage account creation wizard.
The first section to complete to create a storage account is labeled Basics. This contains all of the required fields to create a storage account. To create a GPv2 storage account, ensure the Performance radio button is set to Standard and the Account kind drop-down list is set to StorageV2 (general purpose v2).
To create a FileStorage storage account, ensure the Performance radio button is set to Premium and the Account kind drop-down list is set to FileStorage.
The other basics fields are independent from the choice of storage account:
The networking section allows you to configure networking options. These settings are optional for the creation of the storage account and can be configured later if desired. For more information on these options, see Azure Files networking considerations.
The advanced section contains several important settings for Azure file shares:
The other settings that are available in the advanced tab (blob soft-delete, hierarchical namespace for Azure Data Lake storage gen 2, and NFSv3 for blob storage) do not apply to Azure Files.
Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. These are optional and can be applied after storage account creation.
The final step to create the storage account is to select the Create button on the Review + create tab. This button won't be available until all of the required fields for a storage account are filled in.
To create a storage account using PowerShell, we will use the New-AzStorageAccount cmdlet. This cmdlet has many options; only the required options are shown. To learn more about advanced options, see the New-AzStorageAccount cmdlet documentation.
To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish, however note that the storage account name must be globally unique.
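For example (the values here are placeholders; the random suffix is one way to keep the storage account name globally unique):

```powershell
$resourceGroupName = "myResourceGroup"
$storageAccountName = "mystorageacct$(Get-Random)"
$region = "westus2"
```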
To create a storage account capable of storing standard Azure file shares, we will use the following command. The -SkuName parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, you must also remove the -EnableLargeFileShare parameter.
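A sketch of the command, assuming the variables defined above and locally redundant storage:

```powershell
New-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -SkuName Standard_LRS `
    -Location $region `
    -Kind StorageV2 `
    -EnableLargeFileShare
```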
To create a storage account capable of storing premium Azure file shares, we will use the following command. Note that the -SkuName parameter has changed to include both Premium and the desired redundancy level of locally redundant (LRS). The -Kind parameter is FileStorage instead of StorageV2 because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account.
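A sketch of the premium variant, again assuming the variables defined above:

```powershell
New-AzStorageAccount `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -SkuName Premium_LRS `
    -Location $region `
    -Kind FileStorage
```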
To create a storage account using the Azure CLI, we will use the az storage account create command. This command has many options; only the required options are shown. To learn more about the advanced options, see the az storage account create command documentation.
To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish, however note that the storage account name must be globally unique.
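For example (the values here are placeholders; the $RANDOM suffix is one way to keep the storage account name globally unique):

```bash
resourceGroupName="myResourceGroup"
storageAccountName="mystorageacct$RANDOM"
region="westus2"
```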
To create a storage account capable of storing standard Azure file shares, we will use the following command. The --sku parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, you must also remove the --enable-large-file-share parameter.
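A sketch of the command, assuming the variables defined above and locally redundant storage:

```bash
az storage account create \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --location $region \
    --kind StorageV2 \
    --sku Standard_LRS \
    --enable-large-file-share
```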
To create a storage account capable of storing premium Azure file shares, we will use the following command. Note that the --sku parameter has changed to include both Premium and the desired redundancy level of locally redundant (LRS). The --kind parameter is FileStorage instead of StorageV2 because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account.
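A sketch of the premium variant, again assuming the variables defined above:

```bash
az storage account create \
    --resource-group $resourceGroupName \
    --name $storageAccountName \
    --location $region \
    --kind FileStorage \
    --sku Premium_LRS
```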
Once you've created your storage account, all that is left is to create your file share. This process is mostly the same regardless of whether you're using a premium file share or a standard file share. The primary difference is the quota and what it represents.
For standard file shares, the quota is an upper boundary of the Azure file share, beyond which end users cannot go. The primary purpose of a quota for a standard file share is budgetary: “I don't want this file share to grow beyond this point.” If a quota is not specified, a standard file share can span up to 100 TiB (or 5 TiB if the large file shares property is not set for the storage account).
For premium file shares, quota is overloaded to mean provisioned size. The provisioned size is the amount that you will be billed for, regardless of actual usage. When you provision a premium file share, you want to consider two factors: 1) the future growth of the share from a space utilization perspective and 2) the IOPS required for your workload. Every provisioned GiB entitles you to additional reserved and burst IOPS. For more information on how to plan for a premium file share, see provisioning premium file shares.
If you just created your storage account, you can navigate to it from the deployment screen by selecting Go to resource. If you have previously created the storage account, you can navigate to it via the resource group containing it. Once in the storage account, select the tile labeled File shares (you can also navigate to File shares via the table of contents for the storage account).
In the file share listing, you should see any file shares you have previously created in this storage account; an empty table if no file shares have been created yet. Select + File share to create a new file share.
The new file share blade should appear on the screen. Complete the fields in the new file share blade to create a file share:
Select Create to finish creating the new share. Note that if your storage account is in a virtual network, you will not be able to successfully create an Azure file share unless your client is also in the virtual network. You can also work around this point-in-time limitation by using the Azure PowerShell New-AzRmStorageShare cmdlet.
You can create the Azure file share with the New-AzRmStorageShare cmdlet. The following PowerShell commands assume you have set the variables $resourceGroupName and $storageAccountName as defined above in the creating a storage account with Azure PowerShell section.
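For example, to create a share named myshare with a 1024 GiB quota (the share name and quota here are illustrative):

```powershell
New-AzRmStorageShare `
    -ResourceGroupName $resourceGroupName `
    -StorageAccountName $storageAccountName `
    -Name "myshare" `
    -QuotaGiB 1024
```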
Important
For premium file shares, the -QuotaGiB parameter refers to the provisioned size of the file share. The provisioned size of the file share is the amount you will be billed for, regardless of usage. Standard file shares are billed based on usage rather than provisioned size.
Note
The name of your file share must be all lowercase. For complete details about naming file shares and files, see Naming and referencing shares, directories, files, and metadata.
Before you can create an Azure file share with the Azure CLI, you must get a storage account key to authorize the file share create operation. This can be done with the az storage account keys list command:
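A sketch of retrieving the first key into a shell variable, assuming the variables defined in the CLI storage account section above:

```bash
storageAccountKey=$(az storage account keys list \
    --resource-group $resourceGroupName \
    --account-name $storageAccountName \
    --query "[0].value" --output tsv)
```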
Once you have the storage account key, you can create the Azure file share with the az storage share create command.
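For example, assuming the key has been captured in a $storageAccountKey variable (the share name and quota here are illustrative):

```bash
az storage share create \
    --account-name $storageAccountName \
    --account-key $storageAccountKey \
    --name "myshare" \
    --quota 1024
```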
Important
For premium file shares, the --quota parameter refers to the provisioned size of the file share. The provisioned size of the file share is the amount you will be billed for, regardless of usage. Standard file shares are billed based on usage rather than provisioned size.
This command will fail if the storage account is contained within a virtual network and the computer you're invoking this command from is not part of the virtual network. You can work around this point-in-time limitation by using the Azure PowerShell New-AzRmStorageShare cmdlet as described above, or by executing the Azure CLI from a computer that is a part of the virtual network, including via a VPN connection.
Note
The name of your file share must be all lowercase. For complete details about naming file shares and files, see Naming and referencing shares, directories, files, and metadata.