dotnet watch is a tool that re-runs a dotnet command whenever a file changes. The most common uses are automatically re-running your application (using dotnet watch run) or automatically re-running your tests (using dotnet watch test) after a file change. This speeds up your workflow so you don't have to restart your server or your tests manually.
A new feature of dotnet watch run in .NET 5+ is that it will automatically launch a browser and auto-refresh the browser after it detects a change and finishes compiling (if your application has a UI).
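For reference, both are plain CLI invocations. Assuming the .NET SDK is installed, you'd run them from the directory containing your project:

```shell
# Re-run the app whenever a file changes
dotnet watch run

# Re-run the test suite whenever a file changes
dotnet watch test
```

Either command keeps running in the foreground and triggers a rebuild on every save.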
What is JetBrains Rider?
JetBrains Rider is a cross-platform .NET IDE from the people at JetBrains (who make many developer productivity tools such as Resharper for Visual Studio, TeamCity, IntelliJ, and more). It is my go-to IDE now, due to all of its productivity enhancements over the base install of Visual Studio. I’ve been a user of Resharper for years, but Resharper and Visual Studio never seemed to play very nice together and ended up slowing down Visual Studio significantly. I put up with it due to all the extra functionality Resharper provided.
With Rider, I get all the benefits of Resharper and it’s fast. I can use Rider on Windows or macOS (which I bounce between for personal and professional work), and a lot of features are included for $150 that I would have to spend thousands to get in Visual Studio Enterprise (such as Continuous Testing).
How do I integrate dotnet watch and Rider?
Alright, now for the part you came here for. Obviously, you could run dotnet watch run directly using the terminal, but it’d be nice to have this as a launch configuration option right in Rider that is only a CTRL + F5 away. Here’s how to do that:
Open your solution in Rider
Select your Configuration and hit Edit Configurations
Click the Plus in the top left to Add New Configuration
Choose Native Executable (Note: you CAN search)
Give it a Name, I called mine dotnet watch
For “Exe path” choose C:\Program Files\dotnet\dotnet.exe if you’re on Windows or /usr/local/share/dotnet/dotnet if you’re on macOS
For “Program arguments” type watch run
For “Working directory” choose the directory that your application’s csproj resides in.
For “Environment variables” you could add ASPNETCORE_ENVIRONMENT=Development if it’s an ASP.NET Core app, but the environment variables defined in your launchSettings.json will take precedence (under the profile whose "commandName" is "Project").
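For reference, a launchSettings.json profile with such a variable looks something like this (the profile name here is a placeholder):

```json
{
  "profiles": {
    "MyApp": {
      "commandName": "Project",
      "launchBrowser": true,
      "applicationUrl": "https://localhost:5001;http://localhost:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
```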
The final output should look like this:
Now start the app with your new configuration selected
That’s it! You’ll notice that the Run tab in Rider now shows your watch command running
I hope this helps someone else. An example of this in action is below:
Note: auto-attaching the debugger does not work with this option in Rider. The issue to track that is here if you want to give that a thumbs up to vote for JetBrains to work on that feature in an upcoming release.
If you have a scenario where multiple file types (.pdf, .docx, etc.) are stored somewhere (in a database, file system, etc.) and need to be downloaded, you can automatically figure out the Content Type by newing up a FileExtensionContentTypeProvider and calling TryGetContentType to get the Content Type, then passing that to the File result helper. See lines 8-16 below
A Content Type is how the server tells the browser what type of file the resource being served is. That way the browser knows how to render it, whether it’s HTML, CSS, JSON, PDF, etc. The server does this by passing a Content-Type HTTP header. The value of this header looks like “text/html” for HTML, “text/css” for CSS, “application/json” for JSON, “application/pdf” for PDFs, and so on. A complete list can be found in the official IANA docs.
Note: A Content Type can also be called a MIME type, but because the header is called Content-Type, and ASP.NET Core calls it the Content Type in the code I’m going to be showing, I’m going to call it Content Type for consistency throughout this post.
How do I set the Content Type in ASP.NET Core?
The good news is, for the vast majority of the static files you’re going to serve, the Static Files Middleware will set the Content Type for you. For scenarios where you need to set the Content Type yourself, you can use the FileContentResult in your Controllers or PageModels via the File helper method used on line 11 below.
public class FileController : Controller
// Grab a test.pdf back one directory. Look ma – it even runs on Linux with Path.DirectorySeparatorChar!
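Reconstructed as a minimal controller (the action name and file path here are placeholders), that File helper usage looks like this:

```csharp
using System.IO;
using Microsoft.AspNetCore.Mvc;

public class FileController : Controller
{
    public IActionResult GetPdf()
    {
        // Grab a test.pdf back one directory.
        // Path.DirectorySeparatorChar keeps the path cross-platform, so it even runs on Linux.
        var path = $"..{Path.DirectorySeparatorChar}test.pdf";
        var bytes = System.IO.File.ReadAllBytes(path);

        // The File helper sets the Content-Type response header for us
        return File(bytes, "application/pdf");
    }
}
```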
If you set the wrong Content Type, then you may cause issues for your application. For example, the PDF rendered in the code above will render a PDF in the browser like this:
But what happens if I replace that “application/pdf” string with “application/json” to try to tell the browser the PDF is really JSON? Well… let’s find out:
Well that’s not good. So, setting the correct Content Type is pretty important. (Also, yes I know I need to update Chrome…. don’t judge me. I have a bunch of tabs open in another window that I’m totally going to look at some day, ok?)
Let’s say you have a scenario where admin users upload files that customer users can then download. Those admin users can upload all sorts of file extensions, such as .pdf, .pptx, .docx, .xlsx, etc. This means you can’t know up front what the Content Type should be, so you need to inspect the file extension to figure it out. No big deal; there are a few ways to solve this, but the simplest is to just write a trusty ol’ switch statement like lines 11-25 below to handle every file type we allow.
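A sketch of that hand-rolled approach (the extensions here are just a few common ones, not an exhaustive list):

```csharp
// Naive extension-to-Content-Type mapping, maintained by hand
private static string GetContentType(string extension)
{
    switch (extension.ToLowerInvariant())
    {
        case ".pdf":
            return "application/pdf";
        case ".docx":
            return "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
        case ".xlsx":
            return "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
        case ".pptx":
            return "application/vnd.openxmlformats-officedocument.presentationml.presentation";
        default:
            return "application/octet-stream"; // fallback for anything we don't recognize
    }
}
```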
The problem with that is that maintaining a list of all those mappings yourself is annoying, and it likely leads to adding “just one more” whenever your users want to support another file type you didn’t previously have. It’s also prone to typos, because some of these content types are ridiculously convoluted.
Luckily, ASP.NET Core already maintains this list for us via the FileExtensionContentTypeProvider. All you have to do is new it up and call TryGetContentType, which follows the Try-pattern you see sprinkled throughout .NET: it returns a bool and an out variable with the content type. Usage looks like lines 8-16 below:
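Sketched out with my own names (the controller, route, and storage location are placeholders; the provider usage is the point):

```csharp
using System.IO;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.StaticFiles;

public class DownloadController : Controller
{
    public IActionResult Download(string fileName)
    {
        var filePath = Path.Combine("Files", fileName); // assumed storage location

        // TryGetContentType maps the file extension to a Content Type for us
        var provider = new FileExtensionContentTypeProvider();
        if (!provider.TryGetContentType(fileName, out var contentType))
        {
            contentType = "application/octet-stream"; // sensible fallback for unknown extensions
        }

        var bytes = System.IO.File.ReadAllBytes(filePath);
        return File(bytes, contentType, fileName);
    }
}
```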
Updated 2021-02-14: Updated to .NET 5.0 and latest Mailkit
Updated 2020-06-20: Update for new Razor Class Library options for “support pages and views” option that is required for this to work.
Updated 2020-04-18: As of .NET Core 3, ASP.NET Core targets .NET Core directly, instead of .NET Standard. This means you cannot use a .NET Standard Class Library and reference ASP.NET Core. Please see this repo for an example of getting Razor Emails working with .NET Core 3.1. This post has been updated to just reference a .NET Core Library instead of .NET Standard Library.
Usually I don’t blog walkthroughs and instead prefer to go a little deeper on a small topic, but I thought it would be useful to blog our approach on generating HTML emails using Razor for an ASP.NET Core insurance application at work.
HTML emails are the absolute worst. Inlined styles. Nested tables. Different apps and platforms render the markup differently. It’s an absolute train wreck. However, sending plain text emails isn’t a great way to impress your users and often makes emails look fake or like spam. So fine, HTML emails. Let’s do it.
What I would like to do is create an HTML Email Template library, and let developers focus on the content of their email, and not worry about the extra HTML and CSS voodoo that is HTML Emails.
Also, I want to be able to generate these HTML Emails from within a .NET Class Library, so that the sending of the emails happens right next to all my other business logic. That way I can re-purpose this logic into an ASP.NET Core app or a .NET Console app (such as a Worker Service).
So at a high level, the requirements are:
Create a base Email Layout to enforce a consistent layout (Header, Footer, base styles, etc.) and to hide the complexity of HTML Email Layouts.
Create re-usable HTML Email components (such as buttons) to enforce consistent styling and hide the complexity of the implementation of the HTML Email components.
Be able to call it from a .NET Core Class Library.
Razor checks the box for #1, because it already has the concept of a Layout view and a child view. It’s also a good fit for #2, because it lets you re-use UI components via Partials (among other methods). In fact, you can achieve #1 and #2 in regular ASP.NET 4.x fairly easily. However, I was never able to achieve #3 in regular ASP.NET or pre-2.1 ASP.NET Core. That is, using Razor in a non-ASP.NET/ASP.NET Core setting such as Class Libraries.
It’s super common for applications to put their business logic in a Class Library to remove any dependency on the UI project and to allow that logic to be re-used across other applications. However, when I tried to send HTML Emails from a Class Library in ASP.NET 4.x and ASP.NET Core pre-2.1, I couldn’t figure out how to get the Class Library to find the Views I was creating and ultimately I gave up.
Enter Razor Class Libraries
I won’t go into much detail about Razor Class Libraries, since the documentation already does a fantastic job, but the basic idea behind them is that you can share UI between multiple ASP.NET Core applications. This UI can be Views, Controllers, Razor Pages, etc.
The simplest way to think about Razor Class Libraries is if you add a View in your Razor Class Library, it essentially gets copied down into that same relative path into your main application. So if you add an Index.cshtml file to the /Views/Home path of your Razor UI Class Library, then that file will be available at /Views/Home/Index.cshtml of your ASP.NET Core Application. That’s pretty sweet and useful. But the question is would it find those files in a normal .NET Core Class Library? The answer is – yes.
One potential gotcha: make sure that your Views have unique names/paths. If you make an HTML Email view that matches the path of an actual MVC View or Razor Page, they will conflict. Therefore, I try to make my folder and/or view names clearly unique, like “ConfirmAccountEmail.cshtml”, which is unlikely to have a matching route in my ASP.NET Core application.
After you’ve created your Razor Class Library, delete out the Areas folder, because we won’t need it.
Create the RazorViewToStringRenderer class
Tucked away in the aspnet/Entropy GitHub repo is the RazorViewToStringRenderer which shows you how to take a Razor View and render it to an HTML string. This will be perfect for taking our Razor View, converting it to a string of HTML, and being able to stick the HTML into the body of an email.
Add this class to your Razor Class Library you just created. I tucked mine under a folder called Services and then created an Interface for it called IRazorViewToStringRenderer:
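The interface itself is tiny; it boils down to something like this (the method name follows the Entropy sample's convention, as best I recall, so treat the exact signature as an assumption):

```csharp
using System.Threading.Tasks;

public interface IRazorViewToStringRenderer
{
    // viewName is the app-relative path to the view,
    // e.g. "/Views/Emails/ConfirmAccount/ConfirmAccountEmail.cshtml"
    Task<string> RenderViewToStringAsync<TModel>(string viewName, TModel model);
}
```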
Next, we are going to leverage Razor’s Layout system to create a base HTML Email Layout. There are many HTML Email Layout templates out there so you don’t have to write one yourself (and trust me… if you’ve never seen HTML Email code before, you’re not going to want to write this yourself).
I’m picking this one from Litmus, which is the leading vendor (as far as I know) for testing your HTML Emails on multiple different platforms, devices, and applications. No they didn’t pay me to say that (although if someone from Litmus is reading this, it’d be cool if you did).
The layout looks like this:
However, all I really care about for the layout is everything outside of the white box. What’s inside the white box will change based on whatever I’m sending (Email Registration Confirmations, Forgot Password requests, Password Updated Alerts, etc.).
In your Razor UI Class Library, create a /Views/Shared folder
When you finish throwing up in your mouth, let’s look at a couple important bits. Line 165 has the RenderBody() call which is where the white box is and where our child view will be placed dynamically based on the email I’m sending.
Another interesting spot is line 142 where I’m dynamically pulling an EmailTitle property from ViewData. ViewData allows us to pass messages from a child view up to the parent view. In the email screenshot above, this is the “Welcome” hero text.
In a real application, I would have pulled that Welcome text down to the child view as well, but I left it as a demonstration of the child view’s ability to dynamically change the parent EmailLayout. Some more examples of what you could do with ViewData: the child view could dictate which logo to use, or what color the background is, or literally anything you want. This is simply leveraging a feature that’s been part of MVC for a long time.
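As a sketch (the view and folder names here are mine), the child view sets the value and the layout reads it:

```cshtml
@* ConfirmAccountEmail.cshtml (child view) *@
@{
    Layout = "/Views/Emails/Shared/EmailLayout.cshtml";
    ViewData["EmailTitle"] = "Welcome";
}

@* EmailLayout.cshtml (parent layout) renders it in the hero area *@
<h1>@ViewData["EmailTitle"]</h1>
@RenderBody()
```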
Now that we’ve finished the Email Layout, let’s look at adding a custom button component.
Create a HTML Email Button Partial for re-usability
The next thing I want to do is start to make reusable components via partial views in order to abstract away the complexity of certain HTML Email components, as well as always providing a consistent look to the end user.
A good example of this is the button:
All that really needs to be dynamic about this is the text and the link. That should be easy enough.
Under /Views/Shared add an EmailButtonViewModel.cs class
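A sketch of what that view model might contain, given that only the text and the link need to be dynamic (the property names here are my guess):

```csharp
public class EmailButtonViewModel
{
    public EmailButtonViewModel(string text, string url)
    {
        Text = text;
        Url = url;
    }

    // The button's display text, e.g. "Confirm Email"
    public string Text { get; }

    // The destination the button links to
    public string Url { get; }
}
```

A view would then render the button partial with something like `<partial name="EmailButton" model='new EmailButtonViewModel("Confirm Email", url)' />` (the partial name is an assumption).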
This is where it all starts to come together, and you can see the power of using Razor to build out our HTML emails. For the day-to-day emails we build for the rest of our application, we no longer have to worry about gross table syntax or inline-style craziness; we can just focus on the custom content that makes up that HTML Email.
Create a .NET Core Class Library
One of the coolest things about this is that we can call this code from regular .NET Core Class Libraries and have it all “just work.” This means we can share our email templates across multiple applications to provide the same look and feel across all of them, and have all that logic live right next to the rest of our business logic.
So let’s create a .NET Core Class Library.
The interesting bits are lines 25, where we create our ConfirmAccountEmailViewModel with the link we want to use and line 27 where we pass that model into our RazorViewToStringRenderer, and get back our Email’s HTML body.
As an aside, for testing local emails without an email server, I love using Papercut. It’s a simple install and then you’re up and going with an email server that runs locally and also provides an email client. No, Papercut didn’t pay me to say that either. It’s free. If some paid service that does the same thing as Papercut wants to sponsor this blog, feel free to reach out to me and get rejected, because you will never get me to give up Papercut.
Hook it up to the UI
The last step we have to do is to hook this up to the UI. I’m just going to hook this up on a GET request of the /Index page just for demo purposes. In reality, you’d have an input form that takes registration and have this happen on the POST.
In your .AspNetCore project, add a project reference to .Common
In Startup.cs under ConfigureServices wire up the IRegisterAccountService and the IRazorViewToStringRenderer to their implementations.
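That wiring might look like the following (the lifetimes are a judgment call, and the implementation class names are assumed to mirror the interfaces from this walkthrough):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // Names follow this walkthrough's interfaces; scoped is a reasonable default here
    services.AddScoped<IRazorViewToStringRenderer, RazorViewToStringRenderer>();
    services.AddScoped<IRegisterAccountService, RegisterAccountService>();
}
```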
And you’re done! When I run the app, and open up Papercut, I get my email in all its glory.
Going forward, anytime I need an email, I can simply leverage the EmailLayout and EmailButton infrastructure I’ve created to make creating HTML emails incredibly easy and less table-y which is a huge win.
It’s extremely easy to increase the number of iterations in the default ASP.NET Core Identity PasswordHasher. ASP.NET Core Identity will also take care of rehashing the password if it was previously hashed with a lower iteration count, so you can increase this at any time. However, test the performance of the login page of your application before changing this number, to make sure you don’t set it too high.
I’m not going to cover why you should be hashing passwords in your application. Presumably, if you’ve landed on this blog post, you already know why hashing passwords is infinitely more secure than encrypting them (or worse… storing them in plaintext). Andrew Lock goes into detail why if you’re looking for that information. Instead, I’m going to talk about hashing iterations, specifically.
The iteration count is the “work factor”: how many times you hash a password before you store it in your database (if you’re not using something like Auth0 that stores your application’s password hashes for you). In the default ASP.NET Core Identity implementation, you take a password and hash it once. Then you hash the hash, then you hash that double hash, and so on, until you’ve hashed the password 10,000 times.
The more times you hash the password, the longer it takes your CPU to complete that operation, and therefore the longer it would take someone to brute force the password, if someone were to ever get your password hash. So if someone malicious were to ever compromise your database containing your application’s usernames and password hashes (it’s not like that’s ever happened before), the higher your iteration count is, the harder it is for them to crack your password hashes. The goal here is to buy time, because eventually (given enough time), they will crack a user’s password hash. The more time you have to notify a user and give them a chance to change their password, the better.
So if you increase the number of iterations from 10,000 to 100,000, you are making it 10x harder for someone to crack your hash, because their CPU/GPU has 10x more work to do.
While 10,000 iterations may (or may not) be enough for your application today, as hardware gets faster and faster, there will come a day when 10,000 iterations of this algorithm will certainly not be enough. We’ve already seen weaker algorithms like MD5 and SHA-1 get cracked. Therefore, it is critical that we’re able to change this at some point to maintain the proper security for our application.
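In ASP.NET Core Identity, raising the count is a one-liner in ConfigureServices via PasswordHasherOptions (the 100,000 value below is just an example, not a recommendation):

```csharp
using Microsoft.AspNetCore.Identity;

public void ConfigureServices(IServiceCollection services)
{
    // The default IterationCount is 10,000; stress test your login before raising it
    services.Configure<PasswordHasherOptions>(options =>
    {
        options.IterationCount = 100_000;
    });
}
```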
Nope, it won’t break your existing users! That’s the cherry on top. The iteration count is also stored as part of the password hash in the Identity database (more details here).
For example, let’s say you started your application with the default of 10,000 iterations, and then later you decided to increase it to 25,000. When a user provides a valid username and password combination to your login page/endpoint, the PasswordHasher checks to see how many iterations the current password is hashed with. It then checks to see if that database-stored iteration count is less than PasswordHasherOptions.IterationCount. In this case, the hash is stored in the database with an iteration count of 10,000 and the PasswordHasherOptions.IterationCount is 25,000. Therefore, the PasswordHasher will rehash the password using 25,000 iterations and save it back to the database. This allows you to progressively upgrade your site to a stronger iteration count.
Obviously, if a user never logs in again, their password will be stored at the lower iteration count forever, which may be a problem. There are ways to solve this problem, but that’s outside the scope of this post.
NOTE: It was mentioned subtly, so I’m going to call it out again. It will only re-hash the password if the database-stored iteration count is LESS than the PasswordHasherOptions.IterationCount. So if you do something silly like set the PasswordHasherOptions.IterationCount to 1 trillion temporarily, and then reset it back to 10,000 later, every user who logged into your site when it was 1 trillion will NEVER be downgraded to 10,000. This will likely be a performance problem and leave you in a world of hurt. More on performance next.
Like anything in software, there are always trade-offs. There is no free lunch. If you increase the number of iterations, then your server(s) will have to do extra work every time you log someone in. So you probably don’t want to set this number to something like 1 trillion, because your server probably won’t return in a timely manner. As mentioned in bold above, if you set this iteration count to a high number and then later lower it, the PasswordHasher will NOT downgrade the password hash for you. So you will need to do appropriate stress testing on what your environment can handle.
As always, there’s a trade-off between security and convenience. You can’t have both. You will likely need to find a happy medium between a strong iteration count and how long it takes a user to login. I’m not a security expert, but Brock Allen (who is a security expert) said in 2014 that – “The general consensus is that it should take about one second to compute a password hash.” In that same article, Brock gives a formula that says we should be using 512,000 iterations in 2018. Using 512,000 iterations running on my laptop’s 8th gen i7 results in a login process that takes ~900ms on average. Again – do what makes sense for your scenario and make sure you test appropriately.
If you’re using something like Auth0, Okta, Azure B2C, any other cloud identity-as-a-service solution, or you’re exclusively using Social Logins, then you likely won’t need to worry about any of this. They are choosing an iteration count/work factor/algorithm for you. However, there are still many apps who store password hashes in their own proprietary databases, and I didn’t see a post on how to configure this, so that’s why I typed this up.
As mentioned a few times in this post, I am not a security expert and I’ve tried to stray away from giving specific advice in this post. Do what makes sense for your application.
If you have those options listed, that shouldn’t come as much of a surprise. The ASP.NET Core templates have these options explicitly listed by default, so I would expect most people who use the Bundler & Minifier to follow suit.
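For reference, a bundleconfig.json with those options spelled out explicitly looks roughly like the template default (the paths below are the template's, not a requirement):

```json
[
  {
    "outputFileName": "wwwroot/css/site.min.css",
    "inputFiles": [ "wwwroot/css/site.css" ]
  },
  {
    "outputFileName": "wwwroot/js/site.min.js",
    "inputFiles": [ "wwwroot/js/site.js" ],
    "minify": { "enabled": true, "renameLocals": true },
    "sourceMap": false
  }
]
```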
If you want to verify that the bundles are the same, it’s easy to test it out in Visual Studio.
Go to the Task Runner Explorer in Visual Studio
Go to bundleconfig.json
Right-click on Update all files
Then check to see if your version control picks up any changes. Spoiler: it won’t. 🙂
Inject an IHealthCheckService and call CheckHealthAsync to run all your Health Checks, or use another method like RunCheckAsync or RunGroupAsync to run a subset of the Health Checks you registered in ConfigureServices.
Health Checks are pretty much what the name implies: a way of checking whether your application is healthy. They have become especially critical as more and more applications move to a microservice-style architecture. While microservice architectures have many benefits, one of the downsides is the higher operations overhead of ensuring all of those services are running. Rather than monitoring the health of one majestic monolith, you need to monitor the status of many different services, each usually responsible for one thing and one thing only. Health Checks are usually used in combination with a service discovery tool such as Consul that monitors your microservices for when they become healthy and unhealthy. If you use Consul for service discovery as well, Consul will automatically route traffic away from your unhealthy microservices and only serve traffic to your healthy microservices… which is awesome.
How do I implement a Health Check?
There are a few different ways to do Health Checks, but the most common way is exposing an HTTP endpoint to your application dedicated to doing Health Checks. Typically you will return a status code of 200 if everything is good, and any non-2xx code means something went wrong. For example, you might return a 500 if something went wrong along with a JSON payload of what exactly went wrong.
Common scenarios to Health Check
What you Health Check will be based on what your application/microservice does, but some common things:
Can my service connect to a database?
Can my service query a 3rd party API?
Likely making some read-only call
Can my service access the file system?
Is the Memory and/or CPU above a certain threshold?
Looking at the Microsoft.AspNetCore.HealthChecks package
Microsoft is on the verge of shipping a set of Health Check packages to help you solve this problem in a consistent way. If you look in the GitHub repo you will notice there is also a package for ASP.NET 4.x as well under the Microsoft.AspNet.HealthChecks namespace. There is a samples folder on that GitHub repo that contains how to wire that up if you’re interested in ASP.NET 4.x. I’m going to focus on the ASP.NET Core package for this blog post.
The Microsoft.AspNetCore.HealthChecks package targets netcoreapp1.0, but I suspect this will change to be either netcoreapp2.0 or netstandard2.0 by the time this RTM’s. The ASP.NET 4 project targets net461, and all the other libraries target netstandard1.3, which works with both .NET Core and the Full Framework.
The basic flow is that you register your health checks in your IoC container of choice (such as the built-in Microsoft one, although I prefer SimpleInjector due to the fantastic feature set, blazing speed, and ridiculously good documentation, but I’ll just use the built-in one for these demos). You register these Health Checks via a fluent HealthCheckBuilder API in your Startup‘s ConfigureServices method. This HealthCheckBuilder will build a HealthCheckService and register it as an IHealthCheckService in your IoC container.
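That flow looks something like this (a sketch against the preview API described here; names may change by RTM):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // The fluent HealthCheckBuilder API registers an IHealthCheckService behind the scenes
    services.AddHealthChecks(checks =>
    {
        checks.AddUrlCheck("https://github.com");
    });

    services.AddMvc();
}
```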
You get back a CompositeHealthCheckResult which is a summary of all of your health checks that you registered in your AddHealthChecks method in your Startup class.
That CompositeHealthCheckResult class has a CheckStatus property, which is an enum with 4 options: Healthy, Unhealthy, Warning, and Unknown. You can determine what you want to do with each of those. In my simple example above, I consider anything other than Healthy to be a problem and return a 500 if it’s not Healthy.
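A sketch of such an endpoint (the route and controller shape are my own; the service and enum come from the preview package described above):

```csharp
[Route("health")]
public class HealthController : Controller
{
    private readonly IHealthCheckService _healthCheckService;

    public HealthController(IHealthCheckService healthCheckService)
    {
        _healthCheckService = healthCheckService;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        var result = await _healthCheckService.CheckHealthAsync();

        // Anything other than Healthy is treated as a problem and returns a 500
        if (result.CheckStatus != CheckStatus.Healthy)
        {
            return StatusCode(500, result);
        }

        return Ok(result);
    }
}
```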
You can also loop over the results of the CompositeHealthCheckResult by looking at the Results property and get even more detail about what exactly happened.
You can optionally run a single Health Check by calling RunCheckAsync and supplying the name of the Health Check that you registered in your ConfigureServices method (more on that later).
Out of the box Health Checks
Microsoft ships quite a few Health Checks out of the box that fit into the Common Scenarios section above. They are:
URL Health Check via AddUrlCheck
SQL Server Health Check via AddSqlCheck
PrivateMemorySizeCheck via AddPrivateMemorySizeCheck
VirtualMemorySizeCheck via AddVirtualMemorySizeCheck
WorkingSetCheck via AddWorkingSetCheck
A few Azure Health Checks (such as BLOB Storage, Table Storage, File Storage, and Queue storage).
Let’s take a look at the URL Health Check and the SQL Server Health Check.
URL Health Checks
The URL Health Check lets you specify a URL and then it will execute a GET to that URL and see if the URL returns a Success Status Code or not (any 2xx Status Code like 200).
You can register the URL Health Check by adding this.
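Based on the preview API, that registration looks something like this (the TimeSpan parameter is the cache duration, covered below):

```csharp
services.AddHealthChecks(checks =>
{
    // A 1ms cache duration effectively disables result caching (the default is 5 minutes)
    checks.AddUrlCheck("https://github.com", TimeSpan.FromMilliseconds(1));
});
```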
Then you inject your IHealthCheckService and call CheckHealthAsync as shown above. If you want to just run this single Health Check, and not others you may have registered, you’ll need to know that the name is not configurable. The name will be UrlCheck(https://github.com). So you would run that single check with RunCheckAsync("UrlCheck(https://github.com)").
Another thing to note: that second parameter where I’m passing TimeSpan.FromMilliseconds(1) is the CacheDuration of the HealthCheckResult. The default is 5 minutes. So if you have some other service (like Consul) pinging your Health Check endpoint every minute, the HealthCheckResult will be the same for 5 minutes until the CacheDuration expires. To me, that doesn’t make a ton of sense, and I don’t want to risk an up-to-5-minute delay in being notified when my service becomes unhealthy. So by only adding a 1 millisecond cache, I’m effectively adding no caching at all.
There is also another parameter to the AddUrlCheck method where you can pass a Func to the URL Checker. This is nice in scenarios such as:
You want to execute something other than a GET.
You need to do something special with the HttpRequest in general such as add Auth Headers or something.
You want to validate the response’s Content contains some specific words or HTML.
So the URL Check should satisfy just about any Web check you could possibly want to do with that flexibility.
Built-in SQL Server Health Checks
The SQL Check lets you specify a name and a connection string to connect to.
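A sketch of that registration (where the connection string comes from is up to you; I'm pulling it from configuration here):

```csharp
services.AddHealthChecks(checks =>
{
    // "SQL DB Check" is just a name I chose; the second argument is the connection string
    checks.AddSqlCheck("SQL DB Check", Configuration.GetConnectionString("DefaultConnection"));
});
```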
The first parameter, “SQL DB Check”, is just the name I chose. You can make it whatever makes sense to you. To run this check, as mentioned above, you would call this from IHealthCheckService.
Making your own custom Health Check
You can of course make your own custom Health Check. For me, most of my use cases are solved by the built-in ones, as I’m usually checking if an API is available (which I could do with the URL Check and overriding the checkFunc parameter) or I’m checking to see if a SQL Server is available. But you could implement your own if you are missing some functionality that you need such as checking if another DB store is available or how much free space a drive has.
To do that, derive from IHealthCheck and implement the interface. Below is an example of one that checks to make sure the C drive has at least 1 GB of free space.
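A sketch of such a check (the IHealthCheck surface here is my best reading of the preview repo and may differ from what ships; the DriveInfo logic is standard .NET):

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public class CDriveHasMoreThan1GbFreeHealthCheck : IHealthCheck
{
    public ValueTask<IHealthCheckResult> CheckAsync(CancellationToken cancellationToken = default)
    {
        const long oneGb = 1024L * 1024L * 1024L;
        var cDrive = new DriveInfo("C");

        // Healthy if there's at least 1 GB free on C:
        var result = cDrive.AvailableFreeSpace > oneGb
            ? HealthCheckResult.Healthy($"C: has {cDrive.AvailableFreeSpace} bytes free")
            : HealthCheckResult.Unhealthy($"C: only has {cDrive.AvailableFreeSpace} bytes free");

        return new ValueTask<IHealthCheckResult>(result);
    }
}
```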
Then in your ConfigureServices method, register the custom Health Check with the lifestyle that makes sense for the Health Check (Singleton, Scoped, Transient, etc.) and then add it to the AddHealthChecks registration that we’ve done before.
You can group your health checks together in a HealthCheckGroup if you want (such as all performance checks like CPU, Memory, Disk Space, etc. go under a group called “performance”) or you can let them live on their own and mix and match.
This enables you to do things like only call that Group of Health Checks via the RunGroupAsync method off of IHealthCheckService.
Reminder – this is demo code. Some flaws include that my Health Check endpoint is unsecured and anyone can hit it. You will likely want to secure your Health Check endpoint, especially if it is on the Internet, so someone doesn’t spam it. There are many ways to do this, but they are outside the scope of this blog post.
Some Feedback on the Design
Overall I think this abstraction is really useful, and I will use it myself once it RTM’s. The built-in health checks are nice, so that you don’t have to write that logic yourself. I’m all about punting as much logic onto someone else as possible.
There are some little things I wish were a little easier though.
It seems like the HealthCheckResult.CheckStatus == CheckStatus.Healthy check is going to be extremely common. It’d be nice if there was a helper property on HealthCheckResult, like HealthCheckResult.IsHealthy, which does that computation for you, much like HttpResponseMessage has an IsSuccessStatusCode property, which is super useful. Although, I understand that “Healthy” is a relative term that’s tough to globally define. Some people might think that CheckStatus.Warning would qualify as being Healthy and others wouldn’t. Ubiquitous languages are hard.
I wish there was a way to override the name of the built-in Health Checks. For example, the URL Health Check automatically takes on the name UrlCheck(yourUrlHere), such as UrlCheck(http://google.com). You’ll need this name if you want to pull out the specific results of a Health Check. I had expected to be able to specify the name of each Health Check and store it in something like a HealthCheckConstants class for easy retrieval. Instead, I need to follow this convention when using the constants class, which isn’t the end of the world, but being able to override the name would be nice.
When calling the RunGroupAsync method, I wish you could just specify the group name rather than the HealthCheckGroup instance and let the RunGroupAsync method handle getting the HealthCheckGroup instance.
There should be no Cache Duration on the URL Check. The 5-minute default is just too long, and IMO caching doesn’t belong on this check at all. I control how often my Health Check monitoring service hits my health check endpoint. If I want it to check my service every minute, then I’d expect the Health Check result to be fresh every time, not cached.
Overall, I really like this package, and it seems like it’s going to be really useful. I plan on using this when it RTM’s, so I’ll keep this post up to date as they make changes to the package.
If you’ve ever seen this message when hitting your ASP.NET Core app:
“An error occurred while starting the application. .NET Framework <version number> | Microsoft.AspNetCore.Hosting version <version number> | Microsoft Windows <version number>”
It looks a little something like this:
It basically means something really bad happened with your app. Some things that might have gone wrong:
You might not have the correct .NET Core version installed on the server.
You might be missing DLLs
Something went wrong in your Program.cs or Startup.cs before any exception handling kicked in
Event Viewer (probably) won’t show you anything
If you’re running on Windows behind IIS, you might immediately go to the Event Viewer to see what happened, based on your previous ASP.NET knowledge. You’ll notice that the error is not there. This is because Event Logging must be wired up explicitly using the Microsoft.Extensions.Logging.EventLog package, and depending on the error, your app might not even get the chance to log to the Event Viewer before it dies.
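For completeness, wiring up Event Log logging looks roughly like this (AddEventLog comes from the Microsoft.Extensions.Logging.EventLog package; as noted above, this only helps with errors thrown after logging is configured):

```csharp
// Program.cs — sketch, assuming an ASP.NET Core 2.x-style host
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            // Requires the Microsoft.Extensions.Logging.EventLog package;
            // won't capture failures that happen before the host is built
            logging.AddEventLog();
        })
        .UseStartup<Startup>()
        .Build();
```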
How to figure out what happened (if running on IIS)
Instead of the Event Viewer, if you’re running behind IIS, you can log the startup error out to a file. To do that:
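The usual mechanism is to turn on the ASP.NET Core Module’s stdout logging in web.config (sketched below; MyApp.dll is a placeholder for your own assembly). Note that the logs folder must already exist, and you’ll want to turn this back off once you’ve diagnosed the problem:

```xml
<!-- web.config: enable stdout logging for the ASP.NET Core Module -->
<aspNetCore processPath="dotnet"
            arguments=".\MyApp.dll"
            stdoutLogEnabled="true"
            stdoutLogFile=".\logs\stdout" />
```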
UPDATE: Looks like this was fixed on 4/10/17 in the Update 1 26403.03 Release Notes. I would just upgrade to get the fix.
I got an error when upgrading to Visual Studio 2017 15.1 where it wouldn’t install the .NET Core workload. This prevented me from opening ASP.NET Core projects.
The product failed to install the listed workloads and components due to one or more package failures.
Incomplete workloads
.NET Core cross-platform development (Microsoft.VisualStudio.Workload.NetCoreTools,version=15.0.26323.1)
ASP.NET and web development (Microsoft.VisualStudio.Workload.NetWeb,version=15.0.26323.1)
Incomplete components
.NET Core 1.0 – 1.1 development tools (Microsoft.NetCore.ComponentGroup.Web,version=15.0.26208.0)
.NET Core 1.0.1 development tools (Microsoft.Net.Core.Component.SDK,version=15.0.26208.0)
Container development tools (Microsoft.VisualStudio.Component.DockerTools,version=15.0.26323.1)
You can search for solutions using the information below, modify your selections for the above workloads and components and retry the installation, or remove the product from your machine.
Following is a collection of individual package failures that led to the incomplete workloads and components above. To search for existing reports of these specific problems, please copy and paste the URL from each package failure into a web browser. If the issue has already been reported, you can find solutions or workarounds there. If the issue has not been reported, you can create a new issue where other people will be able to find solutions or workarounds.
Package ‘Microsoft.Net.Core.SDK,version=15.0.26323.1,chip=x64’ failed to install.
Search URL: https://aka.ms/VSSetupErrorReports?q=PackageId=Microsoft.Net.Core.SDK;PackageAction=Install;ReturnCode=1638
Impacted workloads
.NET Core cross-platform development (Microsoft.VisualStudio.Workload.NetCoreTools,version=15.0.26323.1)
ASP.NET and web development (Microsoft.VisualStudio.Workload.NetWeb,version=15.0.26323.1)
Impacted components
.NET Core 1.0 – 1.1 development tools (Microsoft.NetCore.ComponentGroup.Web,version=15.0.26208.0)
.NET Core 1.0.1 development tools (Microsoft.Net.Core.Component.SDK,version=15.0.26208.0)
Container development tools (Microsoft.VisualStudio.Component.DockerTools,version=15.0.26323.1)
Log
C:\Users\scotts\AppData\Local\Temp\dd_setup_20170406164409_003_Microsoft.Net.Core.SDK.log
Details
Command executed: “C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.Net.Core.SDK,version=15.0.26323.1,chip=x64\dotnet-dev-win-x184.108.40.206.exe” “C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise” /log “C:\Users\scotts\AppData\Local\Temp\dd_setup_20170406164409_003_Microsoft.Net.Core.SDK.log” /quiet /norestart
Return code: 1638
Return code details: Another version of this product is already installed. Installation of this version cannot continue. To configure or remove the existing version of this product, use Add/Remove Programs on the Control Panel.
I ended up fixing this by:
Repairing Microsoft Visual C++ 2017 x64, then rebooting.
Repairing Microsoft Visual C++ 2017 x86, then rebooting.
Installing the missing features by opening an ASP.NET Core project, right-clicking, and clicking “Install Missing Features.” I’m sure you can also get there by “modifying” the VS 2017 installation.