Increasing Password Hashing Iterations with ASP.NET Core Identity

tldr;

It’s extremely easy to increase the number of iterations in the default ASP.NET Core Identity PasswordHasher.  ASP.NET Core Identity will also take care of rehashing the password if it was previously hashed with a lower iteration count, so you can increase this at any time.  However, test the performance of the login page of your application before changing this number, to make sure you don’t set it too high.


Password Hashing and Salting Basics

I’m not going to cover why you should be hashing passwords in your application.  Presumably, if you’ve landed on this blog post, you already know why hashing passwords is infinitely more secure than encrypting them (or worse… storing them in plaintext).  Andrew Lock goes into detail on why, if you’re looking for that information.  Instead, I’m going to talk specifically about hashing iterations.


ASP.NET Core Identity Default – 10,000 iterations

As Andrew mentions in his blog post linked above, the default password hashing algorithm is PBKDF2 with HMAC-SHA256, a 128-bit salt, a 256-bit subkey, and 10,000 iterations.  The 10,000 iterations is the part we’re interested in for this blog post.  In June 2017, NIST recommended at least 10,000 iterations on the 25th page of their report under section 5.1.1.2, so this is a reasonable default.


Why do iterations matter?

The iteration count is the “work factor”: how many times you hash a password before you store it in your database (if you’re not using something like Auth0 that stores your application’s password hashes for you).  In the default ASP.NET Core Identity implementation, you take a password and hash it once.  Then you hash the hash, then you hash that double hash, and so on, until you’ve hashed the password 10,000 times.

The more times you hash the password, the longer it takes your CPU to complete that operation, and therefore the longer it would take someone to brute force the password, if someone were to ever get your password hash.  So if someone malicious were to ever compromise your database containing your application’s usernames and password hashes (it’s not like that’s ever happened before), the higher your iteration count is, the harder it is for them to crack your password hashes.  The goal here is to buy time, because eventually (given enough time), they will crack a user’s password hash.  The more time you have to notify a user and give them a chance to change their password, the better.

So if you increase the number of iterations from 10,000 to 100,000, you are making it 10x harder for someone to crack your hash, because their CPU/GPU has 10x more work to do.

While 10,000 iterations may (or may not) be enough for your application today, as hardware gets faster and faster, there will come a day when 10,000 iterations with this algorithm will certainly not be enough.  We’ve already seen weaker algorithms like MD5 and SHA-1 get cracked.  Therefore, it is critical that we’re able to change this at some point to maintain the proper security for our application.


How can I change it?

As mentioned in the tldr; above, it’s just a one-liner to change in ASP.NET Core Identity, which is a HUGE upgrade from regular ASP.NET Identity, where you had to provide a whole new PasswordHasher implementation.  Simply go to ConfigureServices and configure PasswordHasherOptions.IterationCount, and that’s it!
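For example, a minimal sketch (the 100,000 value is purely illustrative, not a recommendation):

    public void ConfigureServices(IServiceCollection services)
    {
        // ... your existing Identity registration goes here ...

        // Bump the PBKDF2 iteration count up from the default of 10,000.
        // 100,000 is illustrative - load test your login before picking a number.
        services.Configure<PasswordHasherOptions>(options => options.IterationCount = 100_000);
    }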


But won’t this break my existing users?

Nope, it won’t break your existing users!  That’s the cherry on top.  The iteration count is also stored as part of the password hash in the Identity database (more details here).

For example, let’s say you started your application with the default of 10,000 iterations, and then later you decided to increase it to 25,000.  When a user provides a valid username and password combination to your login page/endpoint, the PasswordHasher checks to see how many iterations the current password is hashed with.  It then checks to see if that database-stored iteration count is less than PasswordHasherOptions.IterationCount.  In this case, the hash is stored in the database with an iteration count of 10,000 and the PasswordHasherOptions.IterationCount is 25,000.  Therefore, the PasswordHasher will rehash the password using 25,000 iterations and save it back to the database.  This allows you to progressively upgrade your site to a stronger iteration count.
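In code, that flow looks roughly like this (a simplified sketch of what UserManager does for you under the hood, not the actual Identity source; the _passwordHasher and _userManager fields are placeholders):

    // Runs after the user has supplied a valid username and password
    var result = _passwordHasher.VerifyHashedPassword(user, user.PasswordHash, providedPassword);

    if (result == PasswordVerificationResult.SuccessRehashNeeded)
    {
        // The stored hash used fewer iterations than the configured IterationCount,
        // so rehash the (now verified) password and persist the stronger hash
        user.PasswordHash = _passwordHasher.HashPassword(user, providedPassword);
        await _userManager.UpdateAsync(user);
    }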

If you’re curious how this works, check out the source code.

Obviously, if a user never logs in again, their password will be stored at the lower iteration count forever, which may be a problem.  There are ways to solve this problem, but that’s outside the scope of this post.

NOTE: It was mentioned subtly, so I’m going to call it out again.  It will only rehash the password if the database-stored iteration count is LESS than the PasswordHasherOptions.IterationCount.  So if you do something silly like set the PasswordHasherOptions.IterationCount to 1 trillion temporarily, and then reset it back to 10,000 later, every user who logged into your site when it was 1 trillion will NEVER be downgraded to 10,000.  This will likely be a performance problem and leave you in a world of hurt.  More on performance next.


Performance considerations

Like anything in software, there are always trade-offs.  There is no free lunch.  If you increase the number of iterations, then your server(s) will have to do extra work every time you log someone in.  So you probably don’t want to set this number to something like 1 trillion, because your server probably won’t return in a timely manner.  As mentioned in bold above, if you set this iteration count to a high number and then later lower it, the PasswordHasher will NOT downgrade the password hash for you.  So you will need to do appropriate stress testing on what your environment can handle.

As always, there’s a trade-off between security and convenience.  You can’t have both.  You will likely need to find a happy medium between a strong iteration count and how long it takes a user to log in.  I’m not a security expert, but Brock Allen (who is a security expert) said in 2014 that “the general consensus is that it should take about one second to compute a password hash.”  In that same article, Brock gives a formula which says we should be using 512,000 iterations in 2018.  Using 512,000 iterations running on my laptop’s 8th gen i7 results in a login process that takes ~900ms on average.  Again: do what makes sense for your scenario and make sure you test appropriately.
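If you want a rough feel for the cost on your own hardware before committing to a number, you can time a single hash.  A minimal sketch, assuming the Microsoft.AspNetCore.Identity and Microsoft.Extensions.Options packages:

    using System;
    using System.Diagnostics;
    using Microsoft.AspNetCore.Identity;
    using Microsoft.Extensions.Options;

    // Time one hash at a candidate iteration count
    var hasher = new PasswordHasher<IdentityUser>(
        Options.Create(new PasswordHasherOptions { IterationCount = 512_000 }));

    var sw = Stopwatch.StartNew();
    hasher.HashPassword(new IdentityUser("testuser"), "correct horse battery staple");
    sw.Stop();

    Console.WriteLine($"One hash took {sw.ElapsedMilliseconds}ms");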


Final thoughts

If you’re using something like Auth0, Okta, Azure B2C, any other cloud identity-as-a-service solution, or you’re exclusively using Social Logins, then you likely won’t need to worry about any of this.  They are choosing an iteration count/work factor/algorithm for you.  However, there are still many apps that store password hashes in their own databases, and I didn’t see a post on how to configure this, so that’s why I typed this up.

As mentioned a few times in this post, I am not a security expert and I’ve tried to stray away from giving specific advice in this post.  Do what makes sense for your application.


Hope this helps!

Clean Up Your bundleconfig.json By Embracing The Defaults

If you’re using the Bundler & Minifier package in ASP.NET Core (instead of something like Webpack, Gulp, or the new hotness Parcel) and your bundleconfig.json has these properties set on your JavaScript files (shown here roughly as they appear in the default template):
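    "minify": {
      "enabled": true,
      "renameLocals": true
    },
    "sourceMap": false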

You can remove those, because they’re the defaults.  On a project I’m working on that uses it, removing them shaved ~50 LOC off our bundleconfig.json, which kept things less noisy.  Here’s an illustrative before and after (the file names are made up; the shape is what matters):

Before:
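    [
      {
        "outputFileName": "wwwroot/js/site.min.js",
        "inputFiles": [
          "wwwroot/js/site.js"
        ],
        "minify": {
          "enabled": true,
          "renameLocals": true
        },
        "sourceMap": false
      }
    ]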

After:
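    [
      {
        "outputFileName": "wwwroot/js/site.min.js",
        "inputFiles": [
          "wwwroot/js/site.js"
        ]
      }
    ]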


If you have those options listed, that shouldn’t come as much of a surprise.  The ASP.NET Core templates have these options explicitly listed by default, so I would expect most people who use the Bundler & Minifier to follow suit.


If you want to verify that the bundles are the same, it’s easy to test it out in Visual Studio.

  1. Go to the Task Runner Explorer in Visual Studio
  2. Go to bundleconfig.json
  3. Right-click on Update all files
  4. Click Run

Then check to see if your version control picks up any changes.  Spoiler: it won’t.  🙂


Hope this helps!

Using ASP.NET Core TestServer + EF Core In Memory DB Together

I haven’t seen much about using the ASP.NET Core TestServer and EF Core’s InMemoryDb together, despite them being a natural fit, so I thought I’d blog about it.  There are a few different ways you could achieve this (more options at the bottom of the post); this is just one simple solution.


tldr;

  1. Create a custom environment called Testing.
  2. In your Startup.cs, add a check to see if you’re in the Testing Environment and, if so, use the InMemoryDb provider.
  3. In your TestServer setup, set the environment to Testing.
  4. Grab the DbContext from TestServer via server.Host.Services.GetService(typeof(ApplicationDbContext)) as ApplicationDbContext and add the data you want there.
  5. You now have an in memory web server and an in memory database working together!

Full Code here on GitHub

Quick look at the final test class (a sketch; see the GitHub repo above for the real thing):
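    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.TestHost;
    using Xunit;

    public class ApplicationUsersControllerGetApplicationUser
    {
        private readonly HttpClient _client;
        private readonly ApplicationDbContext _context;

        public ApplicationUsersControllerGetApplicationUser()
        {
            // Boot an in memory web server using the Testing environment
            var builder = new WebHostBuilder()
                .UseEnvironment("Testing")
                .UseStartup<Startup>();

            var server = new TestServer(builder);
            _client = server.CreateClient();

            // Grab the same DbContext the server registered so we can seed data
            _context = server.Host.Services.GetService(typeof(ApplicationDbContext)) as ApplicationDbContext;
        }

        [Fact]
        public async Task DoesReturnOk_GivenUserExists()
        {
            // Arrange - seed a user straight into the in memory database
            _context.Users.Add(new ApplicationUser { Id = "1", UserName = "test@test.com" });
            _context.SaveChanges();

            // Act - hit the real endpoint through the in memory server
            var response = await _client.GetAsync("/api/applicationusers/1");

            // Assert
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }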



Test Server

Unit tests test a small piece of the raw logic of your code in isolation from any web server, database, file system, etc.  Instead of using a real database, you would use a fake or mock database.

Integration tests, on the other hand, are a way of doing automated tests that integrate various components of your app together.  They might use a combination of a real web server, real web service, real database, real browser, etc.  ASP.NET Core gives us an easy way to write integration tests by way of an object called TestServer.

TestServer allows you to boot up an in memory web server with your full middleware pipeline, full set of dependencies registered, etc.  This enables you to do things like hit a route and make sure the server returns the response you expect.

For more information on TestServer, check out the official docs.


EF Core InMemory Database

EF Core has a provider that lets you run against an in memory database.  The intent of this provider is purely for testing purposes. 

This has a few benefits:

  1. The reads and writes are very fast.
  2. You don’t have to worry about having a real database to run your tests against.
  3. You don’t have to worry about cleaning up your data after your tests are done.  When the tests stop, your in memory database goes away.

For more information on the InMemory database, check out the official docs.


Using these two together

By now you may be thinking: “Cool, so I can create an in memory web server and an in memory EF database…. can I combine the two?”  The answer is most definitely, yes!  I hadn’t seen much out on the Interwebs about this, hence this blog post.

Let’s assume we have an API endpoint that looks something like this (a sketch along the lines of the demo code; route and names are illustrative):
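    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.EntityFrameworkCore;

    [Route("api/[controller]")]
    public class ApplicationUsersController : Controller
    {
        private readonly ApplicationDbContext _context;

        // NOTE: DEMO CODE - injecting the DbContext straight into a Controller
        // just keeps the example small
        public ApplicationUsersController(ApplicationDbContext context)
        {
            _context = context;
        }

        [HttpGet("{id}")]
        public async Task<IActionResult> GetApplicationUser([FromRoute] string id)
        {
            var user = await _context.Users.SingleOrDefaultAsync(u => u.Id == id);

            if (user == null)
            {
                return NotFound();
            }

            return Ok(user);
        }
    }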


It’s pretty straightforward: we inject a DbContext into the Controller (NOTE: DEMO CODE) and then have an endpoint that takes in an ID and returns the appropriate response based on what the database returns.

So what I’d like to do is swap out the real database here for the in memory database.  Let’s do it!


First in your ASP.NET Core Application:

  1. In our Startup.cs, let’s inject an IHostingEnvironment in the constructor and save it to a field/property.
  2. In our ConfigureServices, let’s check the Environment to see if the current environment is a custom one called “Testing.”
  3. If so, then rig up the In Memory Database via services.AddDbContext<ApplicationDbContext>(options => options.UseInMemoryDatabase("TestingDB"));
  4. Note: TestingDB is just a unique name given to your In Memory Database.
  5. So it should look something like this (sketched below, assuming an ASP.NET Core 2.x-style Startup):
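    public class Startup
    {
        private readonly IHostingEnvironment _env;

        public Startup(IConfiguration configuration, IHostingEnvironment env)
        {
            Configuration = configuration;
            _env = env;
        }

        public IConfiguration Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            if (_env.IsEnvironment("Testing"))
            {
                // Tests run against the EF Core InMemory provider
                services.AddDbContext<ApplicationDbContext>(options =>
                    options.UseInMemoryDatabase("TestingDB"));
            }
            else
            {
                // Everything else runs against the real database
                services.AddDbContext<ApplicationDbContext>(options =>
                    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
            }

            services.AddMvc();
        }

        // Configure(...) is unchanged
    }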


Next in your Unit Testing project (I’m using xUnit, but you could use NUnit or MSTest):

1. Create a new Test Class

a. I follow the convention <ClassUnderTest><MethodUnderTest> for the class name.

b. In this case: ApplicationUsersControllerGetApplicationUser

2. Create a new constructor and set up a WebHostBuilder with the following code.  The two biggest pieces are the UseEnvironment("Testing") call (to match the environment check we added in Startup above), and the line where we grab the ApplicationDbContext from the ServiceProvider and save that off.
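    public ApplicationUsersControllerGetApplicationUser()
    {
        var builder = new WebHostBuilder()
            .UseEnvironment("Testing")   // matches the IsEnvironment("Testing") check in Startup
            .UseStartup<Startup>();

        var server = new TestServer(builder);
        _client = server.CreateClient();

        // Save the DbContext off so our tests can seed the in memory database
        _context = server.Host.Services.GetService(typeof(ApplicationDbContext)) as ApplicationDbContext;
    }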

3. Create a new test method

a. I follow the convention Does<Something>_Given<Scenario>.

b. In this case: DoesReturnOk_GivenUserExists

4. Create a user, add it to the context, and save it.  Then use the HttpClient we created and saved off above to query our endpoint and ask for the user’s ID.  Something like this (again, just a sketch):
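    [Fact]
    public async Task DoesReturnOk_GivenUserExists()
    {
        // Arrange - create a user, add it to the context, and save it
        _context.Users.Add(new ApplicationUser { Id = "1", UserName = "test@test.com" });
        _context.SaveChanges();

        // Act - query the endpoint with the user's ID
        var response = await _client.GetAsync("/api/applicationusers/1");

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }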


That’s it!  Now run your test and you will have successfully booted an in memory web server with an in memory database.


Now that we’ve tested the Ok path, we should probably test the NotFound path.  That’s simple to do as well, something like this:
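    [Fact]
    public async Task DoesReturnNotFound_GivenUserDoesNotExist()
    {
        // Act - no user with this ID was ever seeded
        var response = await _client.GetAsync("/api/applicationusers/9999");

        // Assert
        Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
    }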

The entire code can be found on GitHub here: https://github.com/scottsauber/TestServerAndInMemoryDbDemo.  The two interesting parts are the Startup and the Tests.


Some “Before You Copy + Paste” Caveats

  1. If you don’t like polluting your “Production” Startup class with Testing concerns (which is a very valid concern), you could create a Test-only Startup class.  There are good blog posts out there on how to do this.  Then on your TestServer, just swap out .UseStartup<Startup>() with .UseStartup<YourTestStartupClassHere>().
  2. In no way am I advocating injecting your DbContext directly into your Controller.  This is purely demo code, meant to remove layers and make it as easy to reason about as possible.  Likely you will have at least one “layer” in between your Controller and your DbContext.
  3. Note that this will be doing a real integration test, and I am ONLY swapping out the DB here.  If you need to swap out other components (like external services), you will also have to do that yourself (for example, with the same environment check in Startup, or in a Test-only Startup class).
  4. You will want to limit your creation of the TestServer and HttpClient, because they are mildly expensive (~100-200ms on my machine), and we want our tests to be as fast as possible.  If using xUnit, look into IClassFixture.


Hope this helps!