CacheDictionary for .Net 3.5, using ReaderWriterLockSlim?

By Frank Bakker · .NET · 13 years ago

    In my previous post I described how to create a thread safe data cache using PFX. PFX, however, is scheduled to be released as part of the .Net framework 4.0, which means we will have to wait a while before we can use it in real world applications. That’s why I created an implementation using the generic Dictionary combined with a ReaderWriterLockSlim, which are both available in .Net 3.5 today.

    public class CacheDictionary<TKey, TValue>
    {
        ReaderWriterLockSlim _cacheLock = new ReaderWriterLockSlim();
        Dictionary<TKey, LazyInit<TValue>> _cacheItemDictionary = new Dictionary<TKey, LazyInit<TValue>>();

        public TValue Fetch(TKey key, Func<TValue> producer)
        {
            LazyInit<TValue> cacheItem;
            bool found;

            // Optimistic path: most calls are cache hits, so try with only a read lock first
            _cacheLock.EnterReadLock();
            try
            {
                found = _cacheItemDictionary.TryGetValue(key, out cacheItem);
            }
            finally
            {
                _cacheLock.ExitReadLock();
            }

            if (!found)
            {
                _cacheLock.EnterWriteLock();
                try
                {
                    // Double check: another thread may have added the item between
                    // releasing the read lock and acquiring the write lock
                    if (!_cacheItemDictionary.TryGetValue(key, out cacheItem))
                    {
                        cacheItem = new LazyInit<TValue>(producer);
                        _cacheItemDictionary.Add(key, cacheItem);
                    }
                }
                finally
                {
                    _cacheLock.ExitWriteLock();
                }
            }

            return cacheItem.Value;
        }
    }

    This implementation uses the (thread safe version of) the double-checked locking pattern. The code first checks whether the item exists using only a read lock, and only if the item is not found does it acquire a write lock. To be sure the item was not added in the meantime, it checks again and adds the new item if it is still not found. Assuming there will be a fairly large number of cache hits compared to cache misses, this is (in theory) more efficient than exclusively locking the dictionary right away, because multiple readers can get items from the cache concurrently and we only block other threads if we need to add a new item.
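    To illustrate the intended behavior, here is a minimal usage sketch. It assumes the CacheDictionary<TKey, TValue> class above; the key and producer delegate are made up for the example:

```csharp
using System;

class Demo
{
    static void Main()
    {
        var cache = new CacheDictionary<int, string>();
        int producerCalls = 0;

        Func<string> producer = () => { producerCalls++; return "value-42"; };

        string first = cache.Fetch(42, producer);  // cache miss: producer runs
        string second = cache.Fetch(42, producer); // cache hit: cached value returned

        Console.WriteLine(first == second);  // True
        Console.WriteLine(producerCalls);    // 1: the producer ran only once
    }
}
```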

    Just like in my previous implementation, I did not want to keep the cache locked while actually retrieving or creating the item that needs to be cached. In the PFX version I did this by using LazyInit<T>, which will also be part of .Net 4.0 (they actually just renamed it to Lazy<T>). Because I wanted the cache to work on .Net 3.5, I created my own implementation of LazyInit<T>, which is not as versatile and highly optimized as the .Net 4.0 version, but does the job and is actually quite simple (see code below). While the cache-wide write lock is held, all I do is create an instance of LazyInit<T> and store that instance in the dictionary. After the cache-wide lock is released I call LazyInit<T>.Value, which will actually call the delegate to create the item as needed. LazyInit<T> uses a lock as well, but each instance has its own lock object; this way each instance will be initialized exactly once, while different items can be initialized concurrently.

    public class LazyInit<T>
    {
        Func<T> _producer;
        object _lock = new object();
        T _data;
        volatile bool _created;

        public LazyInit(Func<T> producer)
        {
            _producer = producer;
        }

        public T Value
        {
            get
            {
                if (!_created)
                {
                    lock (_lock)
                    {
                        // Double check inside the lock: another thread may have
                        // initialized the value while we were waiting
                        if (!_created)
                        {
                            _data = _producer.Invoke();
                            _created = true;
                            _producer = null; // release the delegate and anything it captured
                        }
                    }
                }
                return _data;
            }
        }
    }
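    A quick sketch of the guarantee this gives (again assuming the LazyInit<T> class above): when two threads race on the same instance, one runs the producer while the other blocks on the per-instance lock and then reuses the result.

```csharp
using System;
using System.Threading;

class LazyInitDemo
{
    static void Main()
    {
        int producerCalls = 0;
        var lazy = new LazyInit<int>(() =>
        {
            Interlocked.Increment(ref producerCalls);
            Thread.Sleep(50); // simulate an expensive item to provoke the race
            return 7;
        });

        var t1 = new Thread(() => Console.WriteLine(lazy.Value));
        var t2 = new Thread(() => Console.WriteLine(lazy.Value));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        Console.WriteLine(producerCalls); // 1: initialized exactly once
    }
}
```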

    Being quite happy with this implementation, which does a minimum of locking while still being thread safe, I sat back and relaxed.

    Only a short time after I created this implementation, a blog post by Joe Duffy, the lead developer/architect behind PFX, showed up in my feed reader:

    Reader/writer locks and their (lack of) applicability to fine-grained synchronization

    In this post Joe compares the ReaderWriterLockSlim to (among others) a plain mutual-exclusion lock, better known as the C# lock() statement. On a 4-core machine, when there are mostly readers and only a few writers, the reader/writer lock should theoretically be about 4 times as fast as the mutex: the reader/writer lock allows multiple readers to execute concurrently on all four cores, while the mutex only allows one thread to execute at a time, leaving 3 cores idle while the other is holding the lock.

    As it turns out, the mutex outperforms the ReaderWriterLockSlim in many scenarios, especially in those cases where the locks are held for a relatively short time (like a single lookup in a dictionary). Instead of being 4 times faster, the ReaderWriterLockSlim is actually about half as fast as the mutex! It seems that the internal book-keeping that needs to be done by the ReaderWriterLockSlim is far more expensive than the penalty of 3 cores being idle for a short time while 1 thread is holding the lock. Even Joe’s experimental, highly optimized spinning ReaderWriterLock does not do a much better job in these scenarios.

    The ReaderWriterLockSlim starts to outperform the mutex when the lock needs to be held for a longer time (which you should generally avoid anyway). The number of cores will probably influence the results as well; I suspect running the same test on a 256-core box would improve the performance of the reader/writer lock relative to the mutex, but unfortunately I have not been able to test this.

    To test how this influenced my CacheDictionary, I created yet another implementation using a mutex instead of the reader/writer lock. Compared to the ReaderWriterLockSlim version this code is actually quite simple, because it does not need to do the double-checked locking and the lock() statement does not require an explicit finally block. I did stick to the LazyInit approach to avoid locking the whole cache while creating the items to cache.

    public class CacheDictionary<TKey, TValue>
    {
        object _cacheLock = new object();
        Dictionary<TKey, LazyInit<TValue>> _cacheItemDictionary = new Dictionary<TKey, LazyInit<TValue>>();

        public TValue Fetch(TKey key, Func<TValue> producer)
        {
            LazyInit<TValue> cacheItem;

            lock (_cacheLock)
            {
                if (!_cacheItemDictionary.TryGetValue(key, out cacheItem))
                {
                    cacheItem = new LazyInit<TValue>(producer);
                    _cacheItemDictionary.Add(key, cacheItem);
                }
            }
            // The item itself is still produced outside the cache-wide lock
            return cacheItem.Value;
        }
    }

    I ran some simple performance tests on my dual core machine to compare this code to the ReaderWriterLockSlim version. My results were similar to Joe’s: the version with the mutex is way faster than the ReaderWriterLockSlim one, even when there are only readers and not a single writer! The actual results depend on a lot of factors, like the number of items in the dictionary and of course the amount of work that is done in the delegate that produces the cache item, but in general the reader/writer version took about 1.5 times as long as the mutex version on the same scenario. As in many cases: simplicity wins!
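    My original test setup is not included here, but a rough micro-benchmark along these lines reproduces the effect (the iteration count is arbitrary and absolute timings will vary per machine; both loops do a single read-only dictionary lookup under the lock):

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

class LockBenchmark
{
    static void Main()
    {
        const int iterations = 1000000;
        var dict = new Dictionary<int, int> { { 1, 1 } };
        var gate = new object();
        var rwLock = new ReaderWriterLockSlim();
        int value;

        // Plain mutual exclusion via lock()
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            lock (gate) { dict.TryGetValue(1, out value); }
        }
        sw.Stop();
        Console.WriteLine("lock():               " + sw.ElapsedMilliseconds + " ms");

        // The same lookup under a ReaderWriterLockSlim read lock
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            rwLock.EnterReadLock();
            try { dict.TryGetValue(1, out value); }
            finally { rwLock.ExitReadLock(); }
        }
        sw.Stop();
        Console.WriteLine("ReaderWriterLockSlim: " + sw.ElapsedMilliseconds + " ms");
    }
}
```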

    For the near future, I will stay away from reader/writer locks unless it is absolutely necessary to hold a lock for a fair amount of time. Maybe I will revisit this statement when commodity hardware reaches 32 or 64 cores.

    Joe Duffy concludes his post on this topic with: “Sharing is evil, fundamentally limits scalability, and is best avoided.” While in general I think he is right, the whole point of a cache is to share the result of a previous request with future requests in order to improve performance. Since caching implies sharing, you should always be aware of the concurrency issues involved, which is why I tried to handle most of these issues in this generic utility.
