Instead of one instance per VM, you can now stack multiple instances behind a Redis proxy. There's another big change: although you still use two nodes, both nodes run a mix of primary and replica processes. A primary instance uses more resources than a replica, so this approach lets you get the best possible performance out of your VMs. At the same time, this mix of primary and replica nodes automatically clusters data to speed up access and enable support for geo-replication across regions.
Azure Managed Redis has two different clustering policies, OSS and Enterprise. The OSS option is the same as that used by the community edition, with direct connections to individual shards. This works well, with close-to-linear scaling, but it does require specific support in any client libraries you're using in your code. The alternative, Enterprise, works through a single proxy node, simplifying connection requirements for clients at the expense of performance.
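The practical difference shows up in how clients connect. A rough sketch with `redis-cli` (the hostname, port, and key below are placeholders, not real endpoints): under the OSS policy the client must be cluster-aware, following `MOVED` redirects to individual shards, while under the Enterprise policy a single proxy endpoint accepts ordinary, non-cluster connections.

```shell
# OSS cluster policy: client must be cluster-aware.
# -c tells redis-cli to follow MOVED/ASK redirects to the shard
# that owns the key's hash slot.
redis-cli -c --tls -h mycache.region.redis.azure.net -p 10000 -a "$ACCESS_KEY" GET user:42

# Enterprise cluster policy: one proxy endpoint, so a standard
# (non-cluster) connection works unchanged.
redis-cli --tls -h mycache.region.redis.azure.net -p 10000 -a "$ACCESS_KEY" GET user:42
```

The same distinction applies in client libraries: a cluster-aware class (such as redis-py's `RedisCluster`) is needed for the OSS policy, while any plain client works against the Enterprise proxy.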
Why would you use Redis in an application? In many cases it's a tool for keeping regularly accessed data cached in memory, allowing fast read/write access. It's used anywhere you need a fast key/value store with support for modern features such as vector indexing. Using Redis as an in-memory vector index helps keep latency to a minimum in AI applications based on retrieval-augmented generation (RAG). Cloud-native applications can use Redis as a session store to manage state across container applications, and AI applications can use Redis as a cache for recent output, using it as semantic memory in frameworks like Semantic Kernel.
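The caching use case usually follows the cache-aside pattern: check Redis first, and only compute (or fetch from the database) on a miss, writing the result back with a TTL. A minimal sketch below; the `InMemoryStub` class is a hypothetical stand-in with a Redis-like `get`/`set` interface so the example runs without a server — in practice you would pass a real client such as redis-py's `redis.Redis`.

```python
import json
import time


class InMemoryStub:
    """Stand-in for a Redis client exposing get/set with an 'ex' TTL,
    used here only so the sketch is self-contained."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        return value if time.monotonic() < expires_at else None

    def set(self, key, value, ex):
        self._data[key] = (value, time.monotonic() + ex)


def cache_aside(client, key, compute, ttl=300):
    """Return the cached value for key if present; otherwise call
    compute(), store the result with a TTL, and return it."""
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)
    value = compute()
    client.set(key, json.dumps(value), ex=ttl)
    return value


client = InMemoryStub()
calls = []


def expensive_lookup():
    calls.append(1)  # track how often the slow path actually runs
    return {"user": "ada", "role": "admin"}


first = cache_aside(client, "user:42", expensive_lookup)
second = cache_aside(client, "user:42", expensive_lookup)  # cache hit
```

The second call never reaches `expensive_lookup`; it is served from the cache until the TTL expires, which is the latency win the article describes.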