Where is my cache? Architectural patterns for caching microservices
Everybody needs caching! But where exactly should you place it in your system? Inside your application or as a layer in front of it? Inside or outside the container? In the era of Cloud Native and microservices, these questions get even more complicated. In this session I'll present different architectural patterns for distributed caching: Embedded, Client-Server, (Kubernetes) Sidecar, and Reverse HTTP Proxy Caching. During this talk you'll learn:
- What the design options are for including a caching layer
- How to apply a caching layer in Istio (and Service Mesh in general)
- How to use distributed HTTP caching without updating your microservices
- Common pitfalls when setting up caching for your system
By Nazarii Cherkas
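As a rough illustration of the first two patterns (not part of the session materials), the sketch below contrasts Embedded caching, where the cache member runs inside the application process, with Client-Server caching, where the application is only a client of an external cluster. It assumes Hazelcast 3.x as the cache; the map name "product-cache" and the defaults used for cluster discovery are placeholder examples.

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class CachePatterns {

    // Embedded pattern: the cache member runs inside the application's JVM,
    // so cached data lives in the same process (and container) as the service.
    static IMap<String, String> embeddedCache() {
        HazelcastInstance member = Hazelcast.newHazelcastInstance();
        return member.getMap("product-cache"); // placeholder map name
    }

    // Client-Server pattern: the application only holds a client; the cache
    // cluster runs as a separate deployment (its own pods, or a sidecar)
    // and is reached over the network.
    static IMap<String, String> clientServerCache() {
        HazelcastInstance client = HazelcastClient.newHazelcastClient();
        return client.getMap("product-cache");
    }

    public static void main(String[] args) {
        IMap<String, String> cache = embeddedCache();
        cache.put("42", "cached value");
        System.out.println(cache.get("42"));
    }
}
```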
Low Latency Data Processing in the Era of Serverless
We have entered the era of Serverless. Do you want to spend your time growing complex infrastructure and administering it, or do you want to focus on validating your hypothesis and delivering what's needed right here and right now? This is a question you should ask yourself in 2019. Effectiveness is the answer. In this talk, we will see how easily you can build a low-latency processing layer for Lambda Functions using Hazelcast Cloud, a fully managed Hazelcast IMDG service. You really want to attend this talk if you already have some background in Cloud Computing and want to learn more about Serverless and high-performance state management for Lambda Functions.
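As a minimal sketch of the idea (assuming Hazelcast IMDG 3.x and the AWS Lambda Java runtime), a Lambda handler can keep its state in a remote Hazelcast map instead of local memory. The cluster name, address, and map name below are placeholders; a real Hazelcast Cloud cluster is configured with the connection details and discovery token shown in the Cloud console.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class CounterHandler implements RequestHandler<String, Long> {

    // Reuse the client across warm invocations so the connection cost
    // is paid only once per cold start.
    private static final HazelcastInstance client = connect();

    private static HazelcastInstance connect() {
        ClientConfig config = new ClientConfig();
        // Placeholder settings; Hazelcast Cloud provides the real cluster
        // name and discovery token in its console.
        config.getGroupConfig().setName("my-cluster");
        config.getNetworkConfig().addAddress("10.0.0.1:5701");
        return HazelcastClient.newHazelcastClient(config);
    }

    @Override
    public Long handleRequest(String key, Context context) {
        // State lives in the remote IMDG cluster, so every Lambda instance
        // shares the same view with in-memory latencies.
        // (The read-increment-write below is illustrative, not atomic.)
        IMap<String, Long> counters = client.getMap("request-counters");
        Long previous = counters.get(key);
        long next = (previous == null ? 0L : previous) + 1;
        counters.put(key, next);
        return next;
    }
}
```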