Evaluation Guide

Scalability

Genus is built to scale from small proof-of-concept solutions to large, mission-critical enterprise applications. Its cloud-native microservices architecture, based on Kubernetes, automatically adapts to workload demands in real time.

Genus scales horizontally and vertically using Kubernetes, Helm, and containerized microservices, optimizing performance, resilience, and resource utilization.

  • Scaling out (horizontal scaling) in Genus increases the number of active microservice instances to handle higher loads via Kubernetes features:

    • Pod Replication: To distribute workload efficiently, additional replicas of a microservice (pod) are deployed. The number of replicas is configured at deployment time and can be adjusted at runtime.

    • Autoscaling: Kubernetes Horizontal Pod Autoscaler (HPA) dynamically adjusts the pod count based on resource consumption, optimizing performance under varying loads.

    • Node Scaling: Additional Kubernetes nodes can be added to increase cluster-wide capacity. While pod replication is the primary scaling method, node scaling may be relevant when expanding environments or handling sustained high demand.

    Genus supports deployment across multiple cloud providers (e.g., Azure, AWS, Google Cloud) and on-premises Kubernetes clusters, ensuring flexibility in scaling strategies.
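As an illustration of the pod replication and autoscaling behavior described above, both are expressed through standard Kubernetes manifests. The following is a minimal sketch; the service name `order-service` and the thresholds are hypothetical, not part of a Genus deployment.

```yaml
# Hypothetical HPA for a microservice named "order-service".
# Kubernetes keeps between 2 and 10 replicas running, adding pods
# when average CPU utilization across replicas exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The `minReplicas` floor guarantees baseline capacity and redundancy, while `maxReplicas` caps resource consumption under load spikes.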

  • Vertical scaling in Genus means increasing CPU and memory for existing microservices to handle increased workloads rather than immediately scaling out by adding pods. Kubernetes supports this through:

    • Adjusting CPU and Memory Requests and Limits: Each microservice is assigned a guaranteed CPU and memory request, with configurable upper limits. High-demand services can be allocated more resources.

    • Rescheduling Pods for Optimal Resource Allocation: Kubernetes can reassign pods to nodes with sufficient capacity, optimizing workload distribution when resource requests change.

    • Dynamic Resource Adjustments: Requests and limits can be manually set or automatically adjusted using tools such as Vertical Pod Autoscaler (VPA) to match demand or optimize performance dynamically.

    • Underlying Infrastructure Upgrades: The infrastructure itself can be scaled up if needed. For example, memory can be added to nodes, nodes can be replaced with more powerful instances, or network capacity can be upgraded.

    Scaling up is particularly useful when specific microservices require more processing power or memory due to increased usage, without immediately increasing the number of pods or expanding the overall cluster size.
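Requests and limits as described above are set per container in the pod specification. The following fragment is illustrative only; the service name and values are hypothetical:

```yaml
# Hypothetical container resource settings for one microservice.
# "requests" is the guaranteed baseline the scheduler reserves;
# "limits" is the ceiling a container may consume before Kubernetes
# throttles its CPU or terminates it for exceeding memory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: report-service
spec:
  template:
    spec:
      containers:
        - name: report-service
          resources:
            requests:
              cpu: "500m"       # half a CPU core guaranteed
              memory: "512Mi"
            limits:
              cpu: "2"          # may burst up to two cores
              memory: "2Gi"
```

Raising these values for a specific service is the essence of scaling up: the same number of pods, each with more capacity.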

  • Genus supports message queues to enhance scalability by handling workloads asynchronously. Instead of requiring tasks to be processed immediately, message queues allow the system to defer execution until resources are available, preventing bottlenecks and improving overall system responsiveness. This is particularly useful when dealing with high-load scenarios or tasks that do not need to be completed instantly.

    • Offloading the Frontend: By placing requests in a queue instead of processing them synchronously, the system can respond to users immediately while executing tasks in the background. This improves user experience and keeps response times fast, even under heavy load.

    • Modular Business Logic: Publish-subscribe (pub-sub) queues allow different components to process messages independently, reducing dependencies between services and enabling more efficient parallel execution.

    • Scalability: Decoupling tasks from direct processing allows workloads to be distributed dynamically across multiple instances, complementing horizontal scaling by ensuring that tasks are processed as resources become available.

    Genus includes built-in support for asynchronous message processing using Redis-based internal queues. In addition, it can subscribe to external queueing systems such as Kafka, IBM MQ, or Redis Streams, making it suitable for integration into existing enterprise messaging infrastructures.
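The frontend-offloading pattern described above can be sketched in a few lines. This is a minimal illustration using an in-memory queue and a background thread in place of Genus's Redis-based internal queues; the task names and handler are hypothetical.

```python
import queue
import threading

task_queue = queue.Queue()
results = []

def worker():
    """Background consumer: processes tasks as resources allow."""
    while True:
        task = task_queue.get()
        if task is None:            # sentinel: shut the worker down
            break
        results.append(f"processed {task}")
        task_queue.task_done()

def handle_request(task):
    """Frontend handler: enqueue the work and return immediately."""
    task_queue.put(task)
    return "accepted"               # the user gets a fast response

t = threading.Thread(target=worker, daemon=True)
t.start()

print(handle_request("generate-report"))  # -> accepted
print(handle_request("send-invoice"))     # -> accepted
task_queue.join()                         # wait for background work
task_queue.put(None)
t.join()
print(results)
```

The caller never waits on the expensive work; it only waits for the enqueue. With an external broker such as Kafka or Redis Streams, the same structure lets consumers scale out independently of the producers.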

  • Genus integrates Redis as an internal caching layer within Kubernetes, enhancing performance by reducing database load and improving response times. Redis operates as a non-persistent database inside the cluster, accelerating frequent data retrieval processes.

  • Genus is designed to handle complex enterprise-scale deployments, including:

    • Multi-Tenant Architecture: Multiple organizations or business units can operate within a single Genus deployment while maintaining strict data and resource isolation.

    • Distributed Deployments: Genus applications can be deployed across multiple Kubernetes clusters or cloud regions to support high availability and disaster recovery scenarios.

    • Integration with Cloud-Native Scaling Tools: Cloud provider-specific scaling mechanisms (such as Azure VMSS or AWS Auto Scaling Groups) can complement Kubernetes' native scaling features.

  • Scaling configurations, such as resource requests and limits, are managed through Helm values and can be adjusted as needed. This applies to both scaling up and scaling out.

    Organizations can monitor resource consumption using tools like Grafana and Prometheus to ensure scaling configurations align with real-world demand.

    Scaling strategies should be continuously evaluated to ensure optimal performance, especially as applications evolve.
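A Helm-managed scaling configuration of the kind described above might look like the following values fragment. The structure and key names are hypothetical, since chart layouts vary; the point is that both scaling dimensions live in one place and take effect on the next `helm upgrade`:

```yaml
# Hypothetical values.yaml fragment for one microservice.
orderService:
  replicaCount: 3                 # scale out: fixed replica baseline
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 12
    targetCPUUtilizationPercentage: 70
  resources:                      # scale up: per-pod allocation
    requests:
      cpu: "250m"
      memory: "256Mi"
    limits:
      cpu: "1"
      memory: "1Gi"
```

Keeping these values under version control makes scaling changes reviewable and reproducible across environments.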