
Kubernetes Optimization Engine (KOE)

Autonomous Kubernetes optimization.


Last updated 1 year ago

Reyki AI - Kubernetes Optimization Engine (KOE)

Reyki AI continuously monitors, analyzes, and optimizes your Kubernetes implementation.

Reyki AI streamlines your Kubernetes environment using these core techniques: workload rightsizing, demand-based scaling, automatic bin packing, automatic discount coverage, resource time-to-live policies, and 24/7 real-time utilization monitoring. Applied together, these techniques significantly curtail wasteful spending and resource idling while bolstering your application's resilience and stability.

  • Automatically align resource sizes to match actual utilization, requests, and limits to enhance application stability and minimize resource waste.

  • Automatic horizontal scaling adjusts the number of pods according to resource usage.

  • Automatic vertical scaling fine-tunes per-pod sizes based on resource demands.

  • Automatically tune cluster size and resources in response to fluctuating workload schedules; seamlessly ramp up during high-demand periods and scale down during off-peak periods.

  • Automatically compact pods onto fewer nodes via optimized resource allocation, thereby freeing up empty nodes for automatic termination and driving significant cost savings.

  • Automatically calculate and apply unit-price discounts based on your cloud service provider's spot rates or usage-based discounts to further maximize savings.

  • Automatically set time-to-live policies for detected temporary Kubernetes resources, environments, and deployments to reduce unnecessary zombie costs.

  • Automatic resource utilization monitoring and tuning minimizes costs without customer effort.
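
Reyki AI applies rightsizing automatically, but for reference, a hand-tuned Deployment excerpt with requests and limits aligned to observed usage might look like the following sketch (the name, image, and resource values are hypothetical):

```yaml
# Illustrative Deployment excerpt: requests/limits sized to actual utilization.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: example.com/web-api:1.0
          resources:
            requests:
              cpu: 250m       # sized to observed steady-state usage
              memory: 256Mi
            limits:
              cpu: 500m       # headroom for bursts without overprovisioning
              memory: 512Mi
```

Requests that track real usage keep the scheduler's bin-packing accurate, while limits cap runaway consumption that would destabilize neighboring pods.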
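
The horizontal scaling described above corresponds to Kubernetes' standard HorizontalPodAutoscaler. As a sketch, an `autoscaling/v2` manifest targeting a hypothetical `web-api` Deployment on average CPU utilization would look like:

```yaml
# Illustrative HPA: adds/removes pods to hold average CPU near 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```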
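
Spot coverage is typically expressed as node selection in the pod spec. The exact label key varies by provider and provisioner (for example, `karpenter.sh/capacity-type` on EKS with Karpenter, or `cloud.google.com/gke-spot` on GKE); this excerpt assumes Karpenter and is a sketch, not Reyki AI's internal mechanism:

```yaml
# Illustrative pod spec excerpt: steer a fault-tolerant workload onto
# discounted spot capacity (label key assumes Karpenter on EKS).
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot
```

Only interruption-tolerant workloads should be pinned to spot capacity this way, since spot nodes can be reclaimed by the provider with short notice.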
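
For temporary resources, Kubernetes has a built-in TTL-after-finished controller for Jobs that illustrates the time-to-live idea. In this sketch (name and image hypothetical), the Job object is garbage-collected one hour after it finishes, so it never lingers as a zombie resource:

```yaml
# Illustrative Job: deleted automatically 3600s after completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report
spec:
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: report
          image: example.com/report-runner:1.0
```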

At a glance:

  • Auto-balance resource utilization, requests, and limits parameters

  • Dynamically scale up and down based on workload schedules

  • Compact pods onto fewer nodes to spin down empty nodes and reduce waste

  • Optimize resource coverage via spot or usage-based discounts

  • Automate K8s time-to-live policies for temporary resources

  • Real-time resource monitoring enables expedient utilization tuning