{
  "version": "https://jsonfeed.org/version/1",
  "title": "Kong on LLBBL Blog",
  "icon": "https://avatars.micro.blog/avatars/2023/40/125738.jpg",
  "home_page_url": "https://llbbl.blog/",
  "feed_url": "https://llbbl.blog/feed.json",
  "items": [
      {
        "id": "http://llbbl.micro.blog/2026/04/24/how-kong-actually-works-in.html",
        "title": "How Kong Actually Works in Kubernetes",
        "content_html": "<p>At some point with microservices in Kubernetes, basic Ingress routing stops being enough. Kong is an interesting router that I would like to try in the future.</p>\n<p>It&rsquo;s an API Gateway built on top of NGINX and OpenResty. It operates at the infrastructure layer, managing the actual HTTP traffic flowing into your cluster. Drop it into a Kubernetes environment and it acts as an Ingress Controller. It does that job really well.</p>\n<h2 id=\"the-ingress-controller-problem\">The Ingress Controller Problem</h2>\n<p>Let&rsquo;s review what an ingress controller is, in case you&rsquo;re unfamiliar with its job in Kubernetes. An <code>Ingress</code> resource is just a set of routing rules. &ldquo;Send traffic for <code>api.example.com/v1</code> to the <code>user-service</code> pod.&rdquo; Kubernetes doesn&rsquo;t actually route traffic itself. It needs a controller to read those rules and move the packets.</p>\n<p>The Kong Ingress Controller (KIC) runs as a pod inside your cluster. It watches the Kubernetes API server for changes to Ingress resources, Services, and Endpoints. When someone deploys a new app and creates an Ingress rule, KIC picks it up, translates the Kubernetes config into Kong&rsquo;s native format, and reloads the proxy. No manual intervention.</p>\n<h2 id=\"how-traffic-actually-flows\">How Traffic Actually Flows</h2>\n<p>When external traffic hits your cluster, the path looks like this:</p>\n<ol>\n<li><strong>External Load Balancer</strong> forwards traffic to the Kong proxy pods</li>\n<li><strong>Kong evaluates</strong> the incoming request against its routing table (headers, paths, hostnames)</li>\n<li><strong>Plugins execute</strong> before routing, handling cross-cutting concerns at the edge instead of inside your application code</li>\n<li><strong>Upstream routing</strong> sends traffic directly to Pod IPs, bypassing <code>kube-proxy</code> for better performance</li>\n</ol>\n<p>That plugin step is where Kong really earns its keep. Rate limiting, API key auth, mTLS, request transformation. All of that happens at the gateway layer so your services don&rsquo;t have to think about it.</p>\n<h2 id=\"crds-make-it-actually-useful\">CRDs Make It Actually Useful</h2>\n<p>Standard Kubernetes Ingress is pretty limited. Host-based routing, path-based routing, and that&rsquo;s about it. Kong extends this with Custom Resource Definitions:</p>\n<ul>\n<li><strong>KongPlugin</strong> lets you attach behaviors to routes or services. Deploy a manifest to enforce rate limits, require API keys, or add mTLS to a specific endpoint.</li>\n<li><strong>KongConsumer</strong> manages user identities and credentials directly in Kubernetes, so you can tie routing rules or rate limits to specific clients.</li>\n</ul>\n<p>This means your API gateway configuration lives right alongside your application manifests. Version controlled, reviewable, deployable through your normal CI/CD pipeline.</p>\n<h2 id=\"skip-the-database\">Skip the Database</h2>\n<p>Kong used to require PostgreSQL or Cassandra to store its routing config. In modern Kubernetes deployments, you almost always run it in <strong>DB-less mode</strong> instead.</p>\n<p>Why? Kubernetes already has <code>etcd</code> as its source of truth for cluster state. Running a second database just for the API gateway adds overhead and failure modes you don&rsquo;t need. In DB-less mode, Kong stores its configuration entirely in memory. The Ingress Controller reads state from Kubernetes and pushes updates to the proxy dynamically.</p>\n<p>This is one of those decisions that sounds minor but changes everything about how you operate Kong. No database backups to worry about. No schema migrations. Your gateway config is just Kubernetes manifests managed through GitOps.</p>\n<h2 id=\"observability-at-the-edge\">Observability at the Edge</h2>\n<p>Sitting at the edge of the cluster, Kong is perfectly positioned to capture metrics, logs, and traces. With the right plugins, it exports traffic data (latency, status codes, request volumes) directly into whatever observability stack you&rsquo;re running.</p>\n<p>You get visibility across your entire microservice architecture without instrumenting every individual service.</p>\n<hr>\n<p>Kong isn&rsquo;t the only Ingress controller out there, but the combination of plugin architecture, DB-less mode, and CRD-based configuration makes it a solid choice if you need more than basic routing. If you&rsquo;re already running Kubernetes and find yourself writing the same auth and rate-limiting logic across multiple services, moving that to the gateway layer is worth your time.</p>\n<p>I&rsquo;d appreciate a follow. You can subscribe with your email below. The emails go out once a week, or you can find me on Mastodon at <a href=\"https://micro.blog/llbbl?remote_follow=1\">@logan@llbbl.blog</a>.</p>\n",
        "date_published": "2026-04-24T10:00:00-05:00",
        "url": "https://llbbl.blog/2026/04/24/how-kong-actually-works-in.html",
        "tags": ["DevOps","Kubernetes","Kong","Infrastructure"]
      }
  ]
}
