Stepping into Cloud Native Day Bergen 2025 was a thrilling experience. Now in its second year, the conference is quickly establishing itself as a must-attend event in the Nordic tech landscape, and it was great to be part of the buzz. While I didn’t attend the hands-on workshops on Day 1, Day 2 delivered a tightly packed schedule of eleven thought-provoking talks that cut across the most critical areas of the modern cloud-native ecosystem.
The agenda I compiled highlighted a clear industry trend: the maturation of cloud-native practices. We’re moving past the initial excitement of “lift and shift” and grappling with the challenges of scale, compliance, developer experience, and cost optimisation. My day was perfectly segmented into four dominant themes: enhancing the developer experience through platforms, fortifying the software supply chain, mastering operational excellence, and, most uniquely, tackling the strategic vision of digital sovereignty in a Norwegian context.
The first core thread weaving through the day focused on the developer experience and the rise of Platform Engineering. This concept has moved from a niche term to a foundational necessity, and the session, Platform Engineering: La utviklere være utviklere (Let Developers Be Developers), provided a powerful manifesto for this shift.
The central tenet is reducing cognitive load. Platform teams are now recognised as a product team whose customers are internal developers. By building Internal Developer Platforms (IDPs) and offering “golden paths” (pre-approved, self-service infrastructure templates), organisations can abstract away the complexity of Kubernetes, security policy, and observability setup. The talk emphasised that the goal is to eliminate boilerplate and toil, allowing product teams to focus 100% of their energy on delivering business value rather than wrestling with YAML files or integrating security tools. This shift not only boosts velocity but is also a critical tool for talent retention, as highlighted by statistics showing a direct correlation between good DX and lower attrition.
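To make the “golden path” idea concrete, here is a minimal sketch of what a developer-facing abstraction could look like. The manifest below is purely hypothetical (the API group and fields are my own invention, not something shown in the talk): a developer declares intent in a handful of lines, and the platform expands it into Deployments, network policies, and observability wiring behind the scenes.

```yaml
# Hypothetical golden-path manifest: the developer declares intent,
# the platform fills in the Kubernetes, security, and observability details.
apiVersion: platform.example.com/v1alpha1
kind: WebService
metadata:
  name: checkout
  labels:
    team: payments
spec:
  image: registry.example.com/payments/checkout:1.4.2
  replicas: 3
  port: 8080
  tier: standard          # maps to pre-approved resource requests/limits
  exposure: internal      # platform generates Ingress and NetworkPolicy
  observability:
    slo: "99.9"           # platform provisions dashboards and alerts
```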
Complementing this human-centric approach was the forward-looking session, AI-Driven Developer Experience in the Cloud-Native Ecosystems. This session explored how Generative AI and Machine Learning are becoming integral components of the IDP itself. From AI-powered copilots that generate and debug infrastructure-as-code (IaC) to AIOps tools that automatically triage incidents and suggest remediation steps, AI is set to further simplify the cloud-native journey. The discussion stressed a future where AI handles much of the boilerplate, allowing the Platform Engineer to focus on building the complex, differentiating business logic into the platform, and the product developer to simply consume it.
A significant portion of the day was dedicated to a topic that is becoming increasingly non-negotiable: security and compliance. The talks demonstrated a clear industry consensus that security must be an invisible, automated function of the platform, not a manual gate at the end of the CI/CD pipeline.
Three talks specifically dove into this domain, forming a powerful triptych on modern security posture. Guarding the Supply Chain was a sharp reminder of the lessons learned from recent large-scale security incidents. It detailed the necessity of adopting practices like generating and signing SBOMs (Software Bill of Materials), implementing strong identity for software artifacts using tools like Sigstore, and applying policy-as-code solutions (such as Kyverno or OPA Gatekeeper) to ensure only trusted, validated code can ever reach production. The message was clear: if you cannot prove the provenance of your deployed code, you cannot trust it.
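As a rough illustration of the policy-as-code piece, a Kyverno ClusterPolicy along these lines can block any image that does not carry a valid Sigstore signature. The registry and OIDC identity below are placeholders and you would tune the rule to your own build pipeline, but the shape is representative:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce   # reject, rather than merely audit, unsigned images
  webhookTimeoutSeconds: 30
  rules:
    - name: require-signed-images
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "registry.example.com/*"          # placeholder registry
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/example-org/*"          # placeholder CI identity
                    issuer: "https://token.actions.githubusercontent.com"
```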
The strategic and regulatory perspective was covered in Pragmatic Guide to Platforms and Compliance. This session served as a bridge between the Platform Engineering discussions and the security domain. It showed how a well-designed IDP naturally enforces compliance. By standardising CI/CD pipelines, resource configurations, and network policies, the platform makes it hard for developers to accidentally violate corporate or regulatory standards. The “pragmatic” part was about prioritising compliance needs based on risk and regulatory requirements (like GDPR or ISO 27001), rather than attempting to achieve a perfect, unworkable security utopia from day one.
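One small example of this “compliance by default” idea: many platform teams stamp a default-deny NetworkPolicy into every namespace they provision, so workloads can only talk to what has been explicitly allowed. A sketch, with an illustrative namespace name:

```yaml
# Default-deny policy a platform could apply to every tenant namespace,
# so any required traffic must be explicitly opened by a follow-up policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-payments   # illustrative namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```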
Finally, From Paper to Practice: Implementing NIST Cloud Security Guidance offered an operational deep-dive into tackling heavyweight regulatory frameworks. NIST (whether the CSF or SP 800-53) can feel dauntingly abstract. The speaker broke down how to translate specific NIST controls into automated, verifiable code and configuration within a Kubernetes environment. The focus was on automation of auditing, continuous monitoring, and turning security requirements into testable acceptance criteria, transforming compliance from a quarterly audit nightmare into a continuous, observable status.
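To give a flavour of turning a written control into something verifiable, a control in the spirit of least privilege (for example, NIST SP 800-53 AC-6) could be expressed as a cluster policy that rejects privileged containers and carries its control mapping as an annotation for auditors. This is my own illustrative mapping, not one presented in the talk:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    # Illustrative traceability back to the written control
    compliance.example.com/control: "NIST SP 800-53 AC-6"
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-privileged
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not permitted (AC-6)."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```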
No cloud-native conference is complete without addressing the operational challenges of running complex systems at scale. These sessions tackled the core mechanics of what it takes to keep the cluster humming.
The day started with a foundational session, CloudNativePG 101, which served as an excellent introduction to managing stateful workloads, specifically PostgreSQL, on Kubernetes. The session detailed how the CloudNativePG operator simplifies challenging operations like provisioning, failover, backup, and point-in-time recovery (PITR). It underscored the argument that modern operators have matured to a point where running a highly-available, production-grade database on Kubernetes is now a preferred, declarative approach rather than a heroic operational feat.
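For readers who haven’t seen the operator, the declarative model really is this compact. A sketch of a three-instance, highly available cluster with WAL-based backups to object storage (bucket, secret, and key names are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 3              # one primary, two replicas; the operator handles failover
  storage:
    size: 20Gi
  backup:
    retentionPolicy: "30d"
    barmanObjectStore:
      destinationPath: s3://example-backups/app-db   # placeholder bucket
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: ACCESS_SECRET_KEY
```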
The conversation then shifted from deployment to monitoring with Observability is a Team Sport! This was arguably one of the most culturally insightful talks. It argued that while we have great tools for collecting metrics, logs, and traces, true observability requires a cultural shift. The “team sport” element emphasises cross-functional collaboration: developers need to instrument their code for business context, SREs need to build shared dashboards, and product managers need to understand how SLOs impact user experience. It’s about ensuring the actionable insights derived from the ‘three pillars’ are used by everyone, fostering a culture of continuous operational improvement and shared ownership.
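One concrete artefact that makes observability a shared concern is an SLO-based alert: the threshold encodes a product decision, the query encodes the developer’s instrumentation, and the alert routes to whoever owns the user experience. A rough Prometheus rule for a 99.9% availability SLO might look like this (the metric and job names are assumptions on my part):

```yaml
groups:
  - name: checkout-slo
    rules:
      - alert: CheckoutErrorBudgetBurnHigh
        # Fast burn: error ratio above ~14.4x the budget allowed by a 99.9% SLO
        expr: |
          (
            sum(rate(http_requests_total{job="checkout", code=~"5.."}[1h]))
            /
            sum(rate(http_requests_total{job="checkout"}[1h]))
          ) > (14.4 * 0.001)
        for: 5m
        labels:
          severity: page
        annotations:
          summary: "Checkout is burning its 99.9% availability error budget too fast"
```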
Operational efficiency was then grounded in cost control with KRO-nicles of Kubernetes: Taming Resources the Open Source Way. This talk dove into Kubernetes Resource Optimisation (KRO), a topic vital in an era of soaring cloud bills. It explored open-source techniques for rightsizing workloads, covering the use of tools like the Vertical Pod Autoscaler (VPA) and custom controllers to dynamically adjust resource requests and limits based on actual usage, thereby reducing waste without sacrificing stability. It was a clear call to action for platform teams to treat cost optimisation as a first-class citizen of their operational strategy.
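The mechanics of rightsizing are reassuringly boring. A Vertical Pod Autoscaler object like the sketch below (the target workload and bounds are illustrative) watches actual usage and either recommends or applies new resource requests:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: checkout-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout          # illustrative workload
  updatePolicy:
    updateMode: "Off"       # recommendation-only to start; switch to "Auto" once trusted
  resourcePolicy:
    containerPolicies:
      - containerName: "*"
        minAllowed:
          cpu: 50m
          memory: 64Mi
        maxAllowed:
          cpu: "2"
          memory: 2Gi
```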
Finally, the talk APIM to Kong: Lessons from a Real-World Migration provided a necessary dose of reality. In the theoretical world of cloud-native, everything is greenfield. In reality, most work is migration. This session walked through a practical case study, outlining the technical and organisational lessons learned when moving from a legacy API Management (APIM) platform to Kong Gateway. It highlighted the importance of a phased rollout, canary testing for the control and data planes, and meticulous planning to avoid customer impact. A great reminder that even with the best tools, migrations require deep engineering discipline.
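On the tooling side, one thing that makes a phased rollout manageable is that Kong’s configuration can live in version control as declarative config, so routes can be moved over one at a time and diffed before each sync. A minimal decK-style snippet, with service names and upstream URLs made up for illustration:

```yaml
_format_version: "3.0"
services:
  - name: orders-api                                # first service cut over to Kong
    url: http://orders.internal.example.com:8080    # placeholder upstream
    routes:
      - name: orders-v1
        paths:
          - /api/v1/orders
        strip_path: false
        # remaining APIs stay on the legacy APIM until their own cut-over window
```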
The conference saved a strategic and locally relevant discussion for the end of the day: Building a Sovereign Cloud: What are we missing to solve Norway’s digital future?
This talk zoomed out to the national level, tackling the geopolitical and technological challenge of ensuring data residency and operational control for critical Norwegian public and private sector data. The discussion was less about a single piece of technology and more about the architectural, regulatory, and skills-gap challenges involved. It addressed the need for open-source solutions, domestic hyperscale capabilities, and regulatory clarity to build a truly sovereign, trustworthy, and competitive cloud infrastructure within Norway’s borders. This session served as a powerful reminder that the technical debates we had throughout the day (Platform Engineering, Supply Chain Security, and Compliance) are the very building blocks required to meet these ambitious national goals.
Walking out of Cloud Native Day Bergen 2025, I was left with a powerful sense of an ecosystem that has truly come of age. The focus has moved from how to use Kubernetes to how to build sustainable, secure, and compliant businesses on top of it.
The major takeaway is synthesis: Platform Engineering is the vehicle, Security and Compliance are the guardrails, and Observability is the dashboard. By automating the mundane and focusing on developer flow, Norwegian companies, along with the global community, are well-positioned to tackle the next generation of cloud-native challenges, including the strategic imperative of digital sovereignty. A massive thanks to the organisers and speakers for an exceptionally engaging and technically rich day. I’m already looking forward to seeing how these themes evolve at Cloud Native Day Bergen 2026!