<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Build in the Open on Aleskandro</title><link>https://aleskandro.com/categories/open-innovation/</link><description>Recent content in Build in the Open on Aleskandro</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><copyright>&lt;p xmlns:cc="http://creativecommons.org/ns#" style="margin:0;padding:0">
The content in this blog is licensed under
&lt;a href="https://creativecommons.org/licenses/by-nc-sa/4.0/?ref=chooser-v1" target="_blank" rel="license noopener noreferrer" style="display:inline-block;">
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International&lt;/a>
&lt;/p>
&lt;p style="margin:0;padding:0;margin-top:5px">
&lt;img style="border:none;display:inline-block;padding:0;margin:0" height=40 src="https://aleskandro.com/img/by-nc-sa.png" alt="">
&lt;/p></copyright><lastBuildDate>Fri, 29 Aug 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://aleskandro.com/categories/open-innovation/index.xml" rel="self" type="application/rss+xml"/><item><title>Bridging Distributed Systems with Earth &amp; Sea Science: A Field-Driven Reading List</title><link>https://aleskandro.com/posts/climate-change-distributed-systems-journals-research/</link><pubDate>Fri, 29 Aug 2025 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/climate-change-distributed-systems-journals-research/</guid><description>&lt;p>I&amp;rsquo;m just back from a 10-day road trip in Scotland. It was a perfect break from work and a wonderful experience with Courtney and Justin.
It&amp;rsquo;s now official that I&amp;rsquo;m moving on from my current team: my multi-arch compute Kubernetes chapter comes to an end, and a new experience around distributed LLM serving will keep me busy at Red Hat within a few weeks.&lt;/p>
&lt;p>But it wasn&amp;rsquo;t just a break-from-work trip - it was a reminder of what&amp;rsquo;s at stake.&lt;/p></description></item><item><title>KubeCon EU 2025 Recap: Kubernetes Meets AI - A New Decade of Cloud Native Development</title><link>https://aleskandro.com/posts/kubecon-2025-london-recap-scheduling-autoscaling-ai-workloads-orchestration/</link><pubDate>Mon, 07 Apr 2025 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/kubecon-2025-london-recap-scheduling-autoscaling-ai-workloads-orchestration/</guid><description>&lt;p>KubeCon + CloudNativeCon Europe 2025 just concluded in London, bringing together thousands of cloud-native engineers, maintainers, and enthusiasts. As a local Distributed Systems Engineer involved in the Kubernetes community and ecosystem, attending in person was both energizing and insightful.&lt;/p>
&lt;p>A central theme emerged across sessions: Kubernetes is rapidly evolving beyond microservices, adapting to support batch workloads, AI/ML training, HPC scenarios, and global-scale multi-cluster deployments. This shift isn&amp;rsquo;t just technical - it&amp;rsquo;s reshaping the cloud-native landscape and redefining how we think about workload orchestration, scheduling, and autoscaling.&lt;/p></description></item><item><title>A perspective on the current and future state of Kubernetes scheduling</title><link>https://aleskandro.com/posts/kubernetes-scheduler-plugins-custom-schedulers-future/</link><pubDate>Sat, 01 Mar 2025 00:19:02 +0200</pubDate><guid>https://aleskandro.com/posts/kubernetes-scheduler-plugins-custom-schedulers-future/</guid><description>&lt;p>Kubernetes scheduling is the brain of the cluster, deciding which node runs each Pod. In this post, I explore advanced scheduling mechanisms in Kubernetes. I start with how the default scheduler works under the hood, then dive into the scheduler-plugins project for extending its capabilities. I review custom schedulers and plugins in the ecosystem that focus on cost savings, SLA optimization, and performance tuning, noting which are community-supported or vendor-specific. Finally, I look at future trends in Kubernetes scheduling, from AI-driven algorithms to multi-cluster and energy-aware schedulers. This post brings together the key ideas from the full series I wrote on Kubernetes scheduling.&lt;/p></description></item><item><title>FOSDEM 2025 Recap</title><link>https://aleskandro.com/posts/fosdem-2025-recap/</link><pubDate>Sat, 08 Feb 2025 00:19:03 +0200</pubDate><guid>https://aleskandro.com/posts/fosdem-2025-recap/</guid><description>&lt;p>&lt;a href="https://fosdem.org/2025/">FOSDEM 2025&lt;/a>, one of the largest open-source conferences globally, concluded last week, featuring an array of talks, workshops, and community events. 
Held annually in Brussels, Belgium, FOSDEM gathers developers, contributors, and enthusiasts from around the world to explore the latest trends, projects, and innovations in the open-source ecosystem.&lt;/p>
&lt;p>Having missed the in-person event in 2024 due to my relocation to London, I was excited to attend FOSDEM 2025 in person and reconnect with the vibrant open-source community. This year, I boarded the Eurostar from London to Brussels, eager to engage with fellow contributors, maintainers, and developers.&lt;/p></description></item><item><title>Kubernetes Scheduling: Future Trends</title><link>https://aleskandro.com/posts/kubernetes-scheduler-p5-future-trends-scheduling-autoscaling/</link><pubDate>Sat, 01 Feb 2025 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/kubernetes-scheduler-p5-future-trends-scheduling-autoscaling/</guid><description>&lt;p>The Kubernetes scheduler is the brain of the cluster, deciding which node runs each Pod. This is the final post in a series where I explore advanced scheduling mechanisms in Kubernetes. In this one, I look ahead at emerging trends and research directions that could shape the future of Kubernetes scheduling.
I discuss how new heuristics, processing architectures, and AI and machine learning could drive smarter placement decisions, how multi-cluster and federated schedulers might support global workloads, and how energy-aware scheduling could make Kubernetes more sustainable. I also explore upcoming ideas like dynamic scheduler profiles, carbon-aware policies, and new architectures for resilience and scalability. From smarter algorithms to environmental impact, Kubernetes scheduling is evolving into a platform for innovation - and the road ahead looks promising.&lt;/p></description></item><item><title>Kubernetes Scheduling: community schedulers in the ecosystem</title><link>https://aleskandro.com/posts/kubernetes-scheduler-p4-custom-plugins-ecosystem/</link><pubDate>Sun, 05 Jan 2025 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/kubernetes-scheduler-p4-custom-plugins-ecosystem/</guid><description>&lt;p>The Kubernetes scheduler is the brain of the cluster, deciding which node runs each Pod. This is the fourth post in a series where I explore advanced scheduling mechanisms in Kubernetes. In this one, I give a broad overview of community-driven and vendor-supported custom schedulers built on top of Kubernetes. I focus on how these schedulers and plugins target specific goals like cost savings, SLA optimization, and performance tuning. I cover batch and ML-focused schedulers like Volcano, YuniKorn, and Koordinator, as well as research-driven systems like Poseidon/Firmament. I also look at cost-optimization strategies using bin-packing, spot instances, and descheduling, along with SLA-driven and topology-aware scheduling techniques.
Finally, I reflect on the balance between community projects and vendor platforms, and how Kubernetes’s extensibility allows users to tailor scheduling to their workload and infrastructure needs.&lt;/p></description></item><item><title>Kubernetes Scheduling: the scheduler-plugins project</title><link>https://aleskandro.com/posts/kubernetes-scheduler-p3-plugins/</link><pubDate>Sat, 21 Dec 2024 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/kubernetes-scheduler-p3-plugins/</guid><description>&lt;p>The Kubernetes scheduler is the brain of the cluster, deciding which node runs each Pod. This is the third post in a series where I explore advanced scheduling mechanisms in Kubernetes. In this one, I focus on the scheduler-plugins project by SIG Scheduling. I explain how this project extends the Kubernetes Scheduling Framework with a collection of out-of-tree plugins that enable advanced behaviors like gang scheduling, NUMA-aware placement, load-aware scoring, and more. I walk through key plugins such as Capacity Scheduling, Coscheduling, Trimaran, and Network-Aware Scheduling, and show how they solve real-world scheduling problems. I also cover how to integrate these plugins into your cluster using a custom scheduler or as a secondary scheduler, and discuss the tradeoffs of each approach.&lt;/p></description></item><item><title>Kubernetes Scheduling: the scheduling framework</title><link>https://aleskandro.com/posts/kubernetes-scheduler-p2-scheduling-framework/</link><pubDate>Sat, 16 Nov 2024 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/kubernetes-scheduler-p2-scheduling-framework/</guid><description>&lt;p>The Kubernetes scheduler is the brain of the cluster, deciding which node runs each Pod. This is the second post in a series where I explore advanced scheduling mechanisms in Kubernetes. In this one, I give an overview of the current state of the Kubernetes scheduling framework.
I explain how Kubernetes scheduling works as a batch-oriented process that handles one Pod at a time. I walk through the evolution from the older predicates and priorities model to the modern Scheduling Framework, where each step in the scheduling cycle is an extension point for plugins. I also cover extenders, PreEnqueue plugins, and SchedulingGates, which enable more flexible and complex scheduling workflows. Finally, I highlight projects like Kueue and the Multiarch Tuning Operator that build on these features to support AI, HPC, and multi-architecture workloads.&lt;/p></description></item><item><title>Kubernetes Scheduling: Under the Hood</title><link>https://aleskandro.com/posts/kubernetes-scheduler-p1-under-the-hood/</link><pubDate>Thu, 10 Oct 2024 00:00:00 +0000</pubDate><guid>https://aleskandro.com/posts/kubernetes-scheduler-p1-under-the-hood/</guid><description>&lt;p>The Kubernetes scheduler is the brain of the cluster, deciding which node runs each Pod. This post is part of a series where I explore advanced scheduling mechanisms in Kubernetes. In this part, I focus on the current state of the default scheduler.&lt;/p>
&lt;p>I walk through how the default scheduler works under the hood, breaking it down into queueing, filtering, scoring, binding, and preemption. I explain how Pods move through different queues, how the scheduler picks viable nodes, and how it scores and selects the best one. I also touch on newer features like QueueingHint and PreEnqueue plugins, and discuss how the scheduler balances performance and fairness at scale. If you&amp;rsquo;re curious about how Kubernetes makes scheduling decisions, this post offers a detailed look at the mechanisms behind it.&lt;/p></description></item><item><title>A journey to an OSTree container native distributed desktop configuration system with Fedora Silverblue/Kinoite</title><link>https://aleskandro.com/posts/rpm-ostree-container-native-fedora-silverblue-kinoite-dual-boot/</link><pubDate>Mon, 12 Dec 2022 00:19:02 +0200</pubDate><guid>https://aleskandro.com/posts/rpm-ostree-container-native-fedora-silverblue-kinoite-dual-boot/</guid><description>&lt;div class="no-text">
&lt;pre class="mermaid">
flowchart TD
A("cron-like schedule")
B("manual triggers")
C("push")
D["Build BaseOS Image"]
E["Build OSContent Image"]
A --> D
B --> D
B --> E
C --> E
D --> E
&lt;/pre>
&lt;/div>
&lt;p>The traditional desktop delivery model is based on a large number of distributed PCs running the operating system and the desktop applications.
Managing traditional desktop environments is incredibly challenging and expensive.
Tasks like installations, configuration changes, and security measures require time-consuming procedures and dedicated deskside support.
Users in the Free and Open Source Software (FOSS) community have always been keen to implement strategies for delivering their system configuration so that they (1) can share it with the community, and (2) distribute it across multiple workstations or servers. Dotfiles and Ansible repositories are the usual answers for these users: the former version user-specific configurations, while the latter deliver the initial system-level setup to freshly installed OSes and update existing ones. Alternatively, and especially in enterprise desktop environments with multi-user configurations, the delivery of user-specific configuration can be managed through technologies like Kerberos, LDAP and Active Directory. However, the latter approach still requires a centralized model that is not always feasible or optimal, and is often overkill for small setups like single-user workstations.&lt;/p></description></item></channel></rss>