<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Mike Davies Technology]]></title><description><![CDATA[Technology thoughts, stories, and ideas.]]></description><link>https://mkdavies.com/</link><image><url>https://mkdavies.com/favicon.png</url><title>Mike Davies Technology</title><link>https://mkdavies.com/</link></image><generator>Ghost 5.70</generator><lastBuildDate>Mon, 13 Apr 2026 13:11:08 GMT</lastBuildDate><atom:link href="https://mkdavies.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Self Hosting With an Enterprise-Level Approach]]></title><description><![CDATA[Put your heart into doing the little things right.]]></description><link>https://mkdavies.com/self-hosting-with-an-enterprise-level-approach/</link><guid isPermaLink="false">66fc2e19f16f470001cb1c74</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Tue, 01 Oct 2024 14:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1666875753105-c63a6f3bdc86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGRhdGElMjBhbmFseXNpc3xlbnwwfHx8fDE3Mjc4MDQ0Njh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1666875753105-c63a6f3bdc86?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDJ8fGRhdGElMjBhbmFseXNpc3xlbnwwfHx8fDE3Mjc4MDQ0Njh8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Self Hosting With an Enterprise-Level Approach"><p>As someone who thrives on tech challenges, I embarked last year on a meaningful project: helping the board of directors for my local little league make better, 
data-driven decisions about participation. You might wonder why a youth sports league needs data analysis, but in today&#x2019;s world, data is the backbone of smart decision-making, even in community activities. Whether it&#x2019;s understanding which neighborhoods have the most sign-ups or tracking participation by age group, this data helps guide important choices. And since data is as valuable here as it is in any enterprise, I knew I had to treat the project with that same level of care and professionalism.</p><h2 id="from-a-virtualization-playground-to-a-new-superset-vm">From a Virtualization Playground to a New Superset VM</h2><p>I&#x2019;ve always viewed my home lab as a serious environment for experimentation. Sure, I test new things for fun, but the approach is always professional&#x2014;after all, what&#x2019;s the point of learning if you&#x2019;re not doing it right? <a href="https://mkdavies.com/my-staycation-with-proxmox-sun-silence-and-servers/" rel="noreferrer">My Proxmox server</a> has become a foundational part of my home lab, providing a reliable platform to host virtual machines and containers, mimicking enterprise-level infrastructure.</p><p>This time, however, I had a very real and community-driven need: the little league&#x2019;s board was swimming in data but didn&#x2019;t have the right tools to analyze it. They were using spreadsheets, and while functional, those tools couldn&#x2019;t provide deep insights. I knew the answer lay in <a href="https://superset.apache.org/?ref=mkdavies.com" rel="noreferrer">Apache Superset</a>, a powerful open-source data visualization tool.</p><p>Setting up a Debian VM on Proxmox was the first step. Using the same care I would apply to an enterprise system, I ensured that the virtual machine was efficient, stable, and configured for long-term use. The focus wasn&#x2019;t just getting something running&#x2014;I was building a production-grade environment. 
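</p><p>To make that concrete, provisioning a VM in this spirit from the Proxmox host shell could look like the sketch below; the VM ID, storage name, network bridge, and resource sizes are illustrative placeholders rather than my exact configuration.</p><pre><code class="language-bash"># Illustrative values only; adjust the VM ID, storage, and bridge per host
qm create 201 --name superset --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:64 --ostype l26 --onboot 1</code></pre><p>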
Superset needed to be easy to manage for the board and scalable as their data grew. I followed best practices, ensuring that security settings, resource management, and network configurations were set up properly. After all, just because it&#x2019;s self-hosted doesn&#x2019;t mean it should be treated casually.</p><p>With a little data and some dashboards, our Superset site was showing value.</p><figure class="kg-card kg-image-card"><img src="https://mkdavies.com/content/images/2024/10/image-1.png" class="kg-image" alt="Self Hosting With an Enterprise-Level Approach" loading="lazy" width="2000" height="645" srcset="https://mkdavies.com/content/images/size/w600/2024/10/image-1.png 600w, https://mkdavies.com/content/images/size/w1000/2024/10/image-1.png 1000w, https://mkdavies.com/content/images/size/w1600/2024/10/image-1.png 1600w, https://mkdavies.com/content/images/size/w2400/2024/10/image-1.png 2400w" sizes="(min-width: 720px) 720px"></figure><h2 id="data-safety-no-shortcuts-on-backup-strategy">Data Safety: No Shortcuts on Backup Strategy</h2><p>With Superset up and running, I turned my attention to a crucial aspect of any data project: backup and disaster recovery. Whether you&#x2019;re running a home lab or managing an enterprise system, data loss is catastrophic. In the little league&#x2019;s case, the PostgreSQL database behind Superset contained valuable participation data, the kind of insights that shaped decisions about everything from equipment to league schedules.</p><p>I knew I needed to approach backups with the same seriousness as I would in any business setting. That meant no shortcuts, no &#x201C;good enough&#x201D; solutions. The database needed to be backed up regularly, securely, and in a way that was easy to restore.</p><p>Here&#x2019;s where my backup options got real:</p><ul><li>Proxmox Snapshots: This was my first thought. Proxmox makes it easy to take VM snapshots, which capture the entire system state. 
But, like in enterprise environments, I knew this wasn&#x2019;t the ideal solution. Snapshots are heavy, resource-intensive, and not ideal for granular database backups.</li><li>pg_dump: PostgreSQL&#x2019;s logical backup tool was a more targeted solution. It allows for backups of just the database, making them smaller, faster to execute, and easier to restore. This was exactly what I needed for nightly backups of the participation data.</li><li>Third-party cloud services like AWS S3 or Google Cloud Storage were tempting. But I&#x2019;m a firm believer in self-reliance. Why hand over control of my backups when I had the hardware to handle it myself?</li></ul><p>After some research I turned to <a href="https://min.io/?ref=mkdavies.com" rel="noreferrer">MinIO</a>, an open-source, S3-compatible object storage platform that I run on my NAS, separate from my Proxmox server. In an enterprise, you wouldn&#x2019;t store critical backups on the same machine as the production system&#x2014;that&#x2019;s asking for trouble. So why do it in my home lab? Just as in a professional setup, separating the hardware for backups adds resilience. If something goes wrong on the Proxmox server, my backups remain safe and sound on a separate machine.</p><figure class="kg-card kg-image-card"><img src="https://mkdavies.com/content/images/2024/10/image-2.png" class="kg-image" alt="Self Hosting With an Enterprise-Level Approach" loading="lazy" width="2000" height="790" srcset="https://mkdavies.com/content/images/size/w600/2024/10/image-2.png 600w, https://mkdavies.com/content/images/size/w1000/2024/10/image-2.png 1000w, https://mkdavies.com/content/images/size/w1600/2024/10/image-2.png 1600w, https://mkdavies.com/content/images/size/w2400/2024/10/image-2.png 2400w" sizes="(min-width: 720px) 720px"></figure><p>MinIO&#x2019;s enterprise-grade capabilities, like S3 API compatibility and scalability, made it the perfect solution. 
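</p><p>The dump-and-upload flow at the heart of this setup, whether driven by n8n or by a plain cron job, can be sketched in a few lines of shell. This is an illustrative outline rather than my exact workflow: the connection details, database name, bucket, and the <code>mc</code> alias <code>nas</code> are placeholders to adapt.</p><pre><code class="language-bash">#!/usr/bin/env bash
set -euo pipefail

STAMP=$(date +%Y-%m-%d_%H%M)
FILE=superset_${STAMP}.sql.gz

# Logical backup of the Superset metadata database, compressed on the fly
pg_dump -h localhost -U superset -d superset | gzip &gt; /tmp/${FILE}

# Ship the backup to a MinIO bucket with the MinIO client (mc), assuming
# an alias named &quot;nas&quot; was configured beforehand via mc alias set
mc cp /tmp/${FILE} nas/superset-backups/${FILE}

# Remove the local copy so the VM disk does not slowly fill with dumps
rm /tmp/${FILE}</code></pre><p>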
I could configure it just like AWS S3, but without the recurring costs or external dependencies. And because it&#x2019;s running on my NAS, I&#x2019;m not tied to a third-party provider&#x2014;I maintain complete control over my data. In terms of security and performance, MinIO on my NAS gives me the same confidence I&#x2019;d expect from a cloud provider, but with the added benefit of local control.</p><p>The importance of redundancy and scalability in data storage can&#x2019;t be overstated, whether you&#x2019;re dealing with millions of transactions in a business or hundreds of players in a youth league. MinIO gave me the ability to scale as the little league&#x2019;s data grows, ensuring that storage never becomes a bottleneck.</p><h2 id="automating-with-n8n-bringing-workflow-automation-to-backups">Automating with n8n: Bringing Workflow Automation to Backups</h2><p>Once I had MinIO set up, I needed a reliable, automated process to ensure that backups were happening regularly and without fail. In enterprise environments, automation is key to reducing human error and improving reliability. My home lab follows the same principles.</p><p>For this, I used n8n, a powerful open-source workflow automation tool that I also host on my home infrastructure. n8n is like the glue that holds my self-hosted services together, providing a visual interface to build workflows that automate repetitive tasks. 
In this case, that meant orchestrating database backups.</p><p>Here&#x2019;s how I set up the workflow:</p><figure class="kg-card kg-image-card"><img src="https://mkdavies.com/content/images/2024/10/image.png" class="kg-image" alt="Self Hosting With an Enterprise-Level Approach" loading="lazy" width="1987" height="901" srcset="https://mkdavies.com/content/images/size/w600/2024/10/image.png 600w, https://mkdavies.com/content/images/size/w1000/2024/10/image.png 1000w, https://mkdavies.com/content/images/size/w1600/2024/10/image.png 1600w, https://mkdavies.com/content/images/2024/10/image.png 1987w" sizes="(min-width: 720px) 720px"></figure><ol><li>Scheduled trigger: Every hour, n8n kicks off the workflow to create a fresh database backup. This trigger acts like the enterprise-grade job scheduling tools used in businesses to run similar processes.</li><li>pg_dump execution: The workflow uses pg_dump to take a logical backup of the PostgreSQL database that powers Superset as well as any other databases. Just like in enterprise-grade databases, consistency and data integrity are critical, so I configured the dump to run in a way that ensures a clean, reliable backup.</li><li>Upload to MinIO: n8n then pulls the files into the workflow workspace before uploading the backup file to my MinIO bucket on the NAS using the S3-compatible API. By separating the storage location from the primary infrastructure, I&#x2019;ve effectively created a resilient and distributed backup system, a best practice in enterprise IT.</li><li>Cleanup: Leftover backup files can cause issues if not managed, so after the upload is successful the backup files on the server are removed so the VM disk space is not compromised.</li><li>Notification: Signal to noise is an important component to manage, so n8n sends a notification to my existing gotify service when things fail instead of when the backup is complete, the result being I&apos;m only notified when there is action to be taken. 
This acts much like the alerts you&#x2019;d expect from enterprise systems, and I tested and verified this extensively, giving me full confidence that the workflow regularly runs smoothly.</li></ol><h2 id="why-treating-self-hosting-like-an-enterprise-system-matters">Why Treating Self-Hosting Like an Enterprise System Matters</h2><p>You might wonder why I go to such lengths in a home lab setup. The answer is simple: self-hosted environments deserve the same level of professionalism as enterprise systems. Just because it&#x2019;s running in my home doesn&#x2019;t mean it shouldn&#x2019;t be resilient, secure, and automated. In fact, that&#x2019;s precisely why I enjoy this work&#x2014;there&#x2019;s a satisfaction in knowing that I&#x2019;ve built a production-grade solution on my own terms, without cutting corners.</p><p>Whether it&#x2019;s the virtualization power of Proxmox, the data visualization capabilities of Apache Superset, or the resilient backup storage provided by MinIO and n8n, every piece of this setup is designed with longevity and reliability in mind. I&#x2019;m not just throwing together tools to see what sticks&#x2014;I&#x2019;m following best practices that mirror the processes I&#x2019;d use in any professional environment. As a result, the little league board now has the insights they need, and I have the confidence that the entire system is as solid as any professional-grade setup.</p><p>The key takeaway? No matter the scale, whether it&#x2019;s a local sports league or a major enterprise, best practices apply everywhere. 
When you treat self-hosting with the same level of care as an enterprise system, you ensure that your infrastructure is built to last.</p>]]></content:encoded></item><item><title><![CDATA[The Beatles' Magical Mystery Time Machine]]></title><description><![CDATA['70s Music Meets 2023 Technology]]></description><link>https://mkdavies.com/the-beatles-magical-mystery-time-machine/</link><guid isPermaLink="false">6543d07e92643e000192cec1</guid><category><![CDATA[AI]]></category><category><![CDATA[The Beatles]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Thu, 02 Nov 2023 14:00:00 GMT</pubDate><media:content url="https://mkdavies.com/content/images/2023/11/now_and_then.png" medium="image"/><content:encoded><![CDATA[<img src="https://mkdavies.com/content/images/2023/11/now_and_then.png" alt="The Beatles&apos; Magical Mystery Time Machine"><p>While I typically reserve this space for discussions on technology and its applications, something rather extraordinary has caught my attention this week. It&apos;s the release of the &quot;new&quot; and &quot;final&quot; Beatles song, <a href="https://www.thebeatles.com/announcement?ref=mkdavies.com">&quot;Now and Then&quot;</a>, which made its debut on the very morning of this post.</p><p><a href="https://www.youtube.com/watch?v=AW55J2zE3N4&amp;ref=mkdavies.com">&quot;Now and Then&quot;</a> is not merely a track; it&apos;s a time capsule, a resurrection. 
It started as a John Lennon demo and has now evolved into a full Beatles track, reminiscent of the mid-1990s releases of &quot;Free as a Bird&quot; and &quot;Real Love.&quot; Originally intended to be the third addition alongside them during the &quot;Beatles Anthology&quot; project, its genesis has been extensively covered, with stories and details available across various platforms, including <a href="https://www.youtube.com/watch?v=APJAQoSCwuA&amp;ref=mkdavies.com">The Beatles&apos; very own YouTube channel</a>.</p><p>However, the narrative around &quot;Now and Then&quot; doesn&apos;t stop at its creation. What&apos;s truly riveting is the conversation it ignites about what comes next.</p><p>Maybe you have already encountered the AI-rendered wonders of <a href="https://arstechnica.com/information-technology/2023/08/hear-elvis-sing-baby-got-back-using-ai-and-learn-how-it-was-made/?ref=mkdavies.com">Elvis singing &quot;Baby Got Back&quot;</a> or <a href="https://www.youtube.com/shorts/lcBZ0laQ41c?ref=mkdavies.com">Johnny Cash performing Taylor Swift&apos;s &quot;Blank Space&quot;</a>. Some may dismiss these as AI novelties, far removed from the true impact of technology on the music industry. Yet, I&apos;m inclined to echo a John Lennon lyric here: &quot;You may say I&apos;m a dreamer, but I&apos;m not the only one.&quot; Art and technology have been intertwined since time immemorial. Whether it&apos;s splicing tape for new sounds, synthesizers for futuristic tones, or autotune for pitch-perfect vocals, innovation has always been at art&apos;s core. The difference now? AI allows for an organic and natural evolution of sounds.</p><p>So, where do you stand in this debate? Is AI-generated art a misstep, or is it simply another milestone in the long-standing relationship between art and technology? 
What about when an AI-generated &quot;Band&quot; produces a chart-topping hit, derived from a century&apos;s worth of songwriting and performing data?</p><p>One thing is certain: the prospect is exhilarating. The opportunity to witness this blend of technology and creativity, to possibly experience a song as stirring as &quot;Now and Then&quot; created with AI, fills me with anticipation. The Beatles have once again, even indirectly, pushed us to ponder the future of music and art. And that, in itself, is something to celebrate.</p>]]></content:encoded></item><item><title><![CDATA[Security Benefits of Egress Gateways in Kubernetes Clusters]]></title><description><![CDATA[How to use Istio egress to protect yourself]]></description><link>https://mkdavies.com/security-benefits-of-egress-gateways-in-kubernetes-clusters/</link><guid isPermaLink="false">653931c892643e000192ce5b</guid><category><![CDATA[DevOps]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Networking]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Wed, 25 Oct 2023 14:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1496368077930-c1e31b4e5b44?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fHNlY3VyaXR5fGVufDB8fHx8MTY5ODI1MDM4MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://images.unsplash.com/photo-1496368077930-c1e31b4e5b44?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fHNlY3VyaXR5fGVufDB8fHx8MTY5ODI1MDM4MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Security Benefits of Egress Gateways in Kubernetes Clusters"><p>Kubernetes has become the de facto standard for orchestrating containerized applications, but with great power comes great responsibility&#x2014;especially when it comes to security. 
One often-overlooked aspect of Kubernetes security is controlling the outbound traffic from a cluster, commonly known as egress traffic. While ingress controllers are widely used for managing incoming traffic, egress gateways are not as commonly implemented but offer several security advantages. In this article, we&apos;ll explore the security benefits of egress gateways in Kubernetes clusters and walk through a guide on how to implement them.</p><h2 id="why-egress-gateways">Why Egress Gateways?</h2><p>Before we dive into the benefits, let&apos;s understand what an egress gateway is. An egress gateway is a dedicated point in a Kubernetes cluster through which all external service calls pass. Essentially, it&apos;s a way to manage and secure traffic leaving your cluster.</p><h3 id="security-benefits">Security Benefits</h3><ol><li><strong>Traffic Control</strong>: Egress gateways allow you to control which external services your pods can access, ensuring that unauthorized or harmful destinations are blocked.</li><li><strong>Logging and Monitoring</strong>: Centralizing the exit point for all outbound traffic makes it easier to log and monitor these connections, which is useful for auditing and identifying suspicious activity.</li><li><strong>Compliance</strong>: Regulatory frameworks often require detailed control and logging of network traffic; egress gateways can make it easier to meet these requirements.</li><li><strong>Enhanced Firewall Rules</strong>: Since all outbound traffic goes through a known point, you can apply firewall rules more effectively.</li><li><strong>Data Leakage Prevention</strong>: By inspecting the data that&apos;s leaving your network, egress gateways can identify and block potentially sensitive information from being transmitted.</li><li><strong>Network Policy Simplification</strong>: Egress gateways can simplify the task of writing network policies, as you only have to manage the rules for the gateway rather than for each individual 
pod.</li></ol><h2 id="how-to-implement-egress-gateways-in-kubernetes">How to Implement Egress Gateways in Kubernetes</h2><p>Istio is my service mesh of choice for Kubernetes. Below is a step-by-step guide to implementing an egress gateway in a Kubernetes cluster.</p><h3 id="prerequisites">Prerequisites</h3><ul><li>A running Kubernetes cluster</li><li><code>kubectl</code> installed and configured</li><li>Istio service mesh installed</li></ul><h3 id="step-1-enable-egress-gateway">Step 1: Enable Egress Gateway</h3><p>Enable Istio&apos;s egress gateway by editing the <code>IstioOperator</code> custom resource.</p><pre><code class="language-yaml">apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
</code></pre><h3 id="step-2-create-a-service-entry">Step 2: Create a Service Entry</h3><p>When you&apos;re using an egress gateway, the <code>ServiceEntry</code> specifies which external services the cluster is allowed to access through the egress gateway. Without a <code>ServiceEntry</code>, the egress gateway wouldn&apos;t know which external domains or IPs it should allow traffic to, and you wouldn&apos;t be able to apply Istio policies to that outbound traffic.</p><p>Define a <code>ServiceEntry</code> to specify the external services that your cluster can access.</p><pre><code class="language-yaml">apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
</code></pre><p>This <code>ServiceEntry</code> defines that the external service with the hostname <code>example.com</code> is allowed to be accessed over HTTP on port 80. Traffic destined for this host will be allowed to pass through the egress gateway, and Istio will also apply any other configurations or policies that you&apos;ve set up for this host.</p><h3 id="step-3-configure-the-egress-gateway">Step 3: Configure the Egress Gateway</h3><p>The <code>Gateway</code> resource in Istio serves as a load balancer that handles incoming and outgoing HTTP/TCP connections. It configures exposed ports, the protocol to use, and other options like TLS settings. In the context of egress traffic in a Kubernetes cluster managed by Istio, the <code>Gateway</code> resource specifically configures the egress gateway, essentially defining how outbound traffic should be handled at the edge of the service mesh before it leaves the cluster.</p><p>The key roles of the <code>Gateway</code> resource:</p><ol><li><strong>Port Configuration</strong>: Specifies which ports are open on the egress gateway and how they should handle traffic. This includes setting the protocol (HTTP, HTTPS, TCP, etc.).</li><li><strong>Traffic Routing</strong>: While the <code>Gateway</code> itself doesn&apos;t define the traffic routing rules, it serves as a reference for <code>VirtualService</code> resources that do. The <code>VirtualService</code> specifies how traffic that enters a gateway should be routed within the mesh.</li><li><strong>Security</strong>: You can configure TLS settings for secure traffic handling.</li><li><strong>Selector</strong>: Defines which workloads (usually pods) in the cluster will act as the gateway. This is typically based on labels.</li></ol><pre><code class="language-yaml"># Egress Gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
      hosts:
      - example.com</code></pre><p>Here, this <code>Gateway</code> resource does the following:</p><ul><li>Selects the Istio egress gateway for configuration (<code>istio: egressgateway</code> label).</li><li>Opens port 80 and sets the protocol to HTTP.</li><li>Restricts the server to traffic for the host <code>example.com</code> via the required <code>hosts</code> field.</li></ul><p>The <code>Gateway</code> resource often works in conjunction with a <code>VirtualService</code> resource to define the complete traffic routing logic. The <code>VirtualService</code> binds itself to a <code>gateway</code> using the <code>gateways</code> field and defines how the traffic that enters the gateway should be routed.</p><pre><code class="language-yaml"># Virtual Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: direct-external-svc-through-egress-gateway
spec:
  hosts:
  - example.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 80</code></pre><p>In this example, the <code>VirtualService</code> defines two hops: traffic from sidecars inside the mesh that is destined for <code>example.com</code> is routed to the egress gateway service (<code>istio-egressgateway.istio-system.svc.cluster.local</code>), and traffic arriving at the egress gateway is then routed on to <code>example.com</code>, both on port 80.</p><p>By combining the <code>Gateway</code> and <code>VirtualService</code> resources, you have fine-grained control over how egress traffic leaves your Kubernetes cluster, thereby enabling better security and routing capabilities.</p><h3 id="step-4-apply-network-policies">Step 4: Apply Network Policies</h3><p>The <code>NetworkPolicy</code> resource in Kubernetes is used to define how pods are allowed to communicate with various network endpoints, including other pods, services, and external hosts. Network policies play a crucial role in controlling the networking behavior in a Kubernetes cluster and are vital for implementing security best practices.</p><p>The key roles of the <code>NetworkPolicy</code> resource:</p><ol><li><strong>Ingress Control</strong>: You can define which inbound connections to a pod are allowed.</li><li><strong>Egress Control</strong>: You can specify which outbound connections from a pod are allowed.</li><li><strong>Pod Selector</strong>: Uses labels to select which pods the network policy applies to.</li><li><strong>Policy Types</strong>: Specifies if the policy applies to ingress, egress, or both.</li><li><strong>IP Blocks</strong>: Allows you to specify CIDR ranges to whitelist or blacklist, giving you control over traffic based on IP ranges.</li></ol><p>In the context of egress gateways in a Kubernetes cluster managed with Istio, a <code>NetworkPolicy</code> can be used to ensure that egress traffic from pods in the cluster only goes through the egress gateway. 
This is an additional security measure to make sure that no pod can bypass the egress gateway and communicate directly with external services, thereby circumventing security policies, logging, or auditing features that you&apos;ve configured.</p><p>For example, consider the following <code>NetworkPolicy</code>:</p><pre><code class="language-YAML">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-gateway-policy
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # A bare podSelector only matches pods in the policy&apos;s own namespace,
  # so a namespaceSelector is needed to reach the gateway in istio-system
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: istio-system
      podSelector:
        matchLabels:
          istio: egressgateway
  # Allow DNS lookups; without this rule pods cannot resolve any hostnames
  - ports:
    - protocol: UDP
      port: 53
</code></pre><p>This <code>NetworkPolicy</code> does the following:</p><ul><li>It applies to all pods in the namespace (<code>podSelector: {}</code> implies that the policy matches all pods).</li><li>It specifies that this is an Egress policy (<code>policyTypes: - Egress</code>).</li><li>It allows egress traffic only to the pods labeled <code>istio: egressgateway</code> in the <code>istio-system</code> namespace; the <code>namespaceSelector</code> is required because a bare <code>podSelector</code> matches pods only in the policy&apos;s own namespace.</li><li>It permits DNS queries on UDP port 53, without which pods could not resolve any external hostnames in the first place.</li></ul><p>By applying this <code>NetworkPolicy</code>, you effectively force all egress traffic to go through the Istio egress gateway, thereby gaining all the security and auditing benefits that come with it.</p><h3 id="step-5-data-leakage-protection">Step 5: Data Leakage Protection</h3><p>While <code>NetworkPolicy</code> is focused on Layer 3 and Layer 4 network communication control, it doesn&apos;t offer native capabilities to filter traffic based on the content of the packets, such as sensitive data. Here is where an <code>EnvoyFilter</code> can be used to block sensitive data from being transmitted out of your Kubernetes cluster.</p><p>To block HTTP requests containing a payload with a &quot;sensitiveData&quot; key, you can use an Envoy filter with Lua scripting in Istio. The Lua script will check each HTTP request to see if it contains the &quot;sensitiveData&quot; key in the body. If so, it will reject the request and return a 400 Bad Request status code.</p><pre><code class="language-yaml">apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: block-sensitive-data
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: egressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: &quot;envoy.filters.network.http_connection_manager&quot;
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          &quot;@type&quot;: &quot;type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua&quot;
          inlineCode: |
            function envoy_on_request(request_handle)
              local headers, body = request_handle:request_headers(), request_handle:body()
              if headers:get(&quot;:method&quot;) == &quot;POST&quot; then
                if body ~= nil then
                  local body_str = body:getBytes(0, body:length())
                  local is_sensitive = string.match(body_str, &quot;\&quot;sensitiveData\&quot;&quot;)
                  if is_sensitive then
                    request_handle:respond(
                      {[&quot;:status&quot;] = &quot;400&quot;, [&quot;content-type&quot;] = &quot;text/plain&quot;},
                      &quot;Bad Request: Posting sensitive data is not allowed.&quot;
                    )
                  end
                end
              end
            end
</code></pre><p>This example has a <code>workloadSelector</code> that applies to the Istio egress gateway, but you can modify the selector to target any specific workloads where you want to enforce this rule.</p><p>Here&apos;s what this <code>EnvoyFilter</code> does:</p><ul><li>It adds a Lua script that gets executed on each incoming HTTP request (<code>envoy_on_request</code>).</li><li>The Lua script checks if the incoming request is a POST request.</li><li>If it&apos;s a POST request, the script then checks if the request body contains the key &quot;sensitiveData&quot;.</li><li>If such a key is found, the request is immediately rejected with a 400 Bad Request response, along with a message stating that posting sensitive data is not allowed.</li></ul><p>This example provides a basic illustration and might need additional refinements to suit your specific needs. It&apos;s essential to test thoroughly before deploying in a production environment.</p><h2 id="how-to-test-your-egress-gateway">How to Test Your Egress Gateway</h2><p>Once you have your egress gateway set up, it&apos;s crucial to verify that it&apos;s functioning as expected. Below are several methods to test your egress gateway to ensure it&apos;s routing traffic correctly, blocking unauthorized requests, and logging activities as configured.</p><h3 id="1-verify-routing-with-curl">1. Verify Routing with <code>curl</code></h3><p>You can exec into a pod within the cluster and use <code>curl</code> to make requests to an external service that you&apos;ve configured in your ServiceEntry.</p><pre><code class="language-bash">kubectl exec -it [YOUR_POD_NAME] -c [YOUR_CONTAINER_NAME] -- /bin/bash
curl http://example.com
</code></pre><p>The request should succeed if the egress gateway is routing traffic correctly.</p><h3 id="2-check-istio-metrics">2. Check Istio Metrics</h3><p>Istio exposes metrics that can be viewed via Grafana or another monitoring solution. Check the metrics related to the egress gateway to confirm that requests are being processed.</p><h3 id="3-examine-logs">3. Examine Logs</h3><p>You should have centralized logging configured for your egress gateway. Check the logs to confirm that they contain entries for the requests that have passed through the gateway. This is essential for auditing and monitoring.</p><pre><code class="language-bash">kubectl logs -n istio-system $(kubectl get pod -n istio-system -l istio=egressgateway -o jsonpath=&apos;{.items[0].metadata.name}&apos;) -c istio-proxy</code></pre><h3 id="4-test-unauthorized-requests">4. Test Unauthorized Requests</h3><p>Try to access an external service that&apos;s not in your <code>ServiceEntry</code>. The request should be blocked, proving that your security policies are effective.</p><pre><code class="language-bash">kubectl exec -it [YOUR_POD_NAME] -c [YOUR_CONTAINER_NAME] -- /bin/bash
curl http://unauthorized-service.com
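# Assuming the mesh uses a REGISTRY_ONLY outbound traffic policy, this request
# should fail rather than reach the external host. Bounding the wait makes the
# failure surface quickly instead of hanging.
curl -sS --max-time 5 http://unauthorized-service.com || echo request blocked as expected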
</code></pre><h3 id="5-network-policy-test">5. Network Policy Test</h3><p>You can also test the Kubernetes <code>NetworkPolicy</code> to ensure it&apos;s only allowing traffic through the egress gateway. Try to exec into a pod and make an external request without going through the gateway; it should fail to reach the external service.</p><pre><code class="language-bash">kubectl exec -it [ANOTHER_POD_NAME] -c [ANOTHER_CONTAINER_NAME] -- /bin/bash
curl http://example.com</code></pre><p>If the request is blocked, it confirms that your network policy is correctly enforcing the restriction.</p><h3 id="6-check-data-leakage-prevention">6. Check Data Leakage Prevention</h3><p>If you have configured data loss prevention measures, you can test by trying to send data that should be blocked by your rules. You can use <code>curl</code> to send a POST request with sensitive data to confirm that the egress gateway blocks it.</p><pre><code class="language-bash">kubectl exec -it [YOUR_POD_NAME] -c [YOUR_CONTAINER_NAME] -- /bin/bash
curl -X POST -d &quot;sensitiveData=1234&quot; http://example.com/resource
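# Assuming the illustrative Lua EnvoyFilter above is in place, the gateway
# should reject this request, so expect a 400 status code here.
curl -s -o /dev/null -w %{http_code} -X POST -d sensitiveData=1234 http://example.com/resource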
</code></pre><p>By running these tests, you can ensure that your egress gateway is functioning as expected, thereby adding a robust security layer to your Kubernetes cluster.</p><h2 id="you-are-set">You are set!</h2><p>Egress gateways offer a myriad of security benefits ranging from fine-grained traffic control to compliance aid. Implementing them may add some complexity to your Kubernetes cluster, but the security advantages often outweigh the challenges. Testing your egress gateway ensures that it functions as expected, making it a key component in a mature Kubernetes security model.</p>]]></content:encoded></item><item><title><![CDATA[My Staycation With Proxmox: Sun, Silence, and Servers]]></title><description><![CDATA[A tech adventure to entertain me on a week of summer relaxation.]]></description><link>https://mkdavies.com/my-staycation-with-proxmox-sun-silence-and-servers/</link><guid isPermaLink="false">64dd30e91ba47700011d82f5</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Configuration Management]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Sat, 12 Aug 2023 14:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1587210489914-6992e0797d55?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDl8fGhhbW1vY2t8ZW58MHx8fHwxNjkyMjIxMzAzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1587210489914-6992e0797d55?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDl8fGhhbW1vY2t8ZW58MHx8fHwxNjkyMjIxMzAzfDA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="My Staycation With Proxmox: Sun, Silence, and Servers"><p>When most think of a summer staycation, images of lazy afternoons in a hammock, or perhaps an at-home spa day come to mind. Me? 
I envisioned a week of diving into Proxmox, the magic of LXC containers, and the delightful dance of SSH keys and Ansible. Here&apos;s my memorable tech adventure from the comfort of my own home.</p><h2 id="setting-up-the-proxmox-server">Setting Up The Proxmox Server</h2><p>As my staycation began, I dusted off my trusty Intel NUC Gen 8 &#x2014; a compact powerhouse of a machine that had been awaiting a new lease on life. And what better way to reinvigorate it than with Proxmox?</p><h3 id="preparing-the-flashdrive">Preparing the Flash Drive</h3><p>Before anything else, I needed a bootable USB flash drive with the Proxmox installer. I downloaded the latest Proxmox VE ISO image from their official site. To turn my ordinary USB stick into a bootable Proxmox installer, I used my trusty Windows go-to for this sort of task, <a href="https://etcher.balena.io/?ref=mkdavies.com">Balena Etcher</a>. After selecting the Proxmox ISO and setting my USB as the destination, Etcher did its magic, making my USB drive the key to my Proxmox adventure.</p><h3 id="booting-the-installer-from-usb">Booting the Installer From USB</h3><p>Now came the part where I breathed new life into the NUC. Intel&apos;s NUCs have a famously user-friendly BIOS interface, and accessing it was as easy as tapping <code>F2</code> during the boot-up process. Once inside, I navigated to the Boot tab and prioritized the USB drive as the primary boot device.</p><p>Also, buyer beware: I had to disable Secure Boot on the NUC to allow the Proxmox installer to work.</p><p>Restarting the NUC with the USB drive plugged in, I was greeted with the Proxmox installer screen. Here, the interface is intuitive. 
The first few prompts gathered essential information such as my time zone, keyboard layout, and installation target (the internal SSD of the NUC, in this case).</p><p>The next steps had me inputting my desired password and email for Proxmox notifications &#x2014; crucial for managing and monitoring the server in the future. After a few more prompts, including network configurations (like setting a static IP, which I highly recommend), the installer began transferring Proxmox VE onto the NUC&apos;s SSD.</p><h3 id="first-boot">First Boot</h3><p>The entire installation process took a bit, giving me just enough time to brew a cup of tea. Once completed, the system prompted for a reboot. Ejecting the USB drive, my NUC booted up, not as the bare-bones computer it was earlier, but as a powerful Proxmox server. I grabbed another device and opened a web browser. Inputting the NUC&apos;s network name followed by the port number 8006 (the default for Proxmox), I was led to a login screen.</p><p>At first, the browser warned me about an insecure connection due to the self-signed certificate Proxmox uses by default. It&apos;s a routine notification, and I proceeded by adding an exception in the browser. On the login page, I entered the username as root and used the password I set during the installation. The moment of truth arrived as I clicked the &quot;Login&quot; button.</p><p>To my delight, I was greeted by the Proxmox web interface &#x2014; a sleek, intuitive dashboard displaying all the server&apos;s statistics at a glance. From here, I could manage virtual machines, LXC containers, storage, and the overall health of the system. The clear layout and detailed graphs felt like a control center, ready to be commanded. 
With this, my Proxmox adventure truly began, promising endless possibilities right from the cozy confines of my living room.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://mkdavies.com/content/images/2023/08/image.png" class="kg-image" alt="My Staycation With Proxmox: Sun, Silence, and Servers" loading="lazy" width="1254" height="495" srcset="https://mkdavies.com/content/images/size/w600/2023/08/image.png 600w, https://mkdavies.com/content/images/size/w1000/2023/08/image.png 1000w, https://mkdavies.com/content/images/2023/08/image.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>It Lives!</figcaption></figure><h2 id="diving-into-lxc-containers">Diving into LXC Containers</h2><p>LXC (Linux Container) containers are a fascinating realm within the Proxmox environment. These containers have a feather-light footprint, sharing the host&apos;s kernel but functioning almost like standalone Linux systems. And since my goal was to create an LXC container using the latest Debian template, my Proxmox staycation was shaping up to be more exciting than I&apos;d imagined.</p><h3 id="pulling-lxc-templates-in-proxmox">Pulling LXC Templates in Proxmox</h3><p>Proxmox offers an easy way to fetch the latest LXC templates. From the web interface I navigated to the &quot;Local (pve)&quot; storage on the left sidebar, situated under the &quot;Datacenter&quot; node. Clicking on the &quot;Content&quot; tab, I was presented with an option that said &quot;Templates&quot;. Upon selecting &quot;Templates&quot;, a myriad of OS templates was available, each optimized for LXC deployment. From CentOS to Ubuntu, the options were aplenty.</p><p>But my eyes were set on Debian. 
I clicked the &quot;Download&quot; button next to the latest Debian LXC template, and Proxmox swiftly began fetching it from the online repositories.</p><h3 id="exploring-the-debian-lxc-container">Exploring the Debian LXC Container</h3><p>Once the template was downloaded I clicked the &quot;Create CT&quot; button on the top right. This initiated the LXC container creation wizard.</p><p>The steps were intuitive: naming the container, allocating resources (like CPU and RAM), and assigning a network interface. When I reached the template selection page, I chose the Debian template I had just downloaded. After finalizing the settings and hitting &quot;Finish,&quot; Proxmox got to work, creating an LXC container based on the Debian template.</p><p>The container initialization took mere moments. Once up and running, I clicked on the &quot;Console&quot; tab. This opened a terminal window, dropping me straight into my new Debian environment. To the untrained eye, it seemed as if I was operating a full-fledged Debian server. I began exploring, running a few standard commands (apt update, uname -a, etc.) and felt the exhilarating power and flexibility of LXC.</p><p>Being able to deploy a functional Debian system in minutes, with minimal overhead, underscored why LXC containers are such an asset in the virtualization world. This experiment marked just the beginning of my LXC adventures on Proxmox, and I can&apos;t wait to delve deeper.</p><h2 id="taking-a-step-back-ssh-keys">Taking a step back: SSH keys</h2><p>Woah, not so fast! I neglected to add an important part to the provisioning, including a SSH key. SSH keys offer a secure way to communicate with remote systems, and in the context of my Proxmox and LXC container project, they were paramount for good security practices.</p><h3 id="why-unique-ssh-keys-are-vital">Why Unique SSH Keys Are Vital</h3><p>Using unique SSH keys, specifically for this project, adds an extra layer of security. 
Imagine having a specialized key for each room in your house instead of one master key for all the doors. If one key were to be compromised, the others would remain secure. In the same vein, having unique SSH keys ensured that if one part of my project were to face security issues, the others would remain unaffected.</p><p>Ed25519 is a public-key signature system known for its robustness and efficiency, offering a fantastic balance between speed and security. It&apos;s designed to be fast in both key generation and verification, all without compromising the level of security. Additionally, Ed25519 has been constructed with modern cryptographic best practices in mind, avoiding many pitfalls and vulnerabilities that have been known to affect other algorithms.</p><h3 id="generating-ed25519-keys">Generating Ed25519 Keys</h3><p>Creating secure Ed25519 SSH keys is pretty straightforward. You can use <code>ssh-keygen</code> to generate a key, adding a comment and specifying a unique path.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">ssh-keygen \
  -t ed25519 \
  -C &quot;Proxmox LXC Project&quot; \
  -f ~/.ssh/proxmox_lxc_ed25519_key</code></pre><figcaption>Generate the SSH Key</figcaption></figure><p>As always, setting strong permissions on the private key ensures that only the necessary users can access it.</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">chmod 600 ~/.ssh/proxmox_lxc_ed25519_key</code></pre><figcaption>Secure the SSH Key</figcaption></figure><p>By choosing Ed25519 and implementing it within my staycation project, I ensured a modern and secure way to manage my Proxmox server and LXC containers. It&apos;s akin to having a state-of-the-art digital lock system that&apos;s both lightweight and resilient. My staycation, as virtualized as it was, felt even more secure knowing that I was using one of the most robust key algorithms available.</p><h2 id="my-staycation-tech-assistant-ansible">My Staycation Tech Assistant, Ansible</h2><p>Imagine having a personal assistant during your staycation, taking care of all the mundane tasks. That&apos;s Ansible for you, but for servers. With Ansible playbooks, I outlined precisely how I wanted my LXC containers configured. For me, it was like asking for breakfast in bed.</p><h2 id="provisioning-new-containers">Provisioning New Containers</h2><p>Ansible&apos;s declarative language allowed me to define the desired state of my LXC containers. Using Proxmox&apos;s Ansible modules, I created a playbook that provisions new Debian containers.</p><p>This playbook targets the Proxmox host, and by specifying the Debian template, provisions new containers with the previously generated Ed25519 SSH key. This provides secure and password-less SSH access to the container.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">---
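# A sketch of my setup: this assumes the community.general collection is
# installed (ansible-galaxy collection install community.general) and that the
# proxmoxer Python library is available where the tasks run. API_PASSWORD,
# CONTAINER_PASSWORD, and CONTAINER_PUBKEY are supplied at runtime, e.g. via
# ansible-vault or --extra-vars, rather than hard-coded.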
- name: Provision Debian LXC containers
  hosts: proxmox_host
  gather_facts: no
  tasks:
    - name: Create new LXC container
      community.general.proxmox:
        api_host: nuc8
        api_user: root@pam
        api_password: &quot;{{ API_PASSWORD }}&quot;
        vmid: 400
        node: nuc8
        hostname: debiancontainer
        pool: Pool1
        password: &quot;{{ CONTAINER_PASSWORD }}&quot;
        pubkey: &quot;{{ CONTAINER_PUBKEY }}&quot;
        ostemplate: &apos;local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst&apos;
        storage: local-lvm
        disk: 20
        cores: 1
        memory: 1024
        swap: 512
        netif: &apos;{&quot;net0&quot;:&quot;name=eth0,bridge=vmbr0,ip=dhcp&quot;}&apos;
        state: present

    - name: Start the container
      community.general.proxmox:
        api_host: nuc8
        api_user: root@pam
        api_password: &quot;{{ API_PASSWORD }}&quot;
        vmid: 400
        state: started
        timeout: 300</code></pre><figcaption>provision.yml</figcaption></figure><p>Next, I needed to install HashiCorp Packer inside the containers to build new LXC templates. With an inventory of the newly created Debian containers, this was a typical Configuration Management task for Ansible.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">---
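# Note: the Packer version is pinned below; check
# https://releases.hashicorp.com/packer for newer releases before reusing this.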
- name: Install HashiCorp Packer
  hosts: debian_containers
  become: yes
  tasks:
    - name: Download Packer
      get_url:
        url: https://releases.hashicorp.com/packer/1.7.4/packer_1.7.4_linux_amd64.zip
        dest: /tmp/packer.zip

    - name: Unzip Packer
      unarchive:
        src: /tmp/packer.zip
        dest: /usr/local/bin
        remote_src: yes  # the archive is already on the target container
        mode: &apos;0755&apos;

    - name: Verify Packer Installation
      command: packer --version
      register: version
      changed_when: false

    - debug:
        msg: &quot;Packer version: {{ version.stdout }}&quot;</code></pre><figcaption>packer.yml</figcaption></figure><p>What could have been a complex, error-prone manual process was simplified into automated, repeatable playbooks. Ansible allowed me to spend less time on repetitive tasks and more time enjoying my virtualized staycation adventure. Building containers, integrating SSH keys, and installing Packer were turned into a seamless flow, right at my fingertips. By embracing automation, my staycation became a thrilling exploration of what&apos;s possible when technology works for you.</p><h2 id="lessons-from-the-lounge-chair">Lessons From The Lounge Chair</h2><p>1.&#x2003;Home is where the tech is. A staycation can be as enriching as any exotic vacation when you&apos;ve got a fascinating project to immerse yourself in.<br>2.&#x2003;Containers are the perfect summer meal. LXC containers are as light and fulfilling, offering flexibility without the bulk.<br>3.&#x2003;Ansible is the ultimate tech helper. Automating tasks with Ansible is like having a robotic housekeeper. Once you&apos;ve set it up, you can kick back and enjoy your summer reads.</p><p>By the end of my staycation, I felt rejuvenated. Not only had I experienced Proxmox and the related technology I had decided to focus on, but I&apos;d also enjoyed the simple pleasures of home. 
Tech and relaxation had blended beautifully.</p><p>So next time you&apos;re considering a vacation, remember that sometimes the most exciting adventures are just a keyboard away, right in the comfort of your living room!</p>]]></content:encoded></item><item><title><![CDATA[The Evolution of Functions as a Service (FaaS) and Its Impact on Software Engineering]]></title><description><![CDATA[Functions Past, Present, and Future!]]></description><link>https://mkdavies.com/the-evolution-of-functions-as-a-service-faas-and-its-impact-on-software-engineering/</link><guid isPermaLink="false">64b086b21ba47700011d82db</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[Serverless]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Fri, 14 Jul 2023 14:00:36 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1555949963-aa79dcee981c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGZ1bmN0aW9ufGVufDB8fHx8MTY4OTI5MDQ4N3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1555949963-aa79dcee981c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGZ1bmN0aW9ufGVufDB8fHx8MTY4OTI5MDQ4N3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="The Evolution of Functions as a Service (FaaS) and Its Impact on Software Engineering"><p>The software development field has experienced several paradigm shifts throughout its history, each bringing a new set of values and transforming the industry significantly. One of these transformative technologies, Functions as a Service (FaaS), has emerged as a crucial component in the modern application development arena. 
Let&apos;s explore the history of FaaS, its value proposition to contemporary software engineering practices, and what the future holds.</p><h2 id="the-emergence-of-faas-tracing-the-origins">The Emergence of FaaS: Tracing the Origins</h2><p>FaaS is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage application functionalities without the complexity of building and maintaining the infrastructure typically associated with developing and launching an application. It is the logical conclusion of the evolution that started with Infrastructure as a Service (IaaS), progressed through Platform as a Service (PaaS), and led to what we now refer to as serverless computing.</p><p>The origins of FaaS trace back to the launch of Amazon&apos;s Lambda in 2014 at the AWS re:Invent conference. Lambda was initially introduced as a compute service to run code in response to AWS internal events such as changes to objects in S3 buckets, updates to DynamoDB tables, or custom events from mobile applications, websites, or other AWS services. This offered developers an entirely new way to execute and manage their applications: they could simply deploy discrete functions, and AWS would handle the rest, including the necessary resources, scaling, and even billing.</p><p>Google followed suit in 2016, launching Google Cloud Functions, and Microsoft unveiled Azure Functions later that same year. These FaaS platforms expanded the initial concept introduced by AWS Lambda, allowing developers to execute code in response to HTTP requests and a wider range of event triggers.</p><h2 id="the-value-proposition-of-faas">The Value Proposition of FaaS</h2><h3 id="scaling-and-cost-efficiency">Scaling and Cost Efficiency</h3><p>The primary value proposition of FaaS is the ability to scale up and down automatically, depending on the demand for the function. 
Traditional servers require manual scaling, which could lead to over-provisioning (paying for unused capacity) or under-provisioning (not having enough capacity to handle the demand).</p><p>FaaS platforms, on the other hand, are designed to respond to real-time changes in demand. This auto-scaling capability is cost-effective as you only pay for what you use, and it enables the function to accommodate a virtually limitless number of requests.</p><h3 id="improved-developer-productivity">Improved Developer Productivity</h3><p>FaaS dramatically boosts developer productivity by abstracting away the server management aspects. This allows developers to focus on the business logic and function code instead of worrying about servers, capacity planning, and system maintenance. Moreover, since a typical FaaS application is composed of small, discrete, and modular functions, it promotes code reuse, making the development process even more efficient.</p><h3 id="event-driven-and-real-time-processing">Event-Driven and Real-Time Processing</h3><p>The event-driven architecture inherent to FaaS platforms is excellent for handling real-time file processing or data streaming. As soon as the event occurs (such as a file upload), the corresponding function is triggered to process it. This real-time processing ability opens new avenues for responsive and dynamic applications that weren&apos;t feasible or were challenging with traditional architectures.</p><h3 id="integration-and-interoperability">Integration and Interoperability</h3><p>Many FaaS offerings integrate seamlessly with other services provided by the same cloud vendor, facilitating data sharing, state management, and event communication. 
Furthermore, being HTTP-based, they can interface with any web-accessible service, promoting interoperability.</p><h2 id="faas-in-modern-software-engineering">FaaS in Modern Software Engineering</h2><p>In the context of modern software engineering, FaaS is an enabler for microservices and event-driven architectures. With FaaS, each function can be a separate microservice, simplifying the development process and making applications easier to understand, develop, and test.</p><p>The rise of FaaS also fuels the growth of the DevOps movement. The serverless nature of FaaS reduces the operations overhead, aligning with the DevOps principles of breaking down the barriers between development and operations.</p><p>Furthermore, FaaS plays a pivotal role in data processing and analytics, where functions can be triggered by events to process data and store it or pass it on for further processing. This makes FaaS a powerful tool for building real-time analytics applications and data-driven systems.</p><h2 id="kubernetes-and-faas-a-powerful-alliance">Kubernetes and FaaS: A Powerful Alliance</h2><p>Kubernetes, an open-source system for automating deployment, scaling, and management of containerized applications, has become the de facto standard for orchestrating containers. When combined with FaaS, Kubernetes provides an extensible platform for building serverless applications. This allows developers to leverage the benefits of serverless architectures while maintaining the flexibility and control provided by Kubernetes.</p><p>Several FaaS solutions have been built on top of Kubernetes, capitalizing on its capabilities to provide a serverless environment within a Kubernetes cluster. Some of the notable Kubernetes-native FaaS solutions include:</p><h3 id="kubeless">Kubeless</h3><p>Kubeless, a Kubernetes-native serverless framework, enables developers to deploy small bits of code (functions) without worrying about the underlying infrastructure. 
It leverages Kubernetes resources to provide auto-scaling, API routing, monitoring, troubleshooting, and more. With Kubeless, functions are treated as first-class citizens in the Kubernetes ecosystem and can be managed and scaled just like any other Kubernetes resource.</p><h3 id="openfaas">OpenFaaS</h3><p>OpenFaaS (Functions as a Service) is an open-source serverless framework for Kubernetes which enables developers to run serverless functions anywhere Kubernetes runs. OpenFaaS makes it easy for developers to deploy event-driven functions and microservices to Kubernetes without repetitive, boiler-plate coding. It provides a unified user experience through its UI and CLI, along with a strong focus on developer productivity, ease of use, and operator-friendly tooling.</p><h3 id="knative">Knative</h3><p>Knative is a Kubernetes-based platform that provides a set of middleware components essential for building modern, container-based, cloud-native applications. Knative Serving, one of its core components, offers a FaaS-like developer experience by providing on-demand scaling of applications, routing and network programming, and deployment features like rollouts and rollbacks.</p><p>Each of these FaaS solutions brings unique strengths, and the choice between them depends on specific project requirements and the nature of the applications being developed. 
Regardless of the choice, combining FaaS with Kubernetes offers an intriguing proposition: it provides the benefits of serverless architecture while preserving the powerful features of container orchestration, resulting in a potent solution for modern cloud-native application development.</p><h2 id="the-future-of-faas-trends-and-trajectories">The Future of FaaS: Trends and Trajectories</h2><p>As the adoption of FaaS continues to rise, new trends and trajectories are beginning to emerge, indicating the future direction of this technology.</p><h3 id="enhanced-developer-experience">Enhanced Developer Experience</h3><p>A significant area of focus will likely be enhancing the developer experience. Today, although FaaS does abstract away much of the infrastructure management, there are still challenges that developers face. Cold starts, function composition, observability, and local testing are common areas of concern. In the future, we can expect to see solutions that address these challenges, simplifying the development process further and making FaaS even more developer-friendly.</p><h3 id="integration-with-machine-learning-ml">Integration with Machine Learning (ML)</h3><p>With the increasing prevalence of machine learning applications, we will likely see tighter integration of FaaS with ML platforms. FaaS is a perfect fit for many machine learning tasks, which are often event-driven and need to scale depending on the volume of data. As such, FaaS providers will likely enhance their capabilities to support machine learning workloads better, such as GPU support, integration with ML platforms, and tools to simplify the deployment of ML models.</p><h3 id="multi-cloud-and-hybrid-faas-solutions">Multi-Cloud and Hybrid FaaS Solutions</h3><p>While FaaS offerings are typically tied to a specific cloud provider, the future is likely to see more multi-cloud and hybrid FaaS solutions. 
This trend is driven by businesses&apos; desire to avoid vendor lock-in and to leverage the best offerings from each cloud provider. Such FaaS solutions will be capable of running across different cloud environments and even on-premises, providing businesses with greater flexibility and control.</p><h3 id="event-driven-architecture-eda">Event-Driven Architecture (EDA)</h3><p>As FaaS naturally fits into event-driven architecture, the adoption of FaaS will likely drive the adoption of EDA and vice versa. This will lead to an increase in the use of message brokers, event gateways, and other event-driven tools and technologies. Moreover, we can expect to see standardization around event formats and protocols, making it easier to build and integrate event-driven applications.</p><h3 id="edge-computing">Edge Computing</h3><p>With the rise of IoT devices and the need for low latency, there&apos;s a growing trend towards edge computing, where computations are performed closer to the source of data. FaaS is well-suited for edge computing because it can efficiently handle sporadic data and event spikes typical of many IoT applications. In the future, we could see more edge-based FaaS solutions, enabling real-time processing and decision-making at the edge.</p><p>The rapid evolution of FaaS indicates a bright future for this technology. 
As developers continue to unlock its potential and as the technology continues to mature, we can expect FaaS to play an increasingly vital role in shaping the future of software development and cloud computing.</p>]]></content:encoded></item><item><title><![CDATA[Much More Than "Just Someone Else's Computer"]]></title><description><![CDATA[The Power of the Cloud!]]></description><link>https://mkdavies.com/much-more-than-just-someone-elses-computer/</link><guid isPermaLink="false">649999396b0b0b000133b9e3</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[DevOps]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Mon, 26 Jun 2023 14:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1501630834273-4b5604d2ee31?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNsb3Vkc3xlbnwwfHx8fDE2ODc3ODg3MTN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1501630834273-4b5604d2ee31?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGNsb3Vkc3xlbnwwfHx8fDE2ODc3ODg3MTN8MA&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Much More Than &quot;Just Someone Else&apos;s Computer&quot;"><p>I had started a new DevOps engineering job a while back and I found myself in the midst of a quirky and creative company. I remember a particularly interesting anecdote involving a steadfast engineer whose catchphrase is still etched in my mind &#x2013; &quot;Cloud technologies are just someone else&apos;s computer&quot;. He was so convicted about this concept, he even used it for part of our shared team password!</p><p>Though it made for a good laugh, I remember being gobsmacked by his oversimplified perspective on Cloud technologies. 
My experience with AWS has been a journey showing just how amazing, and certainly not &quot;just someone else&apos;s computer&quot;, it truly is.</p><h1 id="apis">APIs</h1><p>First things first, let&apos;s talk about the magic of AWS&apos; robust APIs. Here&apos;s a feature that brings the power of the entire AWS ecosystem right to your fingertips. Want to spin up an EC2 instance? Invoke an AWS Lambda function? Create a new S3 bucket? There&apos;s an API for that!</p><p>The API interactions with AWS services take &quot;infrastructure as code&quot; to a whole new level. You can automate, scale, and manage resources in a programmatic way that would make your head spin. I remember my awe the first time I used CloudFormation to automatically provision, configure, and manage hundreds of servers. It was like being handed the keys to a vast digital kingdom.</p><h1 id="scalability">Scalability</h1><p>Then, there&apos;s scalability &#x2013; AWS&apos;s trump card. Where traditional infrastructure might buckle under the pressure of unexpected traffic, AWS simply grins and scales up. Remember when your favorite website used to crash because too many users were on at the same time? With AWS, those days are gone. In AWS Land, you can autoscale your resources to meet demand and then scale them down when the rush is over.</p><h1 id="functions-as-a-service">Functions as a Service</h1><p>Now let&apos;s not forget the crown jewel of AWS: the incomparable AWS Lambda. The first time I used Lambda, I felt like I had discovered fire! It&#x2019;s an event-driven, serverless computing platform that executes your code based on triggers. Imagine writing code and not worrying about the underlying infrastructure, server provisioning, or scalability. That&apos;s AWS Lambda for you. Just tell it what to do, and it does it. 
I&apos;m telling you, it&apos;s like having a superpower!</p><h1 id="resiliency">Resiliency</h1><p>AWS&apos; resiliency is also not something you find in &quot;someone else&apos;s computer.&quot; With features like multi-Availability Zone (AZ) setups, the capacity to endure component failures is simply remarkable. Data replication across multiple data centers, automated backup and recovery mechanisms, fault tolerance &#x2013; the list goes on. It&#x2019;s like having an invisible safety net that protects your data and services from going offline.</p><h1 id="costs">Costs</h1><p>And lastly, cost efficiency. With AWS, you pay for what you use. No upfront costs, no need to purchase and maintain expensive hardware, and no sleepless nights worrying about underutilized resources. This can be tricky and can definitely end up costing more than a traditional datacenter, but with discipline and best practices, you can save in the long run.</p><p>In sum, AWS and Cloud technologies are like a sci-fi spaceship compared to &quot;someone else&apos;s computer.&quot; They&apos;re constantly evolving and improving, bringing more value to us, the fortunate explorers navigating the digital cosmos. 
AWS features like Lambda, EC2, S3, and others are the turbo boosters propelling us to exciting new frontiers.</p><p>So next time someone tries to reduce the cloud to a mere computer on someone else&#x2019;s desk, smile and remember: they just haven&#x2019;t seen the spaceship yet.</p>]]></content:encoded></item><item><title><![CDATA[Breaking Through the Barriers to Innovation]]></title><description><![CDATA[Progressing from Falling Behind to Innovating!]]></description><link>https://mkdavies.com/breaking-through-the-barriers-to-innovation/</link><guid isPermaLink="false">648c843c6b0b0b000133b9b7</guid><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Fri, 16 Jun 2023 14:00:00 GMT</pubDate><media:content url="https://mkdavies.com/content/images/2023/06/Four-states-engineering.png" medium="image"/><content:encoded><![CDATA[<img src="https://mkdavies.com/content/images/2023/06/Four-states-engineering.png" alt="Breaking Through the Barriers to Innovation"><p>In the dynamic field of software engineering, the constant flux of change represents both a challenge and an opportunity. A challenge because managing this change can be daunting and an opportunity because successful navigation through this change can lead to innovation. As identified by Will Larson in his book &quot;An Elegant Puzzle: Systems of Engineering Management,&quot; software engineering teams typically progress through four stages of change: Falling Behind, Treading Water, Repaying Debt, and Innovating. 
Here we explore how software engineering teams can traverse these stages, focusing on process and cultural changes required to reach the pinnacle of innovation.</p><h2 id="stage-1-falling-behind">Stage 1: Falling Behind</h2><p>Falling behind is a natural stage when a team is unable to keep pace with the demands and technological advancements.</p><h3 id="process-changes">Process Changes</h3><ul><li><strong>Prioritization:</strong> Teams should start by acknowledging the issue at hand and making a list of all tasks in the pipeline. This must be followed by a ruthless prioritization based on the impact on the business and customer value.</li><li><strong>Automation:</strong> Automation can drastically increase efficiency, freeing up time for more complex tasks. Automate repetitive and time-consuming tasks wherever possible.</li></ul><h3 id="cultural-changes">Cultural Changes</h3><ul><li><strong>Open communication:</strong> Encourage a culture where falling behind isn&apos;t seen as failure but as a phase of growth. Open dialogue about the challenges faced by the team can foster solutions from within.</li><li><strong>Learning environment:</strong> Adopt a learning-oriented culture, where keeping up with new technologies and tools becomes a norm.</li></ul><h2 id="stage-2-treading-water">Stage 2: Treading Water</h2><p>In the treading water stage, the team is able to maintain pace with incoming work but struggles to make significant strides forward.</p><h3 id="process-changes-1">Process Changes</h3><ul><li><strong>Agile practices:</strong> Adopt agile methodologies to increase adaptability and response to change. 
This includes iterative development, stand-up meetings, and a focus on delivering usable software frequently.</li><li><strong>Delegation:</strong> Managers should delegate responsibilities, freeing up time to focus on strategic planning and vision.</li></ul><h3 id="cultural-changes-1">Cultural Changes</h3><ul><li><strong>Feedback culture:</strong> Foster a culture that encourages feedback. Both top-down and bottom-up feedback mechanisms should be in place.</li><li><strong>Empowerment:</strong> Create a culture of empowerment where each team member feels responsible for the project&apos;s success and can make decisions.</li></ul><h2 id="stage-3-repaying-debt">Stage 3: Repaying Debt</h2><p>Repaying debt involves addressing the backlog of issues and technical debt that have accumulated over time.</p><h3 id="process-changes-2">Process Changes</h3><ul><li><strong>Regular audits:</strong> Implement regular system audits to identify technical debts and potential security vulnerabilities.</li><li><strong>Time allocation:</strong> Dedicate a specific amount of time in your sprint cycles to resolving these identified issues.</li></ul><h3 id="cultural-changes-2">Cultural Changes</h3><ul><li><strong>Acknowledge and tackle debt:</strong> Cultivate a culture that views technical debt as a part of the process, not a failure.</li><li><strong>Shared responsibility:</strong> Encourage everyone to take shared responsibility for the accrued debt.</li></ul><h2 id="stage-4-innovating">Stage 4: Innovating</h2><p>This is the ultimate stage where the team has time, resources, and energy to focus on creating novel solutions and approaches.</p><h3 id="process-changes-3">Process Changes</h3><ul><li><strong>Research and Development:</strong> Allocate resources and time for R&amp;D. 
Experiment with new technologies and approaches.</li><li><strong>Innovation Labs:</strong> Implement &apos;innovation labs&apos; where team members can work on passion projects or explore new ideas outside the daily routine.</li></ul><h3 id="cultural-changes-3">Cultural Changes</h3><ul><li><strong>Foster creativity:</strong> Cultivate a culture that encourages new ideas, and rewards creativity and risk-taking.</li><li><strong>Embrace failure:</strong> Create a safe space where it&apos;s okay to fail. Failures should be viewed as opportunities for learning and not as setbacks.</li></ul><p>Reaching the stage of innovation is an arduous journey, but it&apos;s not unachievable. With the right process and cultural changes, a software engineering team can steadily navigate from falling behind to innovating. It&apos;s important to remember that these stages are not linear and teams may oscillate between them. The journey of innovation isn&apos;t a destination but a continuous process of learning, adapting, and evolving.</p>]]></content:encoded></item><item><title><![CDATA[Implementing Progressive Delivery in a Microservices Environment]]></title><description><![CDATA[<p>In today&apos;s rapidly evolving software landscape, businesses need to deliver updates and new features to their customers quickly and efficiently. Progressive Delivery is an emerging DevOps practice that facilitates safe and controlled software releases. 
It allows organizations to test and roll out features incrementally, minimizing the risk of</p>]]></description><link>https://mkdavies.com/implementing-progressive-delivery-in-a-microservices-environment/</link><guid isPermaLink="false">645a6a376b0b0b000133b973</guid><category><![CDATA[CI/CD]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Wed, 10 May 2023 14:00:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1525011268546-bf3f9b007f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGFycm93c3xlbnwwfHx8fDE2ODM2NTAxMTk&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1525011268546-bf3f9b007f6a?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGFycm93c3xlbnwwfHx8fDE2ODM2NTAxMTk&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Implementing Progressive Delivery in a Microservices Environment"><p>In today&apos;s rapidly evolving software landscape, businesses need to deliver updates and new features to their customers quickly and efficiently. Progressive Delivery is an emerging DevOps practice that facilitates safe and controlled software releases. It allows organizations to test and roll out features incrementally, minimizing the risk of bugs and downtime.</p><h1 id="progressive-delivery-techniques">Progressive Delivery Techniques</h1><p>Progressive Delivery is built upon several techniques that help teams safely release and manage new software features. These include:</p><h2 id="feature-flags">Feature flags</h2><p>Also known as feature toggles, these allow teams to turn features on or off without deploying new code, providing granular control over feature rollout.</p><p>Recently I have used <a href="https://www.getunleash.io/?ref=mkdavies.com">Unleash</a> to manage feature flags. 
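Whatever the platform, the core mechanic is the same: a named flag checked at runtime, often with a percentage rollout. Here is a minimal sketch of that idea — the flag names and percentages are hypothetical, and real platforms like Unleash layer targeting rules, a UI, and SDKs on top of it.

```python
import hashlib

# Hypothetical flags: name -> rollout percentage (0-100).
FLAGS = {"new-checkout": 25}

def is_enabled(flag: str, user_id: str) -> bool:
    rollout = FLAGS.get(flag, 0)
    # Hash flag+user so every user lands in a stable bucket in [0, 100);
    # the experience stays consistent across requests mid-rollout.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout

# Turning a feature fully on (or off) is a data change, not a deploy:
FLAGS["new-checkout"] = 100
print(is_enabled("new-checkout", "user-42"))  # True
```

Because the gate is data rather than code, rollback is instant: set the percentage back to zero and the feature disappears without a deployment.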
When working in Java, <a href="https://cloud.spring.io/spring-cloud-config/?ref=mkdavies.com">Spring Cloud Config</a> was the feature flag platform of choice.</p><h2 id="canary-releases">Canary releases</h2><p>This approach involves releasing a new version of a service to a small subset of users, allowing teams to test and monitor the new version&apos;s performance before rolling it out to a larger audience.</p><p>For microservices running in Kubernetes, I have leveraged the routing capabilities of <a href="https://istio.io/?ref=mkdavies.com">Istio</a> to provide automated canary rollouts.</p><h2 id="blue-green-deployments">Blue-green deployments</h2><p>This technique involves running two identical production environments (blue and green), with one hosting the new version of the software and the other hosting the current stable version. Traffic is gradually shifted from the old environment to the new one, allowing for safe deployment and rollback if needed.</p><h2 id="ab-testing">A/B testing</h2><p>A popular technique for measuring the impact of new features on user experience, A/B testing involves presenting different variations of the same feature to different user groups and analyzing the results.</p><p>Again, <a href="https://istio.io/?ref=mkdavies.com">Istio</a> has been my go-to when trying to set up a successful A/B test.</p><h1 id="microservices-and-progressive-delivery">Microservices and Progressive Delivery</h1><p>While Progressive Delivery offers significant benefits, implementing it in a microservices environment can present unique challenges. Microservices architectures consist of many independent services, often developed and deployed by different teams. Coordinating the rollout of new features across multiple services and ensuring consistency can be complex.</p><h2 id="service-dependencies">Service Dependencies</h2><p>Microservices often have dependencies on other services in the system.
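One lightweight guard against a rollout silently breaking a dependent service is a consumer-driven contract check. A minimal sketch of the idea — the response fields and required types here are hypothetical examples:

```python
# Minimal sketch of a consumer-driven contract check. The provider
# response and the required fields are hypothetical examples.
CONSUMER_CONTRACT = {
    "id": int,        # consumers rely on a numeric order id
    "status": str,    # and a string status field
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    # Extra fields are fine (additive changes are safe); missing or
    # re-typed fields are breaking changes for the consumer.
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# A new provider version may add fields without breaking the contract:
new_version_response = {"id": 42, "status": "shipped", "eta_days": 2}
print(satisfies_contract(new_version_response, CONSUMER_CONTRACT))  # True
```

Running checks like this in the provider's pipeline catches breaking changes before a canary ever receives traffic.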
When implementing Progressive Delivery techniques, it is crucial to ensure that new features and updates do not introduce breaking changes or negatively impact the performance of dependent services. To tackle this challenge, teams can adopt practices like contract testing, which verifies that a service&apos;s behavior remains consistent with the expectations of its consumers. Another approach is to use versioning to maintain compatibility between services during updates.</p><h2 id="managing-rollouts-across-services">Managing Rollouts Across Services</h2><p>Coordinating the rollout of new features across multiple services can be complex, particularly when different teams are responsible for various services. To ensure consistency and smooth transitions, organizations can adopt a shared set of Progressive Delivery tools and practices, such as standardized feature flags and centralized management for canary releases. This enables teams to have a unified approach to deploying and managing new features in a microservices environment.</p><p>Additionally, providing API version consistency will allow for microservices to continue to support dependent software while building out new functionality along an updated API version.</p><h2 id="ensuring-data-consistency">Ensuring Data Consistency</h2><p>As microservices often rely on independent data stores, maintaining data consistency during Progressive Delivery rollouts can be challenging. Techniques like event-driven architectures and eventual consistency can help manage data across services. Additionally, feature flags can be used to control access to new data models or schemas, allowing teams to gradually transition to new data structures while ensuring consistency.</p><h2 id="monitoring-and-observability">Monitoring and Observability</h2><p>In a microservices environment, it can be difficult to monitor and observe the impact of new features on the entire system. 
To address this challenge, organizations should invest in comprehensive monitoring and observability tools that can aggregate data across all services. This enables teams to track the performance and behavior of individual services as well as the system as a whole, providing valuable insights for decision-making during Progressive Delivery rollouts.</p><h2 id="testing-and-validation">Testing and Validation</h2><p>Testing and validating new features and updates in a microservices environment can be complicated due to the distributed nature of the system. Adopting practices such as automated testing, integration testing, and end-to-end testing can help ensure the quality and stability of new features. Furthermore, monitoring user feedback and performance during canary releases or A/B tests can provide valuable insights for validating new features before wider deployment.</p><h1 id="case-studies">Case Studies</h1><p>Many organizations have successfully implemented Progressive Delivery in their microservices environments, including leading technology companies like Netflix, Google, and Facebook. By learning from their experiences, you can adopt best practices to improve your own software release process.</p><p>For Netflix, you can refer to their Technology Blog: <a href="https://netflixtechblog.com/?ref=mkdavies.com">https://netflixtechblog.com/</a>. 
Some of their articles related to Progressive Delivery techniques and their infrastructure are:</p><ul><li>The Netflix Simian Army: <a href="https://netflixtechblog.com/the-netflix-simian-army-16e57fbab116?ref=mkdavies.com">https://netflixtechblog.com/the-netflix-simian-army-16e57fbab116</a></li><li>Global Continuous Delivery with Spinnaker: <a href="https://netflixtechblog.com/global-continuous-delivery-with-spinnaker-df1541e31c59?ref=mkdavies.com">https://netflixtechblog.com/global-continuous-delivery-with-spinnaker-df1541e31c59</a></li></ul><p>For Google, you can refer to their Engineering Blog: <a href="https://developers.googleblog.com/?ref=mkdavies.com">https://developers.googleblog.com/</a>. Their blog covers a wide range of topics on software engineering, infrastructure, and practices.</p><p>For Facebook, you can refer to their Engineering Blog: <a href="https://engineering.fb.com/?ref=mkdavies.com">https://engineering.fb.com/</a>. Some articles related to Progressive Delivery techniques and their infrastructure are:</p><ul><li>Building and scaling the fast and reliable Facebook News Feed: <a href="https://engineering.fb.com/2021/09/21/core-data/news-feed/?ref=mkdavies.com">https://engineering.fb.com/2021/09/21/core-data/news-feed/</a></li><li>How Facebook does A/B testing: <a href="https://engineering.fb.com/2014/06/26/production-engineering/how-facebook-does-a-b-testing/?ref=mkdavies.com">https://engineering.fb.com/2014/06/26/production-engineering/how-facebook-does-a-b-testing/</a></li></ul><p>Progressive Delivery is an essential practice for modern software development, particularly in complex microservices environments. 
By adopting techniques like feature flags, canary releases, and blue-green deployments, you can reduce the risks associated with software releases and reap the benefits of safer, more controlled software deployments.</p>]]></content:encoded></item><item><title><![CDATA[What is eBPF and Why Should I Care?]]></title><description><![CDATA[eBPF is real, and it's spectacular!]]></description><link>https://mkdavies.com/what-is-ebpf-and-why-should-i-care/</link><guid isPermaLink="false">6458fe426b0b0b000133b930</guid><category><![CDATA[Networking]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Mon, 08 May 2023 13:00:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1517224187585-e3016a5fddc8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIzfHxmaWx0ZXJ8ZW58MHx8fHwxNjgzNTU0MTQ4&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1517224187585-e3016a5fddc8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDIzfHxmaWx0ZXJ8ZW58MHx8fHwxNjgzNTU0MTQ4&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="What is eBPF and Why Should I Care?"><p>As technology evolves and computer systems become more complex, so does the demand for highly efficient and effective software. One such technology that is gaining significant attention in recent years is eBPF (Extended Berkeley Packet Filter). But what exactly is eBPF, and why should you care?</p><h1 id="what-is-ebpf">What is eBPF?</h1><p>eBPF, or Extended Berkeley Packet Filter, is a highly versatile and programmable kernel-level technology that allows users to run custom, sandboxed programs within the Linux kernel without the need for kernel code modifications or recompilation. 
Originally designed for network packet filtering, eBPF has now evolved into a generic framework for various applications, such as networking, security, observability, and more.</p><p>At its core, eBPF is a virtual machine that runs custom bytecode programs. These programs are loaded into the kernel and attached to different hooks, such as system calls, network interfaces, or tracepoints, to intercept and process data at runtime. The eBPF programs are written in a restricted C subset, compiled to eBPF bytecode, and then verified and loaded by the kernel using an eBPF system call.</p><h1 id="why-am-i-hearing-about-ebpf-lately">Why Am I Hearing About eBPF Lately?</h1><p>There are several reasons why eBPF has been making headlines and gaining momentum in the technology world. Let&apos;s take a look at some of the key factors contributing to its growing popularity:</p><h2 id="cloud-native-and-containerization-trends">Cloud-native and containerization trends</h2><p>With the rise of cloud-native applications and containerization technologies like Kubernetes and Docker, there is an increasing need for efficient, flexible, and secure networking and observability solutions. eBPF&apos;s ability to instrument and analyze various aspects of the kernel makes it a perfect fit for these modern infrastructures.</p><h2 id="technological-advancements">Technological advancements</h2><p>As eBPF has evolved from its initial focus on network packet filtering, it has gained new capabilities and extensions, making it a more versatile and powerful framework. This evolution has attracted attention from developers and system administrators seeking innovative solutions for diverse use cases.</p><h2 id="industry-adoption">Industry adoption</h2><p>Many large organizations, including Google, Facebook, and Netflix, have adopted eBPF for various applications, such as load balancing, DDoS mitigation, and monitoring. 
Their success stories and contributions to the eBPF ecosystem have generated significant interest in the technology, inspiring others to explore its potential benefits.</p><h2 id="ebpf-based-projects-and-tools">eBPF-based projects and tools</h2><p>A growing number of open-source projects and tools are leveraging eBPF&apos;s capabilities, further increasing its visibility and ease of adoption. Some notable examples include Cilium (a networking and security project), BCC (BPF Compiler Collection), and Falco (a runtime security project). These projects and tools not only showcase eBPF&apos;s potential but also provide a starting point for developers interested in using the technology.</p><h2 id="community-support-and-events">Community support and events</h2><p>The eBPF community has been actively organizing conferences, meetups, and workshops to share knowledge, experiences, and best practices related to eBPF. Such events help raise awareness about the technology and foster collaboration among developers, researchers, and industry practitioners.</p><h1 id="why-should-i-care">Why Should I Care?</h1><h2 id="performance">Performance</h2><p>One of the most significant advantages of eBPF is its performance. eBPF programs run in the kernel space, allowing them to be executed with minimal overhead and latency. As a result, eBPF-based solutions can offer superior performance compared to user-space alternatives for various use cases, such as network packet filtering, monitoring, and tracing.</p><h2 id="flexibility">Flexibility</h2><p>eBPF&apos;s programmability enables developers to write custom programs tailored to their specific needs. 
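As a taste of that programmability, here is a classic one-liner in bpftrace (a high-level eBPF front end): it attaches a tiny program to the `openat` syscall tracepoint and prints which process opens which file. This is a sketch of the tool's shape rather than a production script; it needs root and a kernel with eBPF support.

```
# Attach to the sys_enter_openat tracepoint and print the calling
# process name and the file it is opening.
tracepoint:syscalls:sys_enter_openat
{
    printf("%s -> %s\n", comm, str(args->filename));
}
```

One line of DSL compiles to verified bytecode, loads into the kernel, and starts tracing live syscalls — no kernel module, no reboot.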
This flexibility allows for innovative and efficient solutions to complex problems that would otherwise require modifying the kernel source code, recompiling, or introducing new kernel modules.</p><h2 id="security">Security</h2><p>eBPF programs are sandboxed, which means they run in a controlled environment with limited access to kernel resources. Before loading an eBPF program, the kernel performs a series of checks to ensure the program&apos;s safety, such as verifying that it does not contain loops or access unauthorized memory regions. This approach significantly reduces the risk of introducing security vulnerabilities or system instability.</p><h2 id="observability">Observability</h2><p>eBPF&apos;s ability to instrument various parts of the kernel makes it an excellent choice for monitoring and debugging complex systems. Developers can use eBPF to gain deep insights into system behavior and performance without incurring a significant overhead or impacting system stability.</p><h2 id="ecosystem">Ecosystem</h2><p>The widespread industry support has resulted in a wealth of open-source tools and libraries that leverage eBPF&apos;s capabilities, making it easier for developers to adopt and benefit from the technology.</p><p>eBPF is an exciting and powerful technology that offers numerous benefits, such as improved performance, flexibility, security, and observability. Its growing popularity and adoption by major industry players have resulted in a vibrant ecosystem of tools and libraries, further enhancing its appeal. 
By understanding eBPF and its potential applications, developers and system administrators can leverage this technology to build more efficient, secure, and manageable systems.</p>]]></content:encoded></item><item><title><![CDATA[Demo Friday: Getting Started with Ansible]]></title><description><![CDATA[Using Ansible to install Docker and start a "Hello World" NGINX Container]]></description><link>https://mkdavies.com/getting-started-with-ansible/</link><guid isPermaLink="false">644c080b6b0b0b000133b84a</guid><category><![CDATA[Demo Friday]]></category><category><![CDATA[Configuration Management]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Fri, 05 May 2023 13:30:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1485827404703-89b55fcc595e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGF1dG9tYXRpb258ZW58MHx8fHwxNjgyNzA1MTg1&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1485827404703-89b55fcc595e?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGF1dG9tYXRpb258ZW58MHx8fHwxNjgyNzA1MTg1&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Demo Friday: Getting Started with Ansible"><p>Companion repo for this demo can be found <a href="https://github.com/LoganAvatar/ansible_demo?ref=mkdavies.com">here</a>.</p><p><a href="https://www.ansible.com/?ref=mkdavies.com">Ansible</a> is an open-source automation tool that helps you configure, manage, and deploy software applications. It is designed to be simple, efficient, and easy to understand, using a declarative language called YAML. 
In this demo, we will walk through the process of installing Ansible, initializing a new project, and writing a playbook to install Docker and start a &quot;Hello World&quot; NGINX container on a new server.</p><h1 id="prerequisites">Prerequisites</h1><h2 id="virtual-machine-prep">Virtual Machine Prep</h2><p>For this demo, I am using a new Debian 11.7.0 VM. This was provisioned expressly for demo purposes with an SSH server installed. If you want to run through this demo a few times, make sure you snapshot your VM for an easy way to reset it. </p><p>Make sure you get the IP address of the VM so you can SSH into the machine; you will need it later. For Debian, I used <code>ip addr show</code>. </p><p>Also, make sure your user is in the <code>sudoers</code> file. I add the following line to allow for passwordless escalation. Remember, this is a demo, not a production system.</p><pre><code class="language-/etc/sudoers">mike  ALL=(ALL) NOPASSWD:ALL</code></pre><h2 id="installing-ansible">Installing Ansible</h2><p>To use Ansible, you need to install it on your control node, which is the machine where you will run your playbooks. This demo assumes you are using either Linux or macOS as your control node. Follow the official docs to <a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html?ref=mkdavies.com">install Ansible on your control node</a>.</p><p>When you have finished, run this command:</p><pre><code class="language-bash">ansible --version</code></pre><p>The result should be something like:</p><pre><code class="language-output">ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = [&apos;/home/mike/.ansible/plugins/modules&apos;, &apos;/usr/share/ansible/plugins/modules&apos;]
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
python version = 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0]</code></pre><p>In order to use our SSH password, we will also want to install <code>sshpass</code> on the control node.</p><h1 id="initializing-a-new-ansible-project-with-ansible-galaxy">Initializing a New Ansible Project with Ansible Galaxy</h1><p><a href="https://galaxy.ansible.com/?ref=mkdavies.com">Ansible Galaxy</a> is a hub for sharing Ansible roles and collections. It also provides a command-line tool that helps you create, manage, and share your roles. In this step, we will use the <code>ansible-galaxy</code> command to initialize a new role.</p><p>First, create a new directory for your Ansible project:</p><pre><code class="language-bash">mkdir my_ansible_project
cd my_ansible_project</code></pre><p>Initialize a new role named <code>docker_nginx</code>. This command will create a <code>docker_nginx</code> directory containing the skeleton structure of an Ansible role.</p><pre><code class="language-bash">ansible-galaxy init docker_nginx
</code></pre><p>Create a <code>hosts</code> file to store your inventory of target servers and add your target server&apos;s IP address or hostname under a group named &apos;[servers]&apos;.</p><pre><code class="language-hosts">[servers]
your_server_ip
</code></pre><h1 id="writing-the-ansible-role-tasks-to-install-docker-and-start-an-nginx-container">Writing the Ansible Role Tasks to Install Docker and Start an NGINX Container</h1><p>Open the <code>tasks/main.yml</code> file located in the <code>docker_nginx</code> directory and replace the default content with the following. As I am using Debian, tasks will be specific to that target. If you are using a distribution with a different package manager, such as <code>yum</code>, you will want to look up the syntax for those tasks as well as other items in this file.</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">---
# Everything in this file is specific to a Debian target.
- name: Install required packages
  apt:
    name: [&apos;ca-certificates&apos;, &apos;curl&apos;, &apos;gnupg&apos;]
    state: present

- name: Add Docker GPG key
  apt_key:
    url: https://download.docker.com/linux/debian/gpg
    state: present

- name: Add Docker repository
  apt_repository:
    repo: &quot;deb https://download.docker.com/linux/debian bullseye stable&quot;
    state: present

- name: Install Docker
  apt:
    name: docker-ce
    state: present

- name: Ensure Docker service is enabled and running
  systemd:
    name: docker
    state: started
    enabled: yes

- name: Add docker group
  group:
    name: docker
    state: present

- name: Add user to docker group
  user:
    name: &quot;{{ ansible_user }}&quot;
    groups: docker
    append: yes

# Note: a shell task like this is not idempotent; re-running the play will
# fail because the named container already exists. Fine for a demo.
- name: Start an NGINX container
  shell: |
    docker run -d --name nginx_hello_world -p 80:80 nginx:latest</code></pre><figcaption>main.yml file</figcaption></figure><h1 id="creating-a-playbook-to-include-the-role">Creating a Playbook to Include the Role</h1><p>Navigate back to the project root directory and create a new file named <code>docker_install.yml</code>. Write the following playbook:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">- name: Install Docker and start NGINX container
  hosts: servers 
  become: yes 
  roles:
    - docker_nginx</code></pre><figcaption>docker_install.yml</figcaption></figure><h1 id="running-the-playbook">Running the Playbook</h1><p>Now that you have created your playbook and role, it&apos;s time to execute the playbook. This command tells Ansible to run the <code>docker_install.yml</code> playbook using the inventory file <code>hosts</code>.</p><pre><code class="language-bash">ansible-playbook -u mike -k -b -i hosts docker_install.yml</code></pre><p>The <code>-u</code> sets the ssh user, <code>-k</code> sets up the ssh password prompt, <code>-b</code> allows for privilege escalation to <code>root</code>, and <code>-i</code> sets the inventory file.</p><p>This was the output, cleaned up to be readable:</p><pre><code class="language-output">$ ansible-playbook -u mike -k -b -i hosts docker_install.yml
SSH password: 

PLAY [Install Docker and start NGINX container] **********************************************************************

TASK [Gathering Facts] **********************************************************************
[WARNING]: Platform linux on host 192.168.0.250 is using the discovered Python interpreter at /usr/bin/python3, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [192.168.0.250]

TASK [docker_nginx : Install required packages] **********************************************************************
changed: [192.168.0.250]

TASK [docker_nginx : Add Docker GPG key] **********************************************************************
changed: [192.168.0.250]

TASK [docker_nginx : Add Docker repository] **********************************************************************
changed: [192.168.0.250]

TASK [docker_nginx : Install Docker] **********************************************************************
changed: [192.168.0.250]

TASK [docker_nginx : Ensure Docker service is enabled and running] **********************************************************************
ok: [192.168.0.250]

TASK [docker_nginx : Add docker group] **********************************************************************
ok: [192.168.0.250]

TASK [docker_nginx : Add user to docker group] **********************************************************************
changed: [192.168.0.250]

TASK [docker_nginx : Start an NGINX container] **********************************************************************
changed: [192.168.0.250]

PLAY RECAP **********************************************************************
192.168.0.250              : ok=9    changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  </code></pre><h1 id="verify-the-nginx-container-deployment">Verify the NGINX Container Deployment</h1><p>To ensure that the NGINX container is running successfully, access the server using its IP address over http (port 80) in your web browser. You should see the default NGINX welcome page.</p><figure class="kg-card kg-image-card"><img src="https://mkdavies.com/content/images/2023/05/image.png" class="kg-image" alt="Demo Friday: Getting Started with Ansible" loading="lazy" width="859" height="454" srcset="https://mkdavies.com/content/images/size/w600/2023/05/image.png 600w, https://mkdavies.com/content/images/2023/05/image.png 859w" sizes="(min-width: 720px) 720px"></figure><p>If you want to see this from the Docker perspective, you can SSH into the server and check the container status by running <code>docker ps</code>. You should see the <code>nginx_hello_world</code> container listed in the output, with the <code>nginx</code> image and port <code>80</code> mapped.</p><h1 id="wrapping-up">Wrapping Up</h1><p>Well, we&apos;ve covered how to install Ansible, create a new project using Ansible Galaxy, and write a playbook and role to install Docker and run an NGINX &quot;Hello World&quot; container on a new server. Using Ansible Galaxy and roles helps you organize your project more efficiently and enables better reusability of your automation code. 
With this foundation, you can continue to explore more advanced Ansible features and create playbooks to automate a wide range of tasks, making your server management and application deployment more efficient and reliable.</p>]]></content:encoded></item><item><title><![CDATA[Comparing Pros and Cons of CI/CD Tools for Optimal Workflow]]></title><description><![CDATA[Some of my favorite projects around continuous integration and delivery.]]></description><link>https://mkdavies.com/comparing-pros-and-cons-of-ci-cd-tools-for-optimal-workflow/</link><guid isPermaLink="false">644bfda26b0b0b000133b7a1</guid><category><![CDATA[CI/CD]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Wed, 03 May 2023 13:30:08 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1491895200222-0fc4a4c35e18?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fGludGVncmF0aW9ufGVufDB8fHx8MTY4MjcwMjk3Mw&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1491895200222-0fc4a4c35e18?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fGludGVncmF0aW9ufGVufDB8fHx8MTY4MjcwMjk3Mw&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Comparing Pros and Cons of CI/CD Tools for Optimal Workflow"><p>Continuous Integration (CI) and Continuous Deployment (CD) are essential components of the DevOps methodology, aiming to provide fast and efficient software development and deployment. Several CI/CD tools have emerged over the years, each with its unique features, benefits, and drawbacks. 
Let&apos;s look at some of the most popular CI/CD tools on the market.</p><h1 id="jenkins">Jenkins</h1><p><strong>Website</strong>: <a href="https://www.jenkins.io/?ref=mkdavies.com">https://www.jenkins.io/</a></p><p>Jenkins, the friendly butler of the CI/CD world, is an open-source, versatile, and extensible tool that has earned its place as a cornerstone in the DevOps community. With its endless array of plugins and broad support for various platforms and languages, Jenkins caters to developers&apos; diverse needs like a true concierge. Although it may take some time to tame this powerful butler and navigate its steeper learning curve, Jenkins continues to be a popular choice for software development teams, proving that an old faithful can still hold its own in the bustling landscape of CI/CD tools.</p><p><strong>Pros</strong>:</p><ul><li>Open-source: Jenkins is an open-source tool, making it free to use and highly customizable.</li><li>Extensive plugin ecosystem: The Jenkins community has developed a vast array of plugins, enabling users to extend its functionality for different use cases.</li><li>Broad support: Jenkins supports a wide range of platforms, languages, and tools, making it a versatile option for different development environments.</li></ul><p><strong>Cons</strong>:</p><ul><li>Steeper learning curve: Jenkins can be difficult to set up and configure, especially for users who are new to CI/CD.</li><li>Slow performance: Compared to other CI/CD tools, Jenkins can be slower in terms of build and deployment times.</li><li>High maintenance: Due to its open-source nature, Jenkins requires more maintenance and updates compared to some other tools.</li></ul><h1 id="gitlab-cicd">GitLab CI/CD</h1><p><strong>Website</strong>: <a href="https://docs.gitlab.com/ee/ci/?ref=mkdavies.com">https://docs.gitlab.com/ee/ci/</a></p><p>Meet GitLab CI/CD, designed to provide seamless integration and smooth workflows within the GitLab ecosystem. 
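Its pipelines are defined in a <code>.gitlab-ci.yml</code> file at the root of the repository; a minimal sketch might look like this (the stages, image, and scripts are illustrative, not from a real project):

```yaml
# .gitlab-ci.yml -- hypothetical project; stage contents are illustrative
stages:
  - test
  - deploy

test:
  stage: test
  image: node:18        # each job runs inside a container image
  script:
    - npm ci
    - npm test

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - main              # deploy only from the main branch
```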
This all-in-one CI/CD tool tackles everything from version control to continuous deployment with the finesse of a master chef, slicing through YAML configuration files like butter. Although its versatility may be confined to the borders of GitLab, its scalable, efficient, and easy-to-configure nature makes it a popular choice for those who crave a tightly integrated and streamlined experience in their software development journey.</p><p><strong>Pros</strong>:</p><ul><li>Integrated solution: GitLab CI/CD is a part of the larger GitLab ecosystem, allowing for seamless integration with GitLab repositories and issue trackers.</li><li>Easy configuration: GitLab CI/CD uses a YAML file for configuration, making it simple to set up and maintain.</li><li>Scalable: GitLab CI/CD can scale horizontally using GitLab Runners, allowing users to run multiple builds and deployments simultaneously.</li></ul><p><strong>Cons</strong>:</p><ul><li>Limited to GitLab: GitLab CI/CD is tightly integrated with GitLab, making it less suitable for users who prefer other code repositories, such as GitHub or Bitbucket.</li><li>Less mature ecosystem: Compared to Jenkins, GitLab CI/CD has fewer plugins and integrations available.</li></ul><h1 id="travis-ci">Travis CI</h1><p><strong>Website</strong>: <a href="https://www.travis-ci.com/?ref=mkdavies.com">https://www.travis-ci.com/</a></p><p>Travis CI, the friendly CI/CD sidekick for open-source enthusiasts, made its mark as the go-to companion for GitHub users. With its effortless setup and streamlined YAML configuration, Travis CI ensures your build and deployment pipeline runs smoother than a well-oiled machine. As a hosted solution, this trusty companion takes care of the infrastructure, leaving you to focus on your code. 
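That streamlined configuration is a single <code>.travis.yml</code> file at the repository root; a minimal sketch (the language, version, and scripts are illustrative):

```yaml
# .travis.yml -- illustrative build definition for a hypothetical project
language: node_js
node_js:
  - "18"
install:
  - npm ci    # reproducible dependency install
script:
  - npm test  # the build fails if this command exits non-zero
```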
Though Travis CI&apos;s heart lies with open-source projects, it&apos;s also prepared to don a cape for commercial endeavors, making it a versatile sidekick for developers on both sides of the fence.</p><p><strong>Pros</strong>:</p><ul><li>GitHub integration: Travis CI offers excellent integration with GitHub, making it a popular choice for GitHub users.</li><li>Easy setup: Travis CI is easy to set up and configure, with a simple YAML file for build and deployment configuration.</li><li>Hosted solution: Travis CI is a hosted solution, meaning users don&apos;t need to manage their own infrastructure.</li></ul><p><strong>Cons</strong>:</p><ul><li>Limited support: Travis CI primarily supports open-source projects, and commercial projects may require a paid subscription.</li><li>Less flexible: Compared to Jenkins, Travis CI offers fewer customization options and plugins.</li><li>Reliance on third-party services: Travis CI relies on external services for some features, such as artifact storage and deployment, which could lead to vendor lock-in.</li></ul><h1 id="circleci">CircleCI</h1><p><strong>Website</strong>: <a href="https://circleci.com/?ref=mkdavies.com">https://circleci.com/</a></p><p>CircleCI, the speedster of the CI/CD universe, is known for its lightning-fast performance and nimble parallel execution of tasks. With a penchant for agility, this cloud-based superhero integrates seamlessly with various platforms, like GitHub and Bitbucket, to deliver streamlined and efficient pipelines. 
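Those pipelines are defined in <code>.circleci/config.yml</code>; a minimal sketch, including the parallelism CircleCI is known for (job names, image, and commands are illustrative):

```yaml
# .circleci/config.yml -- illustrative configuration
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:18.0  # CircleCI convenience image
    parallelism: 4             # run this job across four containers (pair with test splitting)
    steps:
      - checkout
      - run: npm ci
      - run: npm test
workflows:
  build-and-test:
    jobs:
      - build
```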
While its powers may come at a higher cost for larger teams and projects, the ability to save precious time and resources in the development process makes it a valuable ally in the ongoing quest to conquer software development challenges.</p><p><strong>Pros</strong>:</p><ul><li>Fast performance: CircleCI offers faster build times and reduced latency compared to many other CI/CD tools.</li><li>Parallelization: CircleCI supports parallel execution of tasks, which can help reduce build times.</li><li>Strong integrations: CircleCI offers excellent integration with various platforms, such as GitHub, Bitbucket, and Docker.</li></ul><p><strong>Cons</strong>:</p><ul><li>Cost: CircleCI can be more expensive than some other CI/CD tools, particularly for larger teams and projects.</li><li>Limited support for self-hosted instances: While CircleCI does offer a self-hosted option, it is primarily geared towards cloud-based usage.</li></ul><h1 id="github-actions">GitHub Actions</h1><p><strong>Website</strong>: <a href="https://github.com/actions?ref=mkdavies.com">https://github.com/actions</a></p><p>GitHub Actions, the native maestro of the GitHub platform, orchestrates the perfect CI/CD symphony right within the repository it calls home. With a baton made of YAML, it effortlessly conducts build and deployment workflows while harmonizing with the growing marketplace of integrations and actions. Generously offering its services for public repositories, GitHub Actions shines as a beacon for open-source projects. 
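Workflows are YAML files kept under <code>.github/workflows/</code> in the repository; a minimal sketch (the triggers, runner, and steps are illustrative):

```yaml
# .github/workflows/ci.yml -- illustrative workflow for a hypothetical project
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3     # actions from the marketplace
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      - run: npm test
```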
Though it may be tightly bound to the GitHub stage, its seamless integration and versatile performance make it a favorite among developers seeking a well-rounded CI/CD experience.</p><p><strong>Pros</strong>:</p><ul><li>Native GitHub integration: GitHub Actions is built directly into GitHub, providing seamless integration with repositories, issue trackers, and pull requests.</li><li>Easy configuration: Like other CI/CD tools, GitHub Actions uses a YAML file for workflow configuration, making it straightforward to set up and manage.</li><li>Marketplace: GitHub offers a marketplace with a growing number of actions and integrations, allowing users to extend the functionality of their workflows.</li><li>Free tier for public repositories: GitHub Actions offers a generous free tier for public repositories, making it an attractive option for open-source projects.</li></ul><p><strong>Cons</strong>:</p><ul><li>Limited to GitHub: GitHub Actions is tied to the GitHub platform, which may not be suitable for users who prefer other code repositories like GitLab or Bitbucket.</li><li>Cost for private repositories: While the free tier is generous for public repositories, the cost can add up quickly for private repositories and large teams.</li><li>Less mature ecosystem: Although the GitHub Actions marketplace is growing, it is not as mature as the Jenkins plugin ecosystem.</li></ul><h1 id="drone">Drone</h1><p><strong>Website</strong>: <a href="https://www.drone.io/?ref=mkdavies.com">https://www.drone.io/</a></p><p>Drone, the container-powered CI/CD trailblazer, confidently cruises through the DevOps landscape, offering a lightweight and efficient pipeline experience. Built on a foundation of containers, Drone ensures consistency in its environment, keeping developers&apos; minds at ease. With an affinity for YAML configuration and a variety of integrations, this nimble tool boldly connects with popular platforms like GitHub, GitLab, and Bitbucket. 
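A Drone pipeline is defined in a <code>.drone.yml</code> file, with every step running in its own container; a minimal sketch (the step name, image, and commands are illustrative):

```yaml
# .drone.yml -- illustrative pipeline; each step runs in its own container
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: node:18
    commands:
      - npm ci
      - npm test
```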
Though its community may be smaller, Drone&apos;s commitment to flexibility and self-hosted solutions makes it a worthy contender in the ever-evolving CI/CD arena.</p><p><strong>Pros</strong>:</p><ul><li>Container-based: Drone is built on container technology, which allows for easy and consistent environment setup, reducing the risk of environment-related build failures.</li><li>Simple configuration: Drone uses a YAML file for pipeline configuration, making it easy to set up and manage.</li><li>Extensible: Drone supports a plugin system, allowing users to extend its functionality for various use cases.</li><li>Integration with multiple platforms: Drone offers integration with popular version control systems like GitHub, GitLab, and Bitbucket.</li></ul><p><strong>Cons</strong>:</p><ul><li>Smaller community: Compared to other CI/CD tools, Drone has a smaller community and a less extensive plugin ecosystem.</li><li>Self-hosted: While some users may prefer a self-hosted CI/CD solution, others may find the setup and maintenance of Drone to be an additional overhead.</li><li>Limited parallelization: Drone has limited support for parallel execution of tasks, which could lead to slower build times compared to tools with more robust parallelization support.</li></ul><p>The choice of a CI/CD tool largely depends on your project&apos;s specific needs, budget, and development environment. 
By carefully evaluating the pros and cons of each tool, you can make an informed decision that best suits your project and workflow.</p>]]></content:encoded></item><item><title><![CDATA[Tracing the Evolution of Containerization Technologies]]></title><description><![CDATA[From Chroot to Docker]]></description><link>https://mkdavies.com/tracing-the-evolution-of-containerization-technologies/</link><guid isPermaLink="false">644be1bc6b0b0b000133b737</guid><category><![CDATA[Docker]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Mon, 01 May 2023 13:30:17 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1601897690942-bcacbad33e55?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fHNoaXBwaW5nJTIwY29udGFpbmVyfGVufDB8fHx8MTY4MjY5NDg5MQ&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1601897690942-bcacbad33e55?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDJ8fHNoaXBwaW5nJTIwY29udGFpbmVyfGVufDB8fHx8MTY4MjY5NDg5MQ&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Tracing the Evolution of Containerization Technologies"><p>The age of containerization has revolutionized software development, with Docker emerging as the leading technology in this domain. However, the road to Docker was paved by a series of innovations that laid the foundation for modern containerization practices. Let&apos;s trace the history of technologies that contributed to the birth and rise of Docker.</p><h1 id="chroot-the-grandfather-of-containerization">Chroot: The Grandfather of Containerization</h1><p>The story of containerization begins with chroot, a Unix system call first introduced in 1979. 
Chroot is often considered the grandfather of containerization, as it introduced the concept of creating isolated environments on Unix-based systems.</p><p>Bill Joy, one of the key developers of BSD Unix, first implemented chroot. The original purpose was to create a test environment for building and installing software packages without interfering with the host system. By changing the apparent root directory of a running process and its children, chroot provided a simple form of isolation, preventing processes from accessing files and directories outside the specified environment. However, it had limitations. For instance, chroot did not isolate processes at the kernel level or restrict resource usage, such as CPU and memory. Despite these limitations, chroot laid the groundwork for more advanced containerization technologies to come.</p><h1 id="freebsd-jails">FreeBSD Jails</h1><p>In 2000, FreeBSD introduced Jails, an extension of the chroot concept. Poul-Henning Kamp, a Danish computer scientist, developed the Jails feature to address some of the limitations of chroot and further enhance security and isolation.</p><p>FreeBSD Jails extended the idea of isolated environments by creating virtual environments that function almost like independent systems within a single host. Each jail has its own file system, IP address, hostname, users, and process space. Jails can run multiple applications with different dependencies, configurations, and network settings, all within isolated environments on the same host.</p><p>This led to <strong>improved security</strong>, <strong>resource limits</strong>, <strong>simplified system administration</strong>, <strong>scalability</strong>, and <strong>portability</strong>. Despite these benefits, FreeBSD Jails were specific to FreeBSD-based systems and were not directly compatible with other Unix-based systems like Linux. 
The concept of FreeBSD Jails marked a significant milestone in the evolution of containerization, demonstrating the value of isolating applications and processes within independent environments.</p><h1 id="solaris-zones-and-containers">Solaris Zones and Containers</h1><p>Sun Microsystems introduced Solaris Zones (also known as Solaris Containers) in 2004 as a major feature in the Solaris 10 operating system. The goal was to address the need for better process isolation, resource management, and system efficiency.</p><p>Solaris Zones built upon the concepts introduced by earlier containerization technologies by providing even stronger isolation and more comprehensive resource management capabilities. This was driven by the increasing complexity of applications and their dependencies, as well as the growing concerns about system security and resource efficiency.</p><p>By creating isolated virtual environments within a single operating system, Solaris Zones enabled the running of multiple applications with different dependencies and configurations without interfering with each other. Each zone functioned like an independent instance of the Solaris operating system, sharing the same kernel but with its own file system, network configuration, and process space.</p><p>Solaris Zones demonstrated the value of lightweight virtualization and strong isolation in managing complex applications and enhancing system security which influenced the development of later containerization technologies.</p><h1 id="linux-containers-lxc">Linux Containers (LXC)</h1><p>In 2008, LXC emerged as a vital step towards the modern concept of containerization. The primary goal behind LXC was to provide a lightweight, efficient alternative to full virtualization, enabling multiple isolated Linux environments to run on a single host.</p><p>LXC leverages two crucial Linux kernel features: <strong>cgroups </strong>(control groups) and <strong>namespaces</strong>. 
Cgroups, introduced by Google in 2006, allow the management and allocation of system resources, such as CPU, memory, and I/O, to specific processes and their descendants. Namespaces, on the other hand, provide isolation by partitioning system resources like process IDs, network interfaces, and file systems, ensuring that processes in different namespaces cannot directly interact with each other.</p><p>Although LXC represented a significant step forward in containerization technology, it had some limitations. For example, LXC focused primarily on system-level containerization and did not offer built-in support for application-level containerization, which simplifies application deployment and dependency management. Despite this, LXC played a crucial role in the evolution of containerization technologies. Its incorporation of cgroups and namespaces set the stage for the more advanced containerization solutions that now dominate the landscape.</p><h1 id="warden">Warden</h1><p>Developed by Cloud Foundry in 2011, Warden was a container management system that built upon the principles of LXC while focusing on a more user-friendly experience. Cloud Foundry, an open-source Platform as a Service (PaaS) project, created it to address the needs of its multi-tenant platform, which required efficient resource isolation, management, and monitoring.</p><p>Warden introduced several key features and improvements over LXC, making it an important milestone in the evolution of containerization technologies:</p><ol><li><strong>Pluggable Backends</strong>: Warden was designed to be extensible and support different containerization technologies. 
While it initially relied on LXC, Warden&apos;s architecture allowed for the easy integration of other backends, such as Docker, which eventually replaced LXC as the default backend for Cloud Foundry.</li><li><strong>RESTful API</strong>: Warden introduced a RESTful API for managing containers, enabling developers and administrators to create, destroy, and manage containers programmatically. This API provided a higher level of abstraction and made it easier to work with containers in various languages and environments.</li><li><strong>Resource Limiting and Monitoring</strong>: Warden expanded upon LXC&apos;s resource management capabilities by allowing administrators to set resource limits on CPU, memory, and disk usage for containers. Additionally, Warden provided built-in monitoring capabilities, enabling administrators to track container resource usage and performance over time.</li><li><strong>Container Snapshotting</strong>: Warden introduced container snapshotting, which allowed administrators to create snapshots of running containers and restore them at a later time. This feature enabled easy backup and recovery of container states and simplified application migration between hosts.</li><li><strong>Multi-Tenant Isolation</strong>: Warden was specifically designed to support multi-tenant environments, providing strong isolation between containers and ensuring that applications in one container could not access or impact resources in another.</li></ol><p>While Warden was a significant step forward in container management and provided essential features for Cloud Foundry, it did not gain widespread adoption outside of the Cloud Foundry ecosystem. 
Looking back, we can of course see how Docker built upon Warden&apos;s foundation and addressed some of its limitations, quickly becoming the dominant containerization technology.</p><h1 id="googles-contributions-cgroups-and-lmctfy">Google&apos;s Contributions: cgroups and lmctfy</h1><p>We are getting closer to the last pieces of the puzzle with Google&apos;s contributions.</p><p>Google played a critical role in the advancement of containerization technologies. In 2006, as mentioned with LXC, they introduced cgroups (control groups), a kernel feature that enabled developers to allocate and limit resources to processes, such as CPU and memory. Cgroups laid the groundwork for effective container resource management.</p><p>In 2013, Google released lmctfy (Let Me Contain That For You), a container management system that built upon cgroups and namespaces. Lmctfy focused on providing an efficient and reliable solution for managing containers at scale, significantly influencing the development of Docker.</p><h1 id="the-birth-of-docker">The Birth of Docker</h1><p>Wow. We are here.</p><p>In 2013, Solomon Hykes and the team at dotCloud developed Docker as an open-source project. Docker built upon the foundation laid by previous containerization technologies and introduced features such as a user-friendly CLI, a portable and efficient container format, and the concept of Docker images and registries. These innovations made Docker an accessible and powerful tool for developers, leading to its widespread adoption and dominance in the containerization landscape.</p><p>In 2014, Docker Inc. officially pivoted from a PaaS provider to focus solely on the development of Docker and related technologies.</p><h1 id="today">Today</h1><p>The evolution of containerization technologies, from chroot to Docker, represents a series of innovations aimed at providing developers with efficient, lightweight, and isolated environments for running applications. 
Docker, with its user-friendly interface, efficient container management, and extensive ecosystem, has emerged as the leading technology in this domain. However, it&apos;s crucial to remember the contributions and advancements made by previous technologies that paved the way for Docker&apos;s success. This journey highlights the importance of continuous innovation, as each technology built upon its predecessor&apos;s strengths and addressed its limitations.</p><p>Today, the containerization landscape is evolving rapidly, with projects such as Kubernetes, OpenShift, and Rancher further building on the capabilities introduced by Docker. As the technology continues to advance, it is essential to acknowledge and appreciate the rich history of containerization and the many innovations that have shaped it, allowing us to appreciate the challenges that have been overcome and anticipate the exciting new developments on the horizon.</p>]]></content:encoded></item><item><title><![CDATA[Observability Tools in Software Engineering: Maximizing Value and Integration]]></title><description><![CDATA[Strategies for Efficiently Leveraging Observability Tools in Modern Software Development]]></description><link>https://mkdavies.com/observability-tools-in-software-engineering-maximizing-value-and-integration/</link><guid isPermaLink="false">644bd6216b0b0b000133b66a</guid><category><![CDATA[Observability]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Fri, 28 Apr 2023 13:30:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1551288049-bebda4e38f71?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fG1ldHJpY3N8ZW58MHx8fHwxNjgyNjkyMjA0&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img 
src="https://images.unsplash.com/photo-1551288049-bebda4e38f71?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDF8fG1ldHJpY3N8ZW58MHx8fHwxNjgyNjkyMjA0&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Observability Tools in Software Engineering: Maximizing Value and Integration"><p>Picture this: It&apos;s a dark and stormy night (obviously, because when else do things break?), and your production system starts acting like a toddler on a sugar rush. How do you figure out what&apos;s going on? Observability tools! These help you keep an eye on your software&apos;s health and performance, just like an overly enthusiastic helicopter parent.</p><p>As system complexity grows, so does the need for effective observability tools to monitor and analyze applications in real-time. Here are some key strategies for engineers to integrate and leverage these tools effectively.</p><h1 id="selecting-the-right-tools">Selecting the Right Tools</h1><p>The first step in maximizing the value of observability tools is to choose the right ones for your specific needs. With a plethora of options available, it&apos;s essential to carefully evaluate each tool&apos;s features, scalability, and ease of integration with your existing technology stack. Focus on tools that provide comprehensive insights into the three pillars of observability: <strong>logs</strong>, <strong>metrics</strong>, and <strong>traces</strong>. By selecting tools that cover these aspects, you can gain a holistic understanding of your system&apos;s performance and health.</p><h2 id="identify-your-goals-and-requirements">Identify your goals and requirements</h2><p>Before diving into the selection process, take a moment to outline your specific goals and requirements for observability. Consider factors such as the size and complexity of your system, the types of issues you most frequently encounter, and the metrics you need to track. 
Having a clear understanding of your objectives will help you narrow down the list of potential tools.</p><h2 id="evaluate-features-and-capabilities">Evaluate features and capabilities</h2><p>As you explore different observability tools, compare their features and capabilities to your requirements. Some crucial features to consider include:</p><ul><li><strong>Data ingestion capabilities</strong>: Ensure the tool can ingest data from various sources like log files, system metrics, and distributed tracing.</li><li><strong>Data visualization and analysis</strong>: Look for tools with robust visualization and analysis features that can help you quickly identify patterns and anomalies.</li><li><strong>Alerting and notification</strong>: Effective alerting and notification systems are critical to promptly identifying and addressing issues in your system.</li><li><strong>Integration with other tools</strong>: Assess how easily the observability tool can integrate with your existing toolset, including issue trackers, communication platforms, and other monitoring tools.</li></ul><h2 id="scalability-and-performance">Scalability and performance</h2><p>As your software system grows and evolves, you need observability tools that can scale with it. Evaluate the tool&apos;s ability to handle increasing volumes of data and how its performance is impacted by this growth. Additionally, check if the tool offers features like data retention policies and configurable sampling rates to help you manage the data effectively.</p><h2 id="ease-of-use-and-customization">Ease of use and customization</h2><p>A user-friendly and customizable observability tool can significantly reduce the learning curve and enable your team to quickly adopt it. Examine the tool&apos;s user interface, documentation, and available support resources to assess its ease of use. 
Moreover, explore customization options such as custom dashboards, visualizations, and alerts to ensure the tool can be tailored to your specific needs.</p><h2 id="vendor-support-and-community">Vendor support and community</h2><p>A responsive and knowledgeable vendor can make all the difference in your observability journey. Evaluate the level of support provided by the vendor, including documentation, customer service, and response times. Additionally, consider the tool&apos;s user community, as an active and engaged community can provide valuable insights, assistance, and resources.</p><h2 id="pricing-and-licensing">Pricing and licensing</h2><p>Lastly, take into account the pricing and licensing model of the observability tool. Determine if it fits within your budget constraints and provides an appropriate return on investment. Keep in mind factors such as the number of users, data volume, and any additional costs for advanced features or support.</p><h1 id="streamlining-data-collection-and-analysis">Streamlining Data Collection and Analysis</h1><p>Efficient data collection and analysis are at the heart of effective observability in software engineering. Streamlining these processes not only saves time and resources but also helps in quickly identifying and resolving issues in your system. By implementing these best practices and strategies, you will enable your team to rapidly identify and resolve issues in your system, ultimately leading to improved software performance, reliability, and overall user experience.</p><h2 id="standardize-logging-and-instrumentation-practices">Standardize logging and instrumentation practices</h2><p>Establishing consistent logging and instrumentation practices across your development team is crucial for streamlining data collection. Ensure that everyone on the team follows a standard format for log messages, including relevant metadata, such as timestamps, log levels, and contextual information. 
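As a hypothetical example of such a convention, a single structured log record might carry fields like these (shown as YAML for readability; the field names are illustrative, and most teams would emit the equivalent JSON, one record per line):

```yaml
# Hypothetical standardized log record -- field names are illustrative
timestamp: "2023-04-28T13:30:00Z"   # ISO 8601, always UTC
level: ERROR                        # one of DEBUG / INFO / WARN / ERROR
service: checkout-api               # name of the emitting service
trace_id: 4bf92f3577b34da6          # correlates this log line with a trace
message: "payment provider timed out"
context:                            # structured, filterable metadata
  order_id: "12345"
  retry_count: 2
```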
Additionally, instrument your code to capture vital metrics and traces that can provide insights into your system&apos;s performance and health. Consistent and structured data make it easier to parse, filter, and analyze information quickly.</p><h2 id="automate-data-collection">Automate data collection</h2><p>Automating the process of data collection can significantly reduce manual efforts and the potential for human error. Use agents, libraries, or other integrations provided by your observability tools to automatically collect data from your system. Ensure that these data collection mechanisms are lightweight and have minimal impact on your system&apos;s performance.</p><h2 id="centralize-data-storage-and-management">Centralize data storage and management</h2><p>Having a centralized repository for your logs, metrics, and traces can greatly streamline data analysis. A centralized platform allows for easier data correlation, searching, and visualization, enabling your team to quickly identify patterns and anomalies. Choose a solution that can scale with your system&apos;s growth and handle the increasing volume of data.</p><h2 id="leverage-machine-learning-and-artificial-intelligence">Leverage machine learning and artificial intelligence</h2><p>Utilizing machine learning and artificial intelligence-powered tools can help automate data analysis and surface insights more efficiently. These advanced technologies can process vast amounts of data quickly, identifying trends, anomalies, and correlations that may be difficult for humans to detect. By reducing the time spent on manual analysis, your team can focus on more strategic tasks and resolving identified issues.</p><h2 id="optimize-data-visualization">Optimize data visualization</h2><p>Effective data visualization is essential for streamlining data analysis. Use customizable dashboards and visualizations that allow you to display data in a way that is most meaningful to your team. 
Ensure that the visualizations are easily interpretable and can be filtered or adjusted as needed. This will enable your team to quickly spot issues and understand the context behind them.</p><h2 id="establish-alerting-and-escalation-policies">Establish alerting and escalation policies</h2><p>Setting up appropriate alerting and escalation policies can help your team prioritize and quickly address critical issues. Define meaningful thresholds for your metrics and establish clear escalation paths to ensure that the right team members are notified when issues arise. Regularly review and adjust these policies to minimize alert fatigue and maintain their effectiveness.</p><h1 id="integrating-observability-into-the-development-lifecycle">Integrating Observability into the Development Lifecycle</h1><p>To fully harness the power of observability tools, integrate them into every stage of the software development lifecycle. By involving observability from the planning and design phases through deployment and maintenance, you can proactively identify potential issues and address them before they escalate. A holistic approach to observability allows for continuous monitoring and feedback, ultimately reducing the overall time spent on debugging and increasing the quality of the end product.</p><h2 id="planning-and-design">Planning and Design</h2><p>During the planning and design phase, consider the observability requirements and goals for your system. Determine the key performance indicators (KPIs) you need to track and establish a clear understanding of the system&apos;s expected behavior. Identify potential risks and challenges related to observability and define strategies to address them. Ensure that your architecture supports the required level of monitoring, logging, and tracing.</p><h2 id="development">Development</h2><p>As your team writes code, ensure that they adhere to standardized logging and instrumentation practices. 
Encourage developers to think about observability while writing code and to include meaningful log messages, metrics, and traces that provide insight into the system&apos;s behavior. Integrate observability tools and libraries into your development environment, making it easy for developers to access and use them.</p><h2 id="testing-and-qa">Testing and QA</h2><p>During the testing and QA phase, leverage observability data to validate that the system behaves as expected under various conditions. Use monitoring data to identify performance bottlenecks, resource constraints, and other issues that may impact the system&apos;s performance and stability. Incorporate feedback from observability tools into your test plans and continuously refine your testing strategies based on the insights gained.</p><h2 id="deployment-and-release">Deployment and Release</h2><p>As you deploy and release your software, utilize observability tools to monitor the rollout process and ensure a smooth transition. Set up automated alerts and notifications to inform your team of any issues that arise during deployment. Monitor key metrics and performance indicators to validate that the new release meets your expectations and does not introduce any unforeseen issues.</p><h2 id="monitoring-and-maintenance">Monitoring and Maintenance</h2><p>During the monitoring and maintenance phase, continuously collect and analyze observability data to identify trends, anomalies, and potential issues. Use this data to inform your maintenance and optimization efforts, proactively addressing any issues that arise. Establish a feedback loop between the monitoring and maintenance phase and the planning and design phase to continuously refine and improve your system.</p><h2 id="continuous-improvement">Continuous Improvement</h2><p>Embrace a culture of continuous improvement, using insights from observability tools to inform your development processes and practices. 
Encourage your team to learn from incidents and iteratively refine your software based on the insights gained. Regularly review and adjust your observability strategy to ensure it continues to meet your needs as your system evolves.</p><h1 id="encouraging-a-culture-of-observability">Encouraging a Culture of Observability</h1><p>Encouraging a blameless culture of observability within your organization can be an engaging and enjoyable journey if you approach it with a sense of fun and creativity. Transform your team into a group of software detectives, eager to proactively uncover clues and solve performance mysteries. Here are some ways to foster a culture of observability and keep your team excited about monitoring and analyzing your system&apos;s behavior.</p><h2 id="gamify-observability">Gamify Observability</h2><p>Who doesn&apos;t love a little friendly competition? Introduce gamification elements to your observability efforts, such as leaderboards for resolving issues, badges for achieving specific milestones, or even a &quot;Detective of the Month&quot; award. By making observability a fun and engaging experience, your team will be more motivated to monitor system performance and proactively address potential issues.</p><h2 id="host-observability-workshops">Host Observability Workshops</h2><p>Organize interactive workshops and training sessions that focus on different aspects of observability, such as log analysis, tracing, or performance optimization. Encourage your team to participate actively in these sessions, sharing their experiences and insights. You can even introduce themed workshops, like a &quot;Sherlock Holmes and the Case of the Missing Logs&quot; session, to add an extra layer of enjoyment.</p><h2 id="encourage-storytelling">Encourage Storytelling</h2><p>Invite your team members to share their &quot;observability success stories&quot; during regular team meetings or in dedicated channels on your communication platform. 
By highlighting positive experiences and lessons learned, you can reinforce the value of observability and inspire your team to continually improve their monitoring and analysis skills.</p><h2 id="create-an-observability-book-club">Create an Observability Book Club</h2><p>Start an observability-themed book club, where your team can read and discuss books, articles, or blog posts related to monitoring, performance optimization, and system reliability. Encourage lively discussions and debates, and use these conversations as a springboard to explore new ideas and approaches to observability within your organization.</p><h2 id="organize-observability-hackathons">Organize Observability Hackathons</h2><p>Host hackathons focused on observability challenges, where your team can work together to develop innovative solutions, improve existing monitoring practices, or explore new tools and technologies. These events can foster collaboration, creativity, and a sense of camaraderie among your team members, helping to further instill a culture of observability.</p><h1 id="continuously-evaluating-and-evolving-your-observability-strategy">Continuously Evaluating and Evolving Your Observability Strategy</h1><p>In the fast-paced world of software engineering, staying agile and adapting to changes is crucial for success. Your observability strategy should be no exception. Embrace adaptability and foster a culture of continuous improvement to ensure that your observability efforts remain effective and aligned with your organization&apos;s goals.</p><h2 id="regularly-assess-your-tools-and-practices">Regularly Assess Your Tools and Practices</h2><p>Set up a recurring schedule to review the effectiveness of your observability tools and practices. Assess whether they continue to meet your needs and identify any gaps or areas for improvement. 
This review process should involve input from various team members, including developers, operations, and management, to gain a comprehensive understanding of your observability efforts.</p><h2 id="stay-informed-on-industry-trends">Stay Informed on Industry Trends</h2><p>Keep an eye on emerging trends, best practices, and new tools in the observability landscape. Attend industry conferences, webinars, and meetups, and engage in online communities and forums to stay up to date on the latest developments. By staying informed, you can identify potential opportunities to enhance your observability strategy and make data-driven decisions on adopting new technologies or methodologies.</p><h2 id="learn-from-your-incidents">Learn from Your Incidents</h2><p>Treat each incident as a learning opportunity to refine and improve your observability strategy. Conduct blameless post-mortems to analyze what went wrong, identify root causes, and determine how your observability practices can be improved to prevent similar issues in the future. Encourage a culture of continuous learning and improvement within your organization, where team members feel empowered to share their insights and contribute to the evolution of your observability strategy.</p><h2 id="experiment-and-iterate">Experiment and Iterate</h2><p>Don&apos;t be afraid to experiment with new observability tools, methodologies, or approaches. Set up proof-of-concept projects or sandbox environments where you can test and evaluate new solutions without impacting your production systems. Gather feedback from your team on the effectiveness of these experiments and use this information to iterate on your observability strategy.</p><h2 id="measure-the-impact-of-your-observability-efforts">Measure the Impact of Your Observability Efforts</h2><p>Establish key performance indicators (KPIs) to measure the impact of your observability efforts on your software systems and overall business objectives. 
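</p><p>To make one such measurement concrete, here is a minimal sketch of computing mean time to resolution from incident records; the record structure is invented for the example:</p>

```python
from datetime import datetime, timedelta

def mean_time_to_resolution(incidents):
    """Average duration from an incident being opened to its resolution."""
    durations = [inc["resolved_at"] - inc["opened_at"] for inc in incidents]
    return sum(durations, timedelta()) / len(durations)

# Two hypothetical incidents: one took 45 minutes, the other 2 hours 15 minutes.
incidents = [
    {"opened_at": datetime(2023, 4, 1, 9, 0),  "resolved_at": datetime(2023, 4, 1, 9, 45)},
    {"opened_at": datetime(2023, 4, 2, 14, 0), "resolved_at": datetime(2023, 4, 2, 16, 15)},
]
print(mean_time_to_resolution(incidents))  # 1:30:00
```

<p>Tracking a number like this across successive releases shows whether your observability investments are actually shortening incident response.</p><p>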
Some examples of KPIs include mean time to resolution (MTTR), system uptime, and customer satisfaction scores. Regularly review these KPIs to track your progress and identify areas where your observability strategy can be further optimized.</p><p>Observability tools play a vital role in modern software engineering, providing essential insights into system health and performance. By selecting the right tools, streamlining data collection and analysis, integrating observability throughout the development lifecycle, fostering a culture of observability, and continuously evaluating and evolving your strategy, you can maximize the value of these tools and ensure your software systems are reliable, efficient, and resilient.</p>]]></content:encoded></item><item><title><![CDATA[My GitHub Copilot Adventure: Revolutionizing the Software Engineering Landscape]]></title><description><![CDATA[Discover how my first encounter with GitHub Copilot transformed my coding experience and the future of software engineering!]]></description><link>https://mkdavies.com/my-github-copilot-adventure-revolutionizing-the-software-engineering-landscape/</link><guid isPermaLink="false">644ad6239e57820001d63b26</guid><category><![CDATA[AI]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Thu, 27 Apr 2023 20:18:16 GMT</pubDate><media:content url="https://mkdavies.com/content/images/2023/04/copilot.png" medium="image"/><content:encoded><![CDATA[<img src="https://mkdavies.com/content/images/2023/04/copilot.png" alt="My GitHub Copilot Adventure: Revolutionizing the Software Engineering Landscape"><p>As a software engineer, I&apos;m always on the lookout for new tools and technologies that could potentially change the way I work. That&apos;s why, when I heard about GitHub Copilot, I couldn&apos;t wait to try it out. The prospect of having an AI-powered coding assistant working alongside me seemed like a dream come true. 
After putting it through its paces, I can confidently say that my first GitHub Copilot experience was nothing short of magical!</p><h3 id="ai-to-the-rescue">AI to the Rescue</h3><p>GitHub Copilot, the brainchild of GitHub and OpenAI, is an AI-powered coding assistant that learns from the vast amounts of public code available on GitHub. It uses the highly advanced GPT-4 model to understand your programming context and suggest relevant code snippets.</p><p>As I fired up my trusty code editor and started typing, I could feel Copilot&apos;s presence. Almost like a seasoned mentor or a helpful colleague, it immediately began offering suggestions for completing my code. Not only did it save me time, but it also introduced me to new approaches and techniques I hadn&apos;t considered before.</p><h3 id="an-unforgettable-journey">An Unforgettable Journey</h3><p>One memorable moment occurred while I was working on an AngularJS project. I had a basic understanding of AngularJS but needed help with more advanced usage. Enter Copilot! It suggested an elegant and efficient way to import libraries and organize my data. I was simply amazed by its ability to understand my requirements and provide a perfect solution.</p><p>Another exciting discovery was Copilot&apos;s knack for providing code in different programming languages. I was working on a piece of Golang code that I wanted to serve through Docker. To my astonishment, Copilot picked up on this and offered the build commands and correct port exposure. While it wasn&apos;t 100% perfect, it provided an excellent starting point and saved me time that I could spend elsewhere.</p><h3 id="copilots-impact-on-software-engineering">Copilot&apos;s Impact on Software Engineering</h3><p>My experience with GitHub Copilot has led me to believe that it has the potential to revolutionize software engineering. 
The benefits are hard to ignore:</p><ol><li><strong>Increased Efficiency</strong>: Copilot&apos;s real-time suggestions make writing code quicker and more efficient. This allows developers to spend more time on design and architecture, or even take on additional projects.</li><li><strong>Accelerated Learning</strong>: Copilot offers an opportunity to learn new languages, libraries, and techniques by providing working examples tailored to your specific context.</li><li><strong>Reduced Barrier to Entry</strong>: By assisting with complex tasks, Copilot lowers the barrier to entry for new developers and promotes inclusion in the software engineering field.</li><li><strong>Improved Collaboration</strong>: Copilot can serve as a &quot;common ground&quot; for developers with varying skill levels, fostering collaboration and the sharing of knowledge within teams.</li></ol><h3 id="embracing-the-change">Embracing the Change</h3><p>Of course, there are concerns about job displacement and the possible loss of creativity in programming. However, my personal experience has shown me that Copilot is more of a collaborative partner than a replacement for human developers. By embracing this new technology, we can leverage its potential to augment our skills, make us better engineers, and ultimately create a more innovative and diverse software engineering landscape.</p><p>So, buckle up and join me on this GitHub Copilot adventure! 
It&apos;s a thrilling ride that promises to change the way we write code, learn, and collaborate in the realm of software engineering.</p>]]></content:encoded></item><item><title><![CDATA[Gitea Actions Announced in Preview]]></title><description><![CDATA[Another Tool Emerges!]]></description><link>https://mkdavies.com/gitea-actions-announced-in-preview/</link><guid isPermaLink="false">644c03636b0b0b000133b816</guid><category><![CDATA[CI/CD]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Mike Davies]]></dc:creator><pubDate>Fri, 24 Mar 2023 17:35:00 GMT</pubDate><media:content url="https://mkdavies.com/content/images/2023/04/gitea.png" medium="image"/><content:encoded><![CDATA[<img src="https://mkdavies.com/content/images/2023/04/gitea.png" alt="Gitea Actions Announced in Preview"><p>Just a quick post. While I use GitHub and GitLab professionally, I use Gitea personally to store my home experiments. </p><p>Imagine my surprise when I found out Gitea just <a href="https://blog.gitea.io/2023/03/gitea-1.19.0-is-released/?ref=mkdavies.com#highlights">released their own Actions CI solution</a>! I&apos;m excited to see more and more companies competing in the modern CI space, and I&apos;m looking forward to the innovation that lies ahead for all of them!</p>]]></content:encoded></item></channel></rss>