Monday, April 6, 2026

Hosted Server - Linode

The Why:

While a dedicated home lab provides an excellent environment for consistent and controlled testing, there are inherent limitations that often necessitate moving experiments into a more realistic, "real-world" operating condition. This requirement for external, live-environment testing prompted me to explore options for online Linux servers.

My search for a reliable platform led me to YouTube, specifically to the highly informative channel, https://www.learnlinux.tv/. This channel, known for its practical advice on open-source technologies, provided a strong recommendation for Linode (https://www.linode.com/) as a premier cloud hosting provider.

After reviewing their offerings and the positive community feedback, I made the decision to sign up. A significant factor in this choice was their generous promotional offer: a $100.00 credit. This credit is substantial, as it effectively covers the cost of a basic server plan for approximately four months. This allowance provides ample time to conduct extensive testing, familiarize myself with the cloud environment, and execute the real-world condition simulations that my home lab setup simply couldn't replicate. The move to Linode represents a valuable step in expanding the scope and depth of my technical skills.


System Specs:
    
VM Hardware & OS:
  • 2 CPU Cores (Shared CPU)
  • 80 GB Storage
  • 4 GB RAM
  • Network (4 TB monthly transfer pool)
  • Firewall
  • Ubuntu Server

Starting the build:


My initial steps in setting up the new Linode server were heavily focused on establishing a robust security posture and an efficient storage architecture. The foundational security measure I implemented was full-disk encryption, utilizing the available Linode tools to encrypt the entire 80GB block storage device. This was a critical step to ensure data security at rest, protecting all sensitive information and project files even if the physical media were compromised.
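For reference, the mechanism underneath this feature is LUKS. Encrypting a block device by hand looks roughly like the following sketch; the device path /dev/sdc and the mount point are placeholders, and Linode's own tooling wraps equivalent steps:

```shell
# Initialize the LUKS container (this destroys existing data on the device)
# /dev/sdc is a placeholder device path
sudo cryptsetup luksFormat /dev/sdc

# Unlock the container under a mapper name of our choosing
sudo cryptsetup luksOpen /dev/sdc cryptdata

# Create a filesystem on the unlocked mapping and mount it
sudo mkfs.ext4 /dev/mapper/cryptdata
sudo mount /dev/mapper/cryptdata /mnt/data
```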

Following the encryption, I addressed the storage layout. The 80GB drive was logically partitioned into two distinct volumes. One partition was specifically designated for active project files and the core operating system, while the second partition was reserved exclusively for automated backups and archival data. This segregation is vital for maintaining data integrity and simplifying recovery procedures. Linode's straightforward interface and comprehensive documentation made the setup and configuration of this block device partitioning process remarkably smooth.
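As a sketch of what that carving-up looks like from the command line (the device path and the 60/40 split are illustrative placeholders, not my exact layout):

```shell
# Label the device and create two partitions: one for the OS and active
# projects, one for backups (/dev/sdc and the percentages are placeholders)
sudo parted /dev/sdc mklabel gpt
sudo parted /dev/sdc mkpart projects ext4 0% 60%
sudo parted /dev/sdc mkpart backups ext4 60% 100%

# Create a filesystem on each new partition
sudo mkfs.ext4 /dev/sdc1
sudo mkfs.ext4 /dev/sdc2
```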


Building the moat around the castle:

With the storage and underlying security firmly in place, I immediately prioritized system maintenance by patching the Operating System. Running (sudo apt update && sudo apt upgrade) ensured that the OS and all installed packages were up to date, mitigating known vulnerabilities before they could be exploited.

Note: Keeping your system updated is critical to ensure that security issues are patched. On a headless server especially, I recommend enabling automatic updates so patches are applied without manual intervention.


Run:

sudo apt-get install unattended-upgrades

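Once installed, the package can be activated with dpkg-reconfigure, and its behavior is tuned in /etc/apt/apt.conf.d/50unattended-upgrades. The excerpt below is a minimal sketch of commonly used options, not my exact configuration:

```shell
sudo dpkg-reconfigure --priority=low unattended-upgrades
```

```
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// Apply security updates automatically
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
// Reboot at a quiet hour if a kernel update requires it
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```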



The final phase of this initial configuration involved a comprehensive overhaul of the system's network-facing services and firewall rules. Recognizing that default settings are often the first target for attackers, I significantly hardened the system:
  • Port Obfuscation: The default ports for all remote management services, such as SSH, were changed to non-standard ports. This simple measure dramatically reduces the noise from automated port scanning bots.


  • Principle of Least Privilege: I meticulously adjusted the firewall rules to adopt a "deny all by default" posture. Only the necessary ports for active services were explicitly opened, and even those were often restricted by source IP address where possible.


  • Service Deactivation: Furthermore, remote tools and administrative services that are not in constant use are configured to be disabled by default. They are only temporarily enabled on an as-needed basis, significantly minimizing the system's attack surface during periods of inactivity.
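As a concrete sketch of the first two points using UFW, Ubuntu's default firewall front end (the port number 2222 and the source range 203.0.113.0/24 are placeholders, not my actual values):

```shell
# Default-deny everything inbound, allow outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH only on a non-standard port, and only from a trusted source range
# (2222 and 203.0.113.0/24 are placeholders; the port must match the
#  Port directive in /etc/ssh/sshd_config)
sudo ufw allow from 203.0.113.0/24 to any port 2222 proto tcp

sudo ufw enable
```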

Doing the thing:


Now that all the prerequisites are in place, I can proceed with the build, AKA the fun part.

I began setting up a web server by configuring Apache, registering the domain, and linking DNS services via Cloudflare. Establishing a robust and dependable server environment from the outset is vital for ensuring accessibility, reliability, and effective management.

To kick off, I carefully set up Apache HTTP Server, chosen because of its long-standing reputation as a dependable, versatile, and open-source web server platform. This involved configuring virtual hosts, tweaking performance settings, and ensuring all the necessary modules were enabled for the app I planned to run.
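A minimal virtual host definition gives a flavor of that configuration step; the domain example.com and the paths here are placeholders:

```
# /etc/apache2/sites-available/example.com.conf (example.com is a placeholder)
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot /var/www/example.com
    ErrorLog ${APACHE_LOG_DIR}/example.com-error.log
    CustomLog ${APACHE_LOG_DIR}/example.com-access.log combined
</VirtualHost>
```

The site is then enabled with sudo a2ensite example.com.conf followed by sudo systemctl reload apache2.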


To establish an online presence alongside the web server, I took two key steps:

  1. Domain Registration: The unique domain name was successfully registered, securing the server's primary web address and brand identity.

  2. DNS Integration with Cloudflare: The Domain Name System (DNS) was integrated via Cloudflare to manage name server records and enhance performance and security. Cloudflare's benefits include:

    • Performance: A global Content Delivery Network (CDN) caches static assets, reducing latency.

    • Security: Provides protection against threats like DDoS attacks and offers a Web Application Firewall (WAF).

    • Management: Centralizes DNS control for traffic routing and subdomains.


The initial combination of Apache and Cloudflare's networking and security has established a strong, scalable foundation for the entire project. Even in these early stages, unwelcome visitors attempted to gain access. The quick, eye-opening insights from the reporting tools prompted me to immediately strengthen security. By implementing country blocking and other protective measures, I was able to keep everything safe and effectively repel the unauthorized attempts.
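For anyone curious, country blocking in Cloudflare is handled with a WAF custom rule. A minimal blocking expression looks like the following, where the country codes are placeholders for whichever regions you decide to block:

```
(ip.geoip.country in {"CN" "RU"})
```

Paired with the Block action, any request originating from those countries is stopped at Cloudflare's edge before it ever reaches the server.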


Final Thoughts.

This was such a fun project! It really took me back to the early days of setting up a server and hosting your own site, which was always an exciting experience. I can’t wait to see where this journey will take me next.

- TheMacRat


Monday, March 23, 2026

Project: Dual-Purpose Workstation Build

The goal was to design and build a new, dual-purpose workstation for testing and daily use.
Workstation Intent & Platform:

  • Primary Function: To serve as a faster Linux daily driver.
  • Secondary Function (Lab): To host multiple Virtual Machines (VMs) for testing and review purposes.
  • Design Constraint: The system must be a self-contained lab, kept to a minimum, and fit within my downsized desk space due to home space limitations.

System Specs:

Hardware:

      • Host: OptiPlex SFF 7010
      • CPU: 13th Gen Intel(R) Core(TM) i5-13500 (20) @ 4.80 GHz
      • GPU: Intel AlderLake-S GT1 @ 1.55 GHz [Integrated]
      • RAM: 32GB

Storage:

      • Drive #1: NVMe 512 GB
      • Drive #2: SATA SSD 256 GB
      • Drive #3: SATA HDD 1 TB (7200 RPM)

Operating System:

      • OS: Ubuntu 25.10 x86_64
      • Kernel: Linux 6.17.0-19-generic
      • DE: KDE Plasma 6.4.5
      • WM: KWin (Wayland)

Phase 1: Encryption

The recent system assembly was a highly successful and engaging project, with a primary focus on security enhancement. This emphasis led to the crucial decision to mandate LUKS (Linux Unified Key Setup) encryption across all storage drives. This robust measure ensures full-disk encryption, making data at rest inaccessible without the correct passphrase, even in the event of physical compromise or theft of the drives.

This level of security introduces a key operational point concerning the boot process. Since the drives are encrypted, the system must pause and prompt for the decryption key before the operating system (OS) can complete its boot sequence and access the file system.

Consequently, this setup requires familiarity with configuring /etc/fstab. This file dictates how disk partitions and other block devices are mounted into the file system. In this encrypted environment, it is essential to correctly configure the fstab entries, together with /etc/crypttab, which tells the OS which LUKS volumes to unlock. Specifically, the OS must be able to recognize, unlock (following successful pre-boot authentication), and subsequently mount the encrypted file systems normally post-boot. This manual step is critical to ensuring seamless, persistent data access once the initial security barrier is cleared.
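To illustrate how the pieces cooperate: /etc/crypttab tells the OS which LUKS containers to unlock, and /etc/fstab then mounts the resulting device-mapper nodes. The entries below are illustrative placeholders (the UUID and names are not my actual values):

```
# /etc/crypttab - unlock the LUKS container at boot (UUID is a placeholder)
cryptdata   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   none   luks

# /etc/fstab - mount the unlocked mapping
/dev/mapper/cryptdata   /mnt/data   ext4   defaults   0   2
```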

While getting this configuration right was time-consuming, the final outcome perfectly matches the desired security goal. The only trade-off is the inability to perform a simple remote reboot and return to a running state; because the drives are encrypted, the passphrase must be entered at the console before the boot process can complete.



Phase 2: Operating System Build and Configuration

The second phase of this deployment, the actual installation and configuration of the Operating System, was generally a smooth process for a custom build. Despite the overall ease, a few expected hardware-related hurdles surfaced, primarily involving the wireless networking components.

The most significant challenge involved the integration and stability of the system's Wi-Fi card. This issue was compounded by the physical constraints of the operating environment. Specifically, workstation location, infrastructure, and layout made it completely infeasible to establish a reliable, hardwired Ethernet connection to the main router. While I recognize that a direct, wired connection would have been the most robust and efficient option for a system of this nature, project success often requires pragmatic flexibility. Given the pre-existing spatial limitations, the decision was made to proceed with the wireless solution.

This choice, while not ideal from a pure performance standpoint, was a necessary compromise to meet the project's timeline and budget constraints within the existing space. The system was successfully configured to operate stably over the Wi-Fi network, ensuring all necessary services and applications could run effectively despite the inherent bandwidth and latency limitations of a wireless setup.



Phase 3: Virtual Machines - Implementation and Rationale

The design for this phase required deploying several virtual machines (VMs). The critical technical challenge was selecting a suitable hypervisor for hosting these VMs directly on the primary workstation. After careful consideration, I made the pragmatic decision to employ Oracle VirtualBox.

The choice of VirtualBox was primarily driven by established proficiency; it is a tool with which I have significant familiarity and experience, ensuring a smoother, more efficient deployment process. While I am fully cognizant of more technically advanced, often higher-performing alternatives that leverage hardware virtualization natively at the kernel level, such as KVM (typically managed through virt-manager on Linux systems), a deliberate decision was made to prioritize expediency and reliability for this specific project iteration.

The recognized advantage of kernel-integrated hypervisors like KVM is their ability to use the native operating system's kernel for direct access to hardware resources, which typically translates into superior I/O performance and reduced overhead compared to Type 2 hypervisors like VirtualBox. However, integrating and mastering a new hypervisor environment would have introduced an unavoidable time sink and a steeper learning curve, diverting resources away from the core project objectives.


Therefore, the exploration, benchmarking, and implementation of high-performance, kernel-integrated virtualization platforms have been designated for a separate, dedicated project. For the current needs, VirtualBox is sufficient, as it provides the necessary feature set and stability to meet all established requirements without adding unnecessary complexity. It should also be noted that the current load on these systems is extremely low.


Phase 4: VM Builds.

This final stage of the project necessitates the comprehensive buildout and configuration of several Linux distributions. Initially, my efforts resulted in approximately 12 separate virtual machines. This broad initial selection was crucial for testing compatibility within the specific virtualized environment. Following an extensive testing phase, it became necessary to rigorously assess and remove operating systems that exhibited persistent instability, performance issues, or inherent incompatibilities in the VM environment. It was determined that some distributions, due to their kernel requirements or hardware-specific drivers, would perform optimally only on native hardware, leading to their exclusion from this virtualized testing suite. Based on these critical assessments and optimizations, I have now settled on a refined, stable, and highly functional collection of the following core Linux systems.

  • Lubuntu
  • Kubuntu
  • MX Linux
  • openSUSE Tumbleweed
  • Linux Lite
  • Manjaro
  • Debian 13
  • Bodhi Linux
  • Ubuntu Server (LTS)

Due to RAM and CPU limitations, I can only run four of these VMs plus the main OS at the same time. At some point, I will migrate these systems to a proper hypervisor such as Proxmox.
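In the meantime, juggling which four guests are running is easy from the terminal with VBoxManage; the VM name below is a placeholder:

```shell
# List registered VMs and the subset currently running
VBoxManage list vms
VBoxManage list runningvms

# Start a guest with no GUI window, then later shut it down cleanly
# ("Debian13" is a placeholder VM name)
VBoxManage startvm "Debian13" --type headless
VBoxManage controlvm "Debian13" acpipowerbutton
```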


Final Thoughts.



This complex project took over two months, focusing entirely on rigorous testing and careful rebuilding to achieve a highly stable and reliable user experience. While current achievements are positive, there is a significant opportunity for further refinement. The key takeaway is that true optimization requires active engagement and practical execution—doing the work—not just stating the intention. This initial success stems directly from the philosophy of action over mere intent.

- TheMacRat

Linux Distro's & Desktops Environments

The choices we make have consequences:


Here's a brief account of a minor setback I hit on my Linux journey. After settling on my preferred daily-driver distribution, I decided to build a second system. The goal was to see how straightforward it would be to replicate my Desktop Environment setup on a new workstation.

I ran into a couple of snags. Hardware limitations are an unavoidable part of IT life; performance issues can arise from a weak CPU, limited video RAM, or insufficient disk space. The bigger issue I discovered, however, was that the second system was hanging due to dependencies required by GNOME. Committed to KDE, I decided to completely remove GNOME using sudo commands, which I believed had done the job, yet I still encountered residual issues. I could have spent hours diving into logs to play a frustrating game of rip-and-disable, but honestly, who has the time?

Then a Reddit post fundamentally changed my approach to system building. It simply stated: "If you want to use Ubuntu for your OS and don't use or plan on using GNOME, install the right flavor of Ubuntu with the desktop you use." That took me aback: what "flavors" were they speaking of? Come to find out, there are indeed different flavors of Ubuntu, each built and tuned for a specific DE, and one of them is tuned for the one I wanted to use (KDE).


That's why I landed on Kubuntu (https://ubuntu.com/desktop/flavors). It was absolutely perfect. The moment I installed that specific flavor, all the headaches I'd had trying to shove the KDE environment onto a GNOME base just vanished. Okay, sure, maybe that was the easy way out, but hey, I'm not getting paid to mess around with this stuff, and my time is precious.


- TheMacRat

Sunday, March 22, 2026

First Linux Build in decades...

So after about 20 years, I decided to dip my toes back into the Linux waters.

Here are the general details and scope of this project.

Project Scope:
  • Find some old hardware to reuse for this project. 

  • Build and test using a Linux Distro as a daily driver. 

    • Choose OS - Decided on Ubuntu 25.10

    • Choose DE - KDE

Saturday, March 21, 2026

Hello and Welcome!


This is the first post for blog.TheMacRat.cloud. I will be posting technology related information and project details here. 


- TheMacRat