Monday, March 23, 2026

Project: Dual-Purpose Workstation Build

The goal was to design and build a new, dual-purpose workstation for testing and daily use.
Workstation Intent & Platform:

  • Primary Function: To serve as a faster Linux daily driver.
  • Secondary Function (Lab): To host multiple Virtual Machines (VMs) for testing and review purposes.
  • Design Constraint: The system must be a self-contained lab with a minimal footprint, fitting within my downsized desk space at home.

System Specs:

Hardware:
      • Host: OptiPlex SFF 7010
      • CPU: 13th Gen Intel(R) Core(TM) i5-13500 (20) @ 4.80 GHz
      • GPU: Intel AlderLake-S GT1 @ 1.55 GHz [Integrated]
      • RAM: 32GB

Storage:
      • Drive #1: NVME 512GB
      • Drive #2: SATA SSD 256GB
      • Drive #3: SATA 7200RPM 1TB

Operating System:
      • OS: Ubuntu 25.10 x86_64
      • Kernel: Linux 6.17.0-19-generic
      • DE: KDE Plasma 6.4.5
      • WM: KWin (Wayland)

Phase 1: Encryption

The recent system assembly was a highly successful and engaging project, with a primary focus on security enhancement. This emphasis led to the crucial decision to mandate LUKS (Linux Unified Key Setup) encryption across all storage drives. This robust measure ensures full-disk encryption, making data at rest inaccessible without the correct passphrase, even in the event of physical compromise or theft of the drives.
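The per-drive setup follows the standard cryptsetup workflow. A minimal sketch for one secondary drive is below; /dev/sdb, the mapper name, and the mount point are all placeholders, not the actual devices in this build, and luksFormat destroys any existing data on the target.

```shell
# Hypothetical sketch: putting LUKS on a secondary drive.
# /dev/sdb is a placeholder device; verify with `lsblk` first.
sudo cryptsetup luksFormat /dev/sdb            # prompts for a passphrase; DESTROYS existing data
sudo cryptsetup open /dev/sdb data_crypt       # unlock -> /dev/mapper/data_crypt
sudo mkfs.ext4 /dev/mapper/data_crypt          # create a filesystem inside the container
sudo mount /dev/mapper/data_crypt /mnt/data    # mount it like any other block device
```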

This level of security introduces a key operational point concerning the boot process. Since the drives are encrypted, the system must pause and prompt for the decryption key before the operating system (OS) can complete its boot sequence and access the file system.

Consequently, this setup requires familiarity with configuring /etc/fstab and its companion file, /etc/crypttab. /etc/crypttab tells the OS which LUKS volumes to unlock at boot, while /etc/fstab dictates how the resulting unlocked devices, along with other disk partitions, are mounted into the file system. In this encrypted environment, it is essential to configure both files correctly: the OS must be able to recognize, unlock (following successful pre-boot authentication), and subsequently mount the encrypted file systems normally post-boot. This manual step is critical to ensuring seamless, persistent data access once the initial security barrier is cleared.
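For illustration, a pair of matching entries might look like the following; the volume name, UUID, and mount point are placeholders (real UUIDs come from `blkid`), not values from this build.

```shell
# Hypothetical entries tying a LUKS volume into the boot process.

# /etc/crypttab -- which encrypted devices to unlock, and how:
#   <name>       <source device>                           <key file> <options>
#   data_crypt   UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee none       luks

# /etc/fstab -- mount the unlocked mapper device like any other filesystem:
#   /dev/mapper/data_crypt   /mnt/data   ext4   defaults   0   2
```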

While getting this configuration right was time-consuming, the final outcome perfectly matches the desired security goal. The only trade-off is the inability to perform a simple remote reboot and return to a running state; because the disks are encrypted, the passphrase must be entered at the console before the boot process can complete.



Phase 2: Operating System Build and Configuration

The second phase of this deployment, the actual installation and configuration of the Operating System, was generally a smooth process for a custom build. Despite the overall ease, a few expected hardware-related hurdles surfaced, primarily involving the wireless networking components.

The most significant challenge involved the integration and stability of the system's Wi-Fi card, compounded by the physical constraints of the operating environment: the workstation's location and the building's layout made it infeasible to establish a reliable, hardwired Ethernet connection to the main router. While I recognize that a direct, wired connection would have been the most robust and efficient option for a system of this nature, project success often requires pragmatic flexibility. Given the pre-existing spatial limitations, the decision was made to proceed with the wireless solution.

This choice, while not ideal from a pure performance standpoint, was a necessary compromise to meet the project's timeline and budget constraints within the existing residence's structure. The system was successfully configured to operate stably over the Wi-Fi network, ensuring all necessary services and applications could run effectively despite the inherent bandwidth and latency limitations of a wireless setup.



Phase 3: Virtual Machines - Implementation and Rationale

The design for this phase required deploying several virtual machines (VMs). The critical technical challenge was selecting a suitable hypervisor for hosting these VMs directly on the primary workstation. After careful consideration, I made the pragmatic decision to employ Oracle VirtualBox.

The choice of VirtualBox was primarily driven by established proficiency; it is a tool with which I have significant familiarity and experience, ensuring a smoother, more efficient deployment process. While I am fully cognizant of more technically advanced, often higher-performing alternatives that leverage hardware virtualization natively at the kernel level, such as KVM/QEMU (commonly managed through virt-manager on Linux systems), a deliberate decision was made to prioritize expediency and reliability for this specific project iteration.

The recognized advantage of kernel-integrated hypervisors like KVM is that the virtualization layer runs inside the host operating system's kernel, giving guests more direct access to hardware resources. This typically translates into superior I/O performance and reduced overhead compared to Type 2 hypervisors like VirtualBox. However, integrating and mastering a new hypervisor environment would have introduced an unavoidable time sink and a steeper learning curve, diverting resources away from the core project objectives.
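Either way, both KVM and VirtualBox benefit from the CPU's virtualization extensions (VT-x on Intel, AMD-V on AMD), and it is worth confirming the host exposes them. A quick, non-destructive check:

```shell
# Count logical CPUs advertising VT-x (vmx) or AMD-V (svm) flags.
# `|| true` keeps the script going when grep finds zero matches.
count=$(grep -Ec 'vmx|svm' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "Hardware virtualization flags present on $count logical CPUs"
else
    echo "No VT-x/AMD-V flags found; check BIOS/UEFI settings"
fi
```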


Therefore, the exploration, benchmarking, and implementation of high-performance, kernel-integrated virtualization platforms have been designated for a separate, dedicated project. For the current needs, VirtualBox is sufficient, as it provides the necessary feature set and stability to meet all established requirements without adding unnecessary complexity. It should also be noted that the current load on these systems is extremely low.
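One practical upside of VirtualBox for a lab like this is that guests can be created and run headless entirely from the shell via VBoxManage. A sketch, with an illustrative VM name, OS type, and disk size rather than this project's actual values:

```shell
# Hypothetical sketch: build and boot one guest from the command line.
# Requires VirtualBox installed; all names and sizes are illustrative.
VBoxManage createvm --name "debian13-test" --ostype Debian_64 --register
VBoxManage modifyvm "debian13-test" --memory 4096 --cpus 2 --nic1 nat
VBoxManage createmedium disk --filename debian13-test.vdi --size 20480
VBoxManage storagectl "debian13-test" --name SATA --add sata
VBoxManage storageattach "debian13-test" --storagectl SATA --port 0 \
    --device 0 --type hdd --medium debian13-test.vdi
VBoxManage startvm "debian13-test" --type headless
```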


Phase 4: VM Builds

This final stage of the project necessitates the comprehensive buildout and configuration of several Linux distributions. Initially, my efforts resulted in approximately 12 separate virtual machines. This broad initial selection was crucial for testing compatibility within the specific virtualized environment. Following an extensive testing phase, it became necessary to rigorously assess and remove operating systems that exhibited persistent instability, performance issues, or inherent incompatibilities in the VM environment. It was determined that some distributions, due to their kernel requirements or hardware-specific drivers, would perform optimally only on native hardware, leading to their exclusion from this virtualized testing suite. Based on these critical assessments and optimizations, I have now settled on a refined, stable, and highly functional collection of the following core Linux systems.

  • Lubuntu
  • Kubuntu
  • MX Linux
  • openSUSE (Tumbleweed)
  • Linux Lite
  • Manjaro
  • Debian 13
  • Bodhi Linux
  • Ubuntu Server (LTS)

Due to RAM and CPU limitations, the system can only run four of these VMs alongside the main OS at any one time. At some point, I will migrate these systems to a proper hypervisor such as Proxmox.
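The four-VM ceiling follows from simple arithmetic on the 32 GB of host RAM. A rough budget in shell; the 8 GB host reserve is an assumption for the Plasma desktop and services, not a measured figure:

```shell
# Rough RAM budget for the host plus four concurrent VMs.
TOTAL_MB=32768        # 32 GB of host RAM
HOST_RESERVE_MB=8192  # assumed reserve for the desktop and services
VM_COUNT=4
PER_VM_MB=$(( (TOTAL_MB - HOST_RESERVE_MB) / VM_COUNT ))
echo "Each VM can be allocated up to ${PER_VM_MB} MB"
```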


Final Thoughts


This complex project took over two months, focusing entirely on rigorous testing and careful rebuilding to achieve a highly stable and reliable user experience. While current achievements are positive, there is a significant opportunity for further refinement. The key takeaway is that true optimization requires active engagement and practical execution—doing the work—not just stating the intention. This initial success stems directly from the philosophy of action over mere intent.

- TheMacRat