updated website with tech content

This commit is contained in:
mik-tf
2025-01-16 10:35:26 -05:00
parent f7d55dc687
commit 94b853dab2
62 changed files with 566 additions and 2579 deletions


@@ -0,0 +1,7 @@
{
"label": "The Cloud Today",
"position": 2,
"link": {
"type": "generated-index",
}
}

docs/cloud_today/c64.md Normal file

@@ -0,0 +1,19 @@
---
title: History of Computers
sidebar_position: 3
---
## Hardware Is No Longer Used Efficiently
The IT world fails to harness the full potential of computer hardware.
![Commodore 64](../img/c64.png)
While hardware advancements have surged forward, user experiences and features have often stagnated, failing to keep pace with these developments.
The original Commodore 64, with only 64 KB of memory, was a remarkably capable machine for its time. In contrast, today's computers boast 8 GB or more of memory, yet their capabilities have not necessarily improved proportionately.
This highlights a regression in our ability to fully utilize computer hardware.
We are committed to bridging this gap by optimizing our approach to hardware utilization, thereby unlocking its full potential. 


@@ -0,0 +1,56 @@
---
title: The Internet Today
sidebar_position: 2
---
# Rethinking the Internet
**The Three Layers Of The Internet**
![](../img/3layers.png)
The Internet is made up of 3 layers:
1. Compute & Storage: this is where applications are served from. Today this layer is highly centralized and runs from large data centers (see below).
2. Network: this is how information travels, whether wirelessly, over cables (fiber), or via satellite links. Today information must travel very far, and in most countries very little information is stored locally. A handful of companies own more than 80% of the current Internet's network capacity.
3. Applications: currently applications are hosted in huge data centers using the compute and storage provided by the layers below. This system is too centralized and therefore very vulnerable.
ThreeFold is providing solutions for the first 2 layers and allows everyone else to build on top.
**Current Challenges**
Digital information mainly travels over large fiber backbone links as pictured here.
![](../img/global_net.png)
The Internet as we know it has significantly diverged from its original intent. If two people in, for example, Zanzibar (an island off East Africa) use Zoom with each other, the information travels from Zanzibar to a large European data center where the Zoom servers are hosted, and back again.
This leads to very inefficient behavior: slower performance, less reliability, and a cost higher than it should be.
![](../img/network_path.png)
**Issues with Autonomy and Sovereignty**
Our current internet model compromises autonomy and sovereignty. Most data is stored in large data centers controlled by a few major corporations, effectively turning users into products.
![alt text](../img/we_are_products.png)
Moreover, the internet is replicated many times across various applications, each requiring its own full infrastructure. This approach is unsustainable and inefficient.
## The ThreeFold Cloud Engine Resolves Many of These Issues
ThreeFold resolves:
- Reliability for data: data can never be corrupted or lost
- Reliability for the network: connectivity should always be possible
- Sovereignty
- Scalability
- Security
- Locality
- Cost
- Management (easier to scale)


@@ -0,0 +1,43 @@
---
title: Too Many Layers
sidebar_position: 4
---
# Layers
![](../img/layers.png)
Too many abstraction layers result in poor efficiency, performance loss, increased management costs, and scalability challenges.
This is due to a number of reasons.
![](../img/fourreasons.png)
In the context of CPU scheduling in Linux (and in most modern operating systems), a context switch refers to the process of saving the state of a currently running process (such as its registers, program counter, and other relevant information) and loading the state of a different process to allow it to run. This switching of execution from one process to another is a fundamental aspect of multitasking operating systems, where multiple processes share the CPU's time.
Here's how a context switch typically works in Linux:
1. **Interrupt Handling**: When a higher-priority process needs to run or an event requiring immediate attention occurs (such as I/O completion), the CPU interrupts the currently running process.
2. **Saving Context**: The operating system saves the state of the current process, including its registers, program counter, and other relevant data, into its process control block (PCB). This step ensures that when the process resumes execution later, it can continue from where it left off.
3. **Scheduling Decision**: The operating system scheduler determines which process should run next based on scheduling algorithms and the priority of processes in the system.
4. **Loading Context**: The operating system loads the state of the selected process from its PCB into the CPU, allowing it to execute. This includes restoring the process's registers, program counter, and other relevant data.
5. **Execution**: The newly loaded process begins executing on the CPU.
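The effect of the steps above can be observed from user space. As a minimal sketch, assuming a Unix-like system where Python's `resource` module exposes the kernel's context-switch counters (`ru_nvcsw`/`ru_nivcsw`), the hypothetical helper `context_switches` below reads those counters for the current process; blocking in `sleep` yields the CPU and should register at least one voluntary switch:

```python
import resource
import time

def context_switches():
    """Return (voluntary, involuntary) context-switch counts for this process."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_nvcsw, usage.ru_nivcsw

before_vol, _ = context_switches()
time.sleep(0.05)  # blocking in sleep yields the CPU: a voluntary context switch
after_vol, _ = context_switches()

print(f"voluntary switches during sleep: {after_vol - before_vol}")
```

Voluntary switches occur when a process blocks and gives up the CPU itself (as in step 1 above, waiting on I/O); involuntary switches occur when the scheduler preempts it.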
Context switches are essential for multitasking, but they come with overhead that can impact system performance:
1. **Time Overhead**: Context switches require time to save and restore process states, as well as to perform scheduling decisions. This overhead adds up, especially in systems with many processes frequently switching contexts.
2. **Cache Invalidation**: Each time a process is switched in, it may result in cache invalidation, where the CPU's cache needs to be refreshed with data from the new process's memory space. This can lead to cache misses and performance degradation.
3. **Resource Contentions**: Context switches can exacerbate resource contention issues, especially in systems with limited CPU cores. If multiple processes are frequently contending for CPU time, the overhead of context switches can further delay process execution.
4. **Fragmentation**: Frequent context switches can lead to memory fragmentation, as processes are loaded and unloaded into memory. This fragmentation can degrade system performance over time, as it becomes more challenging to find contiguous blocks of memory for new processes.
While context switches are necessary for multitasking, excessive context switching can indeed lead to a significant loss of execution power by introducing overhead and resource contention in the system.
Therefore, efficient scheduling algorithms and optimization techniques are crucial for minimizing the impact of context switches on system performance.
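The time overhead described above can be estimated directly. As an illustrative sketch (not ThreeFold code), the hypothetical function `switch_cost` below forces two threads into strict alternation with a pair of events, so every hand-off requires at least one context switch, and divides the elapsed time by the number of hand-offs:

```python
import threading
import time

def switch_cost(iterations=10_000):
    """Estimate the cost of a thread hand-off via strict ping-pong alternation."""
    ping, pong = threading.Event(), threading.Event()

    def worker():
        for _ in range(iterations):
            ping.wait()   # block until the main thread hands over
            ping.clear()
            pong.set()    # hand control back

    t = threading.Thread(target=worker)
    t.start()
    start = time.perf_counter()
    for _ in range(iterations):
        ping.set()
        pong.wait()
        pong.clear()
    elapsed = time.perf_counter() - start
    t.join()
    # Each iteration is one round trip, i.e. two hand-offs (two switches minimum).
    return elapsed / (2 * iterations)

print(f"~{switch_cost() * 1e6:.1f} microseconds per hand-off")
```

The measured figure includes scheduler and synchronization overhead on top of the raw switch, which is exactly the point: work spent switching is work not spent executing the application.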


@@ -0,0 +1,34 @@
---
title: The Onion Analogy
sidebar_position: 1
---
![](../img/onion.jpg)
# Cloud Stacks: The Onion Analogy
Most cloud stacks can be compared to an onion, where each layer represents an additional component or service added to address a problem in the system. However, like peeling an onion, as you dig deeper, you often find that these layers are not solving the core issues but merely masking symptoms, leading to a complex and often fragile structure.
#### 1. **The Outer Layers: Quick Fixes and Additions**
- **Problem:** When an issue arises, such as performance bottlenecks or security vulnerabilities, organizations often add another tool, service, or layer to the cloud stack to mitigate the issue.
- **Analogy:** This is akin to applying a bandage or taking a painkiller when you feel pain. The immediate discomfort might be alleviated, but the underlying problem remains untouched.
#### 2. **The Middle Layers: Compounded Complexity**
- **Problem:** As more layers are added to solve different issues, the cloud stack becomes increasingly complicated. Each new layer interacts with the existing ones, often in unpredictable ways, leading to a system that is difficult to manage and troubleshoot.
- **Analogy:** Just like adding more painkillers to treat worsening symptoms, the system becomes dependent on these layers to function. However, this doesn't address the root cause of the issues; instead, it creates a reliance on temporary fixes that complicate the system further.
- **Example:** Security patches or monitoring tools are added after incidents of data breaches or unauthorized access. While these layers enhance security, they do not address the underlying issue of poor security practices in the original architecture, leading to a cloud stack that is more difficult to maintain and secure.
#### 3. **The Core: Root Causes Ignored**
- **Problem:** At the core of the onion, the fundamental issues often remain unaddressed. These could be poor initial design choices, lack of planning, or failure to align the cloud architecture with the business's long-term needs.
- **Analogy:** Similar to how treating only the symptoms of an illness without addressing its cause can lead to recurring issues, adding layers to a cloud stack without fixing the root problems results in a cycle of ongoing maintenance, inefficiency, and potential failure.
- **Example:** If a cloud environment was initially set up without considering future scalability, each layer added to address scaling problems doesn't solve the underlying issue of an inflexible architecture. As the system grows, the layers pile up, making the system more cumbersome and fragile.
### Painkiller Approach: Treating Symptoms, Not Causes
This onion-like structure represents a "painkiller approach" to cloud management, where immediate issues are addressed with quick fixes rather than tackling the underlying problems. Over time, this approach leads to several challenges:
- **Cyber Pandemic:** The cyber pandemic is real, and adding these layers leads to weak security.
- **Increased Complexity:** Each new layer adds complexity, making the system harder to understand and maintain.
- **Higher Costs:** More layers often mean more resources, licenses, and management overhead, increasing operational costs.
- **Reduced Agility:** The more complex the stack, the harder it is to make changes or adapt to new requirements, reducing the system's overall agility.
- **Fragility:** A stack built on temporary fixes is more prone to failures because the core issues are not resolved, making the system brittle.