updated smaller collections for manual
@@ -0,0 +1,66 @@
## Farmers providing transit for Tenant Networks (TN or Network)

For the networks of a user to be reachable, these networks need penultimate Network Resources that act as exit nodes for the WireGuard mesh.

For that, users need to solicit a routable network from farmers that provide such a service.

### Global registry for network resources (`GRNR`?)

ThreeFold, through BCDB, should keep a store where Farmers can also register a network service for Tenant Network (TN) reachability.

In a network transaction, the first thing asked should be where a user wants to purchase their transit. That can be with a nearby (latency or geolocation) Exit Provider (which can e.g. be a Farmer), or with an Exit Provider outside of the geolocation for easier routing towards the primary entrypoint (VPN-like services come to mind).

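Purely as a sketch of what such a transaction could look like (the GRNR does not exist yet; every endpoint and parameter name below is invented for illustration):

```bash
# Hypothetical GRNR lookups -- nothing like this is implemented yet.
# Ask the registry for Exit Providers close to the user (latency/geolocation):
curl "http://bcdb_addr/grnr/exit-providers?near=user_location&sort=latency"

# Or deliberately pick a provider outside the user's region (VPN-like use case):
curl "http://bcdb_addr/grnr/exit-providers?region=other"
```
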
With this, we could envision at a later stage having the Network Resources be IPv6 multihomed with policy-based routing. That adds the possibility of having multiple exit nodes for the same Network, with different IPv6 routes to them.

### Datastructure

A registered Farmer can also register his (DC-located?) network to be sold as transit space. For that he registers:

- the IPv4 addresses that can be allocated to exit nodes
- the IPv6 prefix he obtained to be used in the Grid
- the nodes that will serve as exit nodes

These nodes need to have IPv[46] access to routable address space through:

- physical access on an interface of the node
- access on a public `vlan` or via `vxlan / mpls / gre`

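The second document in this manual shows the concrete `tffarmer` commands that implement this registration; as a preview (values copied from the walkthrough below):

```bash
# Register the IPv6 allocation the farmer obtained for the Grid:
tffarmer give-alloc 2a02:2788:0000::/32 --seed myfarm.seed

# Mark a node as exit node and give it its public IPv4 configuration:
tffarmer select-exit kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
tffarmer configure-public --ip 172.20.0.2/24 --gw 172.20.0.1 --iface eth1 kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
```
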
Together with the registered nodes that will be part of that Public segment, the TNoDB (BCDB) can verify a Network Object containing an ExitPoint for a Network and add it to the queue for ExitNodes to fetch and apply.

Physically, Nodes can be connected in several ways:

- living directly on the Internet (with a routable IPv4 and/or IPv6 address), without provider-enforced firewalling (outgoing traffic only)
- having an IPv4 allocation **and** an IPv6 allocation
- having a single IPv4 address **and** a single IPv6 allocation (/64), or even (Oh God Why) a single IPv6 address
- living in a Farm that has Nodes only reachable through NAT for IPv4 and no IPv6
- living in a Farm that has NAT IPv4 and routable IPv6 with an allocation
- living in a single segment having RFC1918 IPv4 and only one IPv6 /64 prefix (home Nodes, mostly)

#### A Network Resource allocation

We define a Network Resource (NR) as a routable IPv6 `/64` prefix. So every time a new TNo is generated and validated (containing a new serial number and an added/removed NR), a request has been made to obtain a valid IPv6 `/64` prefix to be added to the TNo.

Basically it's just a list of allocations in that prefix that are in use. Any free prefix will do, as we do routing in the exit nodes with a `/64` granularity.

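Concretely (an illustrative sketch, not existing tooling; the interface names are made up, the prefixes are taken from the example network schema later in this manual), `/64` granularity means the exit node just installs one route per Network Resource towards the right WireGuard interface:

```bash
# Illustrative only: one /64 route per Network Resource on the exit node.
ip -6 route add 2001:b:a:8ac6::/64 dev wg-8ac6   # NR of the exit node itself
ip -6 route add 2001:b:a:b744::/64 dev wg-b744   # NR of a hidden node, via its WG tunnel
```
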
The TNoDB (BCDB) then validates/updates the Tenant Network Object with that new Network Resource and places it on a queue to be fetched by the interested Nodes.

#### The Nodes responsible for ExitPoints

A Node responsible for ExitPoints as well as a public endpoint will know so because of how it is registered in the TNoDB (BCDB). That is:

- it is defined as an exit node
- the TNoDB hands out an Object that describes its public connectivity, i.e.:
  - the public IPv4 address(es) it can use
  - the IPv6 prefix in the network segment that contains the penultimate default route
  - possibly a private BGP AS number for announcing the `/64` prefixes of a Tenant Network, and the BGP peer(s)

With that information, a Node can then build the network namespace from which it builds the WireGuard interfaces, prior to sending them into the ExitPoint namespace.

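In plain iproute2/WireGuard terms, that dance looks roughly like the following (a minimal sketch; the namespace name `exitpoint`, the interface name `wg-8ac6`, and the key path are invented, while the port and link-local address come from the example schema below):

```bash
# Sketch: create the WG interface in the init namespace (so its UDP socket
# binds to the public endpoint), then move it into the ExitPoint namespace.
ip netns add exitpoint
ip link add wg-8ac6 type wireguard
wg set wg-8ac6 listen-port 1600 private-key /etc/wg/exit.key
ip link set wg-8ac6 netns exitpoint
ip -n exitpoint addr add fe80::8ac6/64 dev wg-8ac6
ip -n exitpoint link set wg-8ac6 up
```
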
So the TNoDB (BCDB) hands out:

- Tenant Network Objects
- Public Interface Objects

They are related:

- A Node can have Network Resources
- A Network Resource can have one (1) Public Interface
- Both are part of a Tenant Network

A TNo defines a Network where ONLY the ExitPoint is flagged as being one. No more.

When the Node (networkd) needs to set up a public node, it will need to act differently:

- Verify if the Node is **really** public; if so, use the standard WG interface setup.
- If not, verify if there is already a Public Exit Namespace defined, and create the WG interface there.
- If there is no Public Exit Namespace yet, request one, and set it up first.

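As pseudo-shell, the decision flow reads like this (every helper name here is invented for illustration; this is not networkd code):

```bash
# Sketch of networkd's public-node decision, helper names hypothetical.
if node_is_public; then
    setup_wg_in_init_netns            # standard WG interface setup
elif ip netns list | grep -q '^public'; then
    setup_wg_in_netns public          # reuse the existing Public Exit Namespace
else
    request_public_exit_namespace     # ask the TNoDB for a Public Interface Object
    setup_public_exit_namespace
    setup_wg_in_netns public
fi
```
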
@@ -0,0 +1,264 @@
# Network

- [How does a farmer configure a node as exit node](#How-does-a-farmer-configure-a-node-as-exit-node)
- [How to create a user private network](#How-to-create-a-user-private-network)

## How does a farmer configure a node as exit node

For the network of the grid to work properly, some of the nodes in the grid need to be configured as "exit nodes". An "exit node" is a node that has a publicly accessible IP address and that is responsible for routing IPv6 traffic or proxying IPv4 traffic.

A farmer that wants to configure one of his nodes as "exit node" needs to register it in the TNODB. The node will then automatically detect it has been configured to be an exit node and do the necessary network configuration to start acting as one.

At the current state of the development, we have a [TNODB mock](../../tools/tnodb_mock) server and a [tffarmer CLI](../../tools/tffarm) tool that can be used to do this configuration.

Here is an example of how a farmer could register one of his nodes as "exit node":

1. Farmer needs to create his farm identity:

```bash
tffarmer register --seed myfarm.seed "mytestfarm"
Farm registered successfully
Name: mytestfarm
Identity: ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=
```

2. Boot your nodes with your farm identity specified in the kernel parameters.

Take the farm identity created at step 1 and boot your node with the kernel parameter `farmer_id=<identity>`.

For your test farm that would be `farmer_id=ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=`.

Once the node is booted, it will automatically register itself as being part of your farm into the [TNODB](../../tools/tnodb_mock) server.

You can verify that your node registered itself properly by listing all the nodes from the TNODB with a GET request on the `/nodes` endpoint:

```bash
curl http://tnodb_addr/nodes
[{"node_id":"kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=","farm_id":"ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=","Ifaces":[]}]
```

3. Farmer needs to specify his public allocation range to the TNODB:

```bash
tffarmer give-alloc 2a02:2788:0000::/32 --seed myfarm.seed
prefix registered successfully
```

4. Configure the public interface of the exit node if needed.

In this step the farmer tells his node how it needs to connect to the public internet. This configuration depends on the farm's network setup, which is why it is up to the farmer to provide the details on how the node needs to configure itself.

In a first phase, we support internet access in two ways:

- the node is fully public: you don't need to configure a public interface, and you can skip this step
- the node has a management interface and a NIC for public traffic: then `configure-public` is required, and the farmer has the public interface connected to a specific public segment with a router to the internet in front.

```bash
tffarmer configure-public --ip 172.20.0.2/24 --gw 172.20.0.1 --iface eth1 kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
#public interface configured on node kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
```

We still need to figure out a way to get the routes properly installed; for now we'll configure static routes on the top-level router for the demo.

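For the demo, that boils down to something like the following on the top-level router (a sketch only: Linux route syntax assumed, the prefix is the farm allocation registered in step 3, and the next-hop address is invented for illustration):

```bash
# Sketch: static return route for the farm allocation, pointing at the
# exit node's public address (address here is illustrative).
ip -6 route add 2a02:2788::/32 via 2a02:1802:5e::223
```
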
The node is now configured to be used as an exit node.

5. Mark a node as being an exit node.

The farmer then needs to select which node he agrees to use as an exit node for the grid:

```bash
tffarmer select-exit kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA=
#Node kV3u7GJKWA7Js32LmNA5+G3A0WWnUG9h+5gnL6kr6lA= marked as exit node
```

## How to create a user private network

1. Choose an exit node.
2. Request a new allocation from the farm of the exit node.
   - a GET request on the tnodb_mock at `/allocations/{farm_id}` will give you a new allocation (see the example below)
3. Create the network schema.

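The allocation request from step 2 is a plain HTTP call; for instance (the farm ID is the one registered earlier, and the response shown is illustrative, not the mock's exact output format):

```bash
curl http://tnodb_addr/allocations/ZF6jtCblLhTgAqp2jvxKkOxBgSSIlrRh1mRGiZaRr7E=
# illustrative response, carved out of the farm's /32 allocation:
# {"allocation":"2a02:2788:0001::/48"}
```
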
Steps 1 and 2 are easy enough to be done manually, but step 3 requires deep knowledge of how networking works as well as the specific requirements of the 0-OS network system.
This is why we provide a tool that simplifies this process for you: [tfuser](../../tools/tfuser).

Using tfuser, creating a network becomes trivial:

```bash
# creates a new network with node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk as exit node
# and outputs the result into network.json
tfuser generate --schema network.json network create --node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk
```

network.json will now contain something like:

```json
{
  "id": "",
  "tenant": "",
  "reply-to": "",
  "type": "network",
  "data": {
    "network_id": "J1UHHAizuCU6s9jPax1i1TUhUEQzWkKiPhBA452RagEp",
    "resources": [
      {
        "node_id": {
          "id": "DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk",
          "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi",
          "reachability_v4": "public",
          "reachability_v6": "public"
        },
        "prefix": "2001:b:a:8ac6::/64",
        "link_local": "fe80::8ac6/64",
        "peers": [
          {
            "type": "wireguard",
            "prefix": "2001:b:a:8ac6::/64",
            "Connection": {
              "ip": "2a02:1802:5e::223",
              "port": 1600,
              "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=",
              "private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662"
            }
          }
        ],
        "exit_point": true
      }
    ],
    "prefix_zero": "2001:b:a::/64",
    "exit_point": {
      "ipv4_conf": null,
      "ipv4_dnat": null,
      "ipv6_conf": {
        "addr": "fe80::8ac6/64",
        "gateway": "fe80::1",
        "metric": 0,
        "iface": "public"
      },
      "ipv6_allow": []
    },
    "allocation_nr": 0,
    "version": 0
  }
}
```

This is a valid network schema, but it only contains a single exit node, so it is not really useful yet.
Let's add another node to the network:

```bash
tfuser generate --schema network.json network add-node --node 4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4
```

The result looks like:

```json
{
  "id": "",
  "tenant": "",
  "reply-to": "",
  "type": "network",
  "data": {
    "network_id": "J1UHHAizuCU6s9jPax1i1TUhUEQzWkKiPhBA452RagEp",
    "resources": [
      {
        "node_id": {
          "id": "DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk",
          "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi",
          "reachability_v4": "public",
          "reachability_v6": "public"
        },
        "prefix": "2001:b:a:8ac6::/64",
        "link_local": "fe80::8ac6/64",
        "peers": [
          {
            "type": "wireguard",
            "prefix": "2001:b:a:8ac6::/64",
            "Connection": {
              "ip": "2a02:1802:5e::223",
              "port": 1600,
              "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=",
              "private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662"
            }
          },
          {
            "type": "wireguard",
            "prefix": "2001:b:a:b744::/64",
            "Connection": {
              "ip": "<nil>",
              "port": 0,
              "key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=",
              "private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5"
            }
          }
        ],
        "exit_point": true
      },
      {
        "node_id": {
          "id": "4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4",
          "farmer_id": "7koUE4nRbdsqEbtUVBhx3qvRqF58gfeHGMRGJxjqwfZi",
          "reachability_v4": "hidden",
          "reachability_v6": "hidden"
        },
        "prefix": "2001:b:a:b744::/64",
        "link_local": "fe80::b744/64",
        "peers": [
          {
            "type": "wireguard",
            "prefix": "2001:b:a:8ac6::/64",
            "Connection": {
              "ip": "2a02:1802:5e::223",
              "port": 1600,
              "key": "PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE=",
              "private_key": "9220e4e29f0acbf3bd7ef500645b78ae64b688399eb0e9e4e7e803afc4dd72418a1c5196208cb147308d7faf1212758042f19f06f64bad6ffe1f5ed707142dc8cc0a67130b9124db521e3a65e4aee18a0abf00b6f57dd59829f59662"
            }
          },
          {
            "type": "wireguard",
            "prefix": "2001:b:a:b744::/64",
            "Connection": {
              "ip": "<nil>",
              "port": 0,
              "key": "3auHJw3XHFBiaI34C9pB/rmbomW3yQlItLD4YSzRvwc=",
              "private_key": "96dc64ff11d05e8860272b91bf09d52d306b8ad71e5c010c0ccbcc8d8d8f602c57a30e786d0299731b86908382e4ea5a82f15b41ebe6ce09a61cfb8373d2024c55786be3ecad21fe0ee100339b5fa904961fbbbd25699198c1da86c5"
            }
          }
        ],
        "exit_point": false
      }
    ],
    "prefix_zero": "2001:b:a::/64",
    "exit_point": {
      "ipv4_conf": null,
      "ipv4_dnat": null,
      "ipv6_conf": {
        "addr": "fe80::8ac6/64",
        "gateway": "fe80::1",
        "metric": 0,
        "iface": "public"
      },
      "ipv6_allow": []
    },
    "allocation_nr": 0,
    "version": 1
  }
}
```

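Note that the new node is `hidden` (NATed IPv4, no routable IPv6), which is why its `Connection.ip` is `<nil>`: the exit node cannot dial it, so the hidden node initiates the tunnel. For orientation only (not actual tfuser or 0-OS output; the interface name is invented and the extra link-local allowed-ip is an assumption), the exit-node peer entry above translates into `wg` terms on the hidden node roughly as:

```bash
# Sketch: the exit-node peer from the schema, expressed as a wg(8) command.
# Endpoint, port, public key, and the /64 come straight from the JSON above.
wg set wg-b744 \
    peer PK1L7n+5Fo1znwD/Dt9lAupL19i7a6zzDopaEY7uOUE= \
    endpoint [2a02:1802:5e::223]:1600 \
    allowed-ips 2001:b:a:8ac6::/64,fe80::8ac6/128
```
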
Our network schema is now ready, but before we can provision it onto a node, we need to sign it and send it to the BCDB.
To be able to sign it we need a key pair. You can use the `tfuser id` command to create an identity:

```bash
tfuser id --output user.seed
```

We can now provision the network on both nodes:

```bash
tfuser provision --schema network.json \
  --node DLFF6CAshvyhCrpyTHq1dMd6QP6kFyhrVGegTgudk6xk \
  --node 4hpUjrbYS4YeFbvLoeSR8LGJKVkB97JyS83UEhFUU3S4 \
  --seed user.seed
```

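Once provisioned, you can eyeball the result on a node (illustrative only; namespace and interface naming is internal to 0-OS and may differ):

```bash
# Expect a network namespace per network resource, with a WG interface
# whose peers match the ones described in network.json.
ip netns list
ip netns exec <netns> wg show
```
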
@@ -0,0 +1,54 @@
#!/usr/bin/bash
# Generates OpenWrt/dnsmasq static-lease stanzas (UCI `config host`) for the
# test-lab nodes: management NICs get 10.5.0.11+, IPMI NICs get 10.5.0.101+.

mgmtnic=(
0c:c4:7a:51:e3:6a
0c:c4:7a:51:e9:e6
0c:c4:7a:51:ea:18
0c:c4:7a:51:e3:78
0c:c4:7a:51:e7:f8
0c:c4:7a:51:e8:ba
0c:c4:7a:51:e8:0c
0c:c4:7a:51:e7:fa
)

ipminic=(
0c:c4:7a:4c:f3:b6
0c:c4:7a:4d:02:8c
0c:c4:7a:4d:02:91
0c:c4:7a:4d:02:62
0c:c4:7a:4c:f3:7e
0c:c4:7a:4d:02:98
0c:c4:7a:4d:02:19
0c:c4:7a:4c:f2:e0
)

# Static leases for the management interfaces: zosv2tst-1 -> 10.5.0.11, etc.
cnt=1
for i in "${mgmtnic[@]}" ; do
cat << EOF
config host
	option name 'zosv2tst-${cnt}'
	option dns '1'
	option mac '${i}'
	option ip '10.5.0.$((cnt + 10))'

EOF
let cnt++
done

# Static leases for the IPMI interfaces: ipmiv2tst-1 -> 10.5.0.101, etc.
cnt=1
for i in "${ipminic[@]}" ; do
cat << EOF
config host
	option name 'ipmiv2tst-${cnt}'
	option dns '1'
	option mac '${i}'
	option ip '10.5.0.$((cnt + 100))'

EOF
let cnt++
done

# Emit the symlink commands that point every management MAC at the shared
# boot config, using the 01-<mac-with-dashes> naming convention (as used by PXE).
for i in "${mgmtnic[@]}" ; do
echo ln -s zoststconf 01-"$(echo "$i" | sed s/:/-/g)"
done