HTTP proxy (#1772)

* adding skeleton for new http_proxy.

* more stuff yay

* more stuff yay

* more stuff

* more stuff

* "working" poc

* more stuff

* more stuff

* fix more stuff

* working proxy

* moved to handler style for requests

* clean up

* cleaning stuff up

* more stuff

* refactoring code

* more changes

* refactoring handle

* refactored code

* more stuff

* Co-authored-by: J M <Blu-J@users.noreply.github.com>

* more stuff

* more stuff

* working main ui handle

* Implement old code to handler in static server

* Feat/long running (#1676)

* feat: Start the long running container

* feat: Long running docker, running, stopping, and uninstalling

* feat: Just make the folders that we would like to mount.

* fix: Uninstall not working

* chore: remove some logging

* feat: Smarter cleanup

* feat: Wait for start

* wip: Need to kill

* chore: Remove the bad tracing

* feat: Stopping the long running processes without killing the long
running

* Minor Feat: Change the Manifest To have a new type (#1736)

* Add build-essential to README.md (#1716)

Update README.md

* write image to sparse-aware archive format (#1709)

* fix: Add modification to the max_user_watches (#1695)

* fix: Add modification to the max_user_watches

* chore: Move to initialization

* [Feat] follow logs (#1714)

* tail logs

* add cli

* add FE

* abstract http to shared

* batch new logs

* file download for logs

* fix modal error when no config

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: BluJ <mogulslayer@gmail.com>

* Update README.md (#1728)

* fix build for patch-db client for consistency (#1722)

* fix cli install (#1720)

* highlight instructions if not viewed (#1731)

* wip:

* [ ] Fix the build (dependencies:634 map for option)

* fix: Cargo build

* fix: Long running wasn't starting

* fix: uninstall works

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>

* chore: Fix a dbg!

* chore: Make the commands of the docker-inject do inject instead of exec

* chore: Fix compile mistake

* chore: Change to use simpler

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>

* remove recovered services and drop reordering feature (#1829)

* chore: Convert the migration to use receipt. (#1842)

* feat: remove ionic storage (#1839)

* feat: remove ionic storage

* grayscale when disconnected, rename local storage service for clarity

* remove storage from package lock

* update patchDB

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

* update patch DB

* working http server

* Feat/community marketplace (#1790)

* add community marketplace

* Update embassy-mock-api.service.ts

* expect ui/marketplace to be undefined

* possible undefined from getpackage

* fix marketplace pages

* rework marketplace infrastructure

* fix bugs

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* remove unwrap

* cleanup code

* d

* more stuff

* fix: make `shared` module independent of `config.js` (#1870)

* cert stuff WIP

* MORE CERT STUFF

* more stuff

* more stuff

* more stuff

* abstract service fn

* almost ssl

* fix ssl export

* fix: small fix for marketplace header styles (#1873)

* feat: setup publishing of share and marketplace packages (#1874)

* include marketplace in linter

* fix npm publish scripts and bump versions of libs

* feat: add assets to published packages and fix peerDeps versions (#1875)

* bump peer dep

* fix: add assets to published paths (#1876)

* allow ca download over lan (#1877)

* only desaturate when logged in and not fully

* Feature/multi platform (#1866)

* wip

* wip

* wip

* wip

* wip

* wip

* remove debian dir

* lazy env and git hash

* remove env and git hash on clean

* don't leave project dir

* use docker for native builds

* start9 rust

* correctly mount registry

* remove systemd config

* switch to /usr/bin

* disable sound for now

* wip

* change disk list

* multi-arch images

* multi-arch system images

* default aarch64

* edition 2021

* dynamic wifi interface name

* use wifi interface from config

* bugfixes

* add beep based sound

* wip

* wip

* wip

* separate out raspberry pi specific files

* fixes

* use new initramfs always

* switch journald conf to sed script

* fixes

* fix permissions

* talking about kernel modules not scripts

* fix

* fix

* switch to MBR

* install to /usr/lib

* fixes

* fixes

* fixes

* fixes

* add media config to cfg path

* fixes

* fixes

* fixes

* raspi image fixes

* fix test

* fix workflow

* sync boot partition

* gahhhhh

* more stuff

* remove restore warning and better messaging for backup/restore (#1881)

* Update READMEs (#1885)

* docs

* fix host key generation

* debugging eos with tokio console

* fix recursive default

* build improvements (#1886)

* build improvements

* no workdir

* kiosk fully working

* setup profile prefs

* Feat/js long running (#1879)

* wip: combining the streams

* chore: Testing locally

* chore: Fix some lint

* wip: making the manager create

* wip: Working on trying to make the long running docker container command

* feat: Use the long running feature in the manager

* wip: Need to get the initial docker command running?

* chore: Add in the new procedure for the docker.

* feat: Get the system to finally run long

* wip: Added the command inserter to the docker persistence

* feat: Move the run_command into the js

* chore: Change the error catching for the long running to try all

* WIP: Fix the build, needed to move around creation of exec

* wip: Working on solving why there is a missing end.

* fix: make `shared` module independent of `config.js` (#1870)

* feat: Add in the kill and timeout

* feat: Get the run to actually work.

* chore: Add when/ why/ where comments

* feat: Convert inject main to use exec main.

* Fix: Ability to stop services

* wip: long running js main

* feat: Kill for the main

* Fix

* fix: Fix the build for x86

* wip: Working on changes

* wip: Working on trying to kill js

* fix: Testing for slow

* feat: Test that the new manifest works

* chore: Try and fix build?

* chore: Fix? the build

* chore: Fix the long input dies and never restarts

* build improvements

* no workdir

* fix: Architecture for long running

* chore: Fix and remove the docker inject

* chore: Undo the changes to the kiosk mode

* fix: Remove the it from the prod build

* fix: Start issue

* fix: The compat build

* chore: Add in the conditional compilation again for the missing impl

* chore: Change to aux

* chore: Remove the aux for now

* chore: Add some documentation to docker container

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>

* use old resolv.conf until systemd is on

* update patchdb

* update patch db submodule

* no x11 wrapper config

* working poc

* fixing misc stuff

* switch patchdb to next

* Feat/update tab (#1865)

* implement updates tab for viewing all updates from all marketplaces in one place

* remove auto-check-updates

* feat: implement updates page (#1888)

* feat: implement updates page

* chore: comments

* better styling in update tab

* rework marketplace service (#1891)

* rework marketplace service

* remove unneeded ?

* fix: refactor marketplace to cache requests

Co-authored-by: waterplea <alexander@inkin.ru>

Co-authored-by: Alex Inkin <alexander@inkin.ru>

* misc fixes (#1894)

* changing hostname stuff

* changes

* move marketplace settings into marketplace tab (#1895)

* move marketplace settings into marketplace tab

* Update frontend/projects/ui/src/app/modals/marketplace-settings/marketplace-settings.page.ts

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* bump marketplace version

* removing old code

* working service proxy

* fqdn struct wip

* new types for ssl proxy

* restructure restore.rs and embassyd.rs

* adding dbg

* debugging proxy handlers

* add lots of debugging for the svc handler removal bug

* debugging

* remove extra code

* fixing proxy and removing old debug code

* finalizing proxy code to serve the setup ui and diag ui

* final new eos http proxy

* remove unneeded trace error

* remove extra file

* not needed flags

* clean up

* Fix/debug (#1909)

chore: Use debug by default

* chore: Fix on the rsync not having stdout. (#1911)

* install wizard project (#1893)

* install wizard project

* reboot endpoint

* Update frontend/projects/install-wizard/src/app/pages/home/home.page.ts

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* Update frontend/projects/install-wizard/src/app/pages/home/home.page.ts

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* Update frontend/projects/install-wizard/src/app/pages/home/home.page.ts

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* update build

* fix build

* backend portion

* increase image size

* loaded

* dont auto resize

* fix install wizard

* use localhost if still in setup mode

* fix compat

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>

* fix kiosk

* integrate install wizard

* fix build typo

* no nginx

* fix build

* remove nginx stuff from build

* fixes

Co-authored-by: Stephen Chavez <stephen@start9labs.com>
Co-authored-by: J M <2364004+Blu-J@users.noreply.github.com>
Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
This commit is contained in:
Stephen Chavez
2022-11-09 05:30:26 +00:00
committed by Aiden McClelland
parent d215d96b9b
commit 0fc546962e
38 changed files with 2130 additions and 1410 deletions

@@ -0,0 +1,216 @@
use std::collections::BTreeMap;
use std::io;
use std::pin::Pin;
use std::str::FromStr;
use std::sync::{Arc, RwLock};
use std::task::{Context, Poll};
use color_eyre::eyre::eyre;
use futures::{ready, Future};
use hyper::server::accept::Accept;
use hyper::server::conn::{AddrIncoming, AddrStream};
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use tokio::io::{AsyncRead, AsyncWrite, ReadBuf};
use tokio_rustls::rustls::server::ResolvesServerCert;
use tokio_rustls::rustls::sign::{any_supported_type, CertifiedKey};
use tokio_rustls::rustls::{Certificate, PrivateKey, ServerConfig};
use crate::net::net_utils::ResourceFqdn;
use crate::Error;
enum State {
Handshaking(tokio_rustls::Accept<AddrStream>),
Streaming(tokio_rustls::server::TlsStream<AddrStream>),
}
// tokio_rustls::server::TlsStream doesn't expose constructor methods,
// so we have to call TlsAcceptor::accept and complete the handshake to get one.
// TlsStream implements AsyncRead/AsyncWrite by first driving the pending
// tokio_rustls::Accept handshake to completion, then delegating to the inner stream.
pub struct TlsStream {
state: State,
}
impl TlsStream {
fn new(stream: AddrStream, config: Arc<ServerConfig>) -> TlsStream {
let accept = tokio_rustls::TlsAcceptor::from(config).accept(stream);
TlsStream {
state: State::Handshaking(accept),
}
}
}
impl AsyncRead for TlsStream {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context,
buf: &mut ReadBuf,
) -> Poll<io::Result<()>> {
let pin = self.get_mut();
match pin.state {
State::Handshaking(ref mut accept) => match ready!(Pin::new(accept).poll(cx)) {
Ok(mut stream) => {
let result = Pin::new(&mut stream).poll_read(cx, buf);
pin.state = State::Streaming(stream);
result
}
Err(err) => Poll::Ready(Err(err)),
},
State::Streaming(ref mut stream) => Pin::new(stream).poll_read(cx, buf),
}
}
}
impl AsyncWrite for TlsStream {
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<io::Result<usize>> {
let pin = self.get_mut();
match pin.state {
State::Handshaking(ref mut accept) => match ready!(Pin::new(accept).poll(cx)) {
Ok(mut stream) => {
let result = Pin::new(&mut stream).poll_write(cx, buf);
pin.state = State::Streaming(stream);
result
}
Err(err) => Poll::Ready(Err(err)),
},
State::Streaming(ref mut stream) => Pin::new(stream).poll_write(cx, buf),
}
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
match self.state {
State::Handshaking(_) => Poll::Ready(Ok(())),
State::Streaming(ref mut stream) => Pin::new(stream).poll_flush(cx),
}
}
fn poll_shutdown(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<io::Result<()>> {
match self.state {
State::Handshaking(_) => Poll::Ready(Ok(())),
State::Streaming(ref mut stream) => Pin::new(stream).poll_shutdown(cx),
}
}
}
impl ResolvesServerCert for EmbassyCertResolver {
fn resolve(
&self,
client_hello: tokio_rustls::rustls::server::ClientHello,
) -> Option<Arc<tokio_rustls::rustls::sign::CertifiedKey>> {
let hostname_raw = client_hello.server_name();
match hostname_raw {
Some(hostname_str) => {
let full_fqdn = match ResourceFqdn::from_str(hostname_str) {
Ok(fqdn) => fqdn,
Err(_) => {
tracing::error!("Error converting {} to fqdn struct", hostname_str);
return None;
}
};
let lock = self.cert_mapping.read();
match lock {
Ok(lock) => lock
.get(&full_fqdn)
.map(|cert_key| Arc::new(cert_key.to_owned())),
Err(err) => {
tracing::error!("resolve fn Error: {}", err);
None
}
}
}
None => None,
}
}
}
#[derive(Clone, Default)]
pub struct EmbassyCertResolver {
cert_mapping: Arc<RwLock<BTreeMap<ResourceFqdn, CertifiedKey>>>,
}
impl EmbassyCertResolver {
pub fn new() -> Self {
Self::default()
}
pub async fn add_certificate_to_resolver(
&mut self,
service_resource_fqdn: ResourceFqdn,
package_cert_data: (PKey<Private>, Vec<X509>),
) -> Result<(), Error> {
let x509_cert_chain = package_cert_data.1;
let private_keys = package_cert_data
.0
.private_key_to_der()
.map_err(|err| Error::new(eyre!("err {}", err), crate::ErrorKind::BytesError))?;
let mut full_rustls_certs = Vec::new();
for cert in x509_cert_chain.iter() {
let cert =
Certificate(cert.to_der().map_err(|err| {
Error::new(eyre!("err: {}", err), crate::ErrorKind::BytesError)
})?);
full_rustls_certs.push(cert);
}
let pre_sign_key = PrivateKey(private_keys);
let actual_sign_key = any_supported_type(&pre_sign_key)
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::SignError))?;
let cert_key = CertifiedKey::new(full_rustls_certs, actual_sign_key);
let mut lock = self
.cert_mapping
.write()
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::Network))?;
lock.insert(service_resource_fqdn, cert_key);
Ok(())
}
pub async fn remove_cert(&mut self, hostname: ResourceFqdn) -> Result<(), Error> {
let mut lock = self
.cert_mapping
.write()
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::Network))?;
lock.remove(&hostname);
Ok(())
}
}
pub struct TlsAcceptor {
config: Arc<ServerConfig>,
incoming: AddrIncoming,
}
impl TlsAcceptor {
pub fn new(config: Arc<ServerConfig>, incoming: AddrIncoming) -> TlsAcceptor {
TlsAcceptor { config, incoming }
}
}
impl Accept for TlsAcceptor {
type Conn = TlsStream;
type Error = io::Error;
fn poll_accept(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Option<Result<Self::Conn, Self::Error>>> {
let pin = self.get_mut();
match ready!(Pin::new(&mut pin.incoming).poll_accept(cx)) {
Some(Ok(sock)) => Poll::Ready(Some(Ok(TlsStream::new(sock, pin.config.clone())))),
Some(Err(e)) => Poll::Ready(Some(Err(e))),
None => Poll::Ready(None),
}
}
}
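
The resolver above is essentially a thread-safe FQDN-to-certificate map shared (via `Arc`) between the TLS acceptor, which reads it on every SNI lookup, and the code that installs or removes service certificates. A stdlib-only sketch of that shape, with a string standing in for rustls's `CertifiedKey` (the names here are illustrative, not the real API):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

// Stand-ins to keep the sketch stdlib-only.
type Fqdn = String;
type CertifiedKey = &'static str;

#[derive(Clone, Default)]
struct CertResolver {
    // Cloning the resolver clones the Arc, so all handles share one map.
    cert_mapping: Arc<RwLock<BTreeMap<Fqdn, CertifiedKey>>>,
}

impl CertResolver {
    fn add(&self, fqdn: Fqdn, key: CertifiedKey) {
        self.cert_mapping.write().unwrap().insert(fqdn, key);
    }
    fn remove(&self, fqdn: &str) {
        self.cert_mapping.write().unwrap().remove(fqdn);
    }
    // Mirrors ResolvesServerCert::resolve: look up the cert by SNI hostname.
    fn resolve(&self, sni: &str) -> Option<CertifiedKey> {
        self.cert_mapping.read().unwrap().get(sni).copied()
    }
}

fn main() {
    let resolver = CertResolver::default();
    let acceptor_handle = resolver.clone(); // what the TLS acceptor would hold
    resolver.add("svc.embassy.local".to_string(), "cert-A");
    assert_eq!(acceptor_handle.resolve("svc.embassy.local"), Some("cert-A"));
    resolver.remove("svc.embassy.local");
    assert_eq!(acceptor_handle.resolve("svc.embassy.local"), None);
    println!("ok");
}
```

Because certs are keyed by `ResourceFqdn`, installing or revoking a service cert never requires restarting the listener; the next handshake simply sees the updated map.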

@@ -0,0 +1,173 @@
use std::collections::BTreeMap;
use std::net::{IpAddr, SocketAddr};
use std::sync::Arc;
use helpers::NonDetachingJoinHandle;
use http::StatusCode;
use hyper::server::conn::AddrIncoming;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Error as HyperError, Response, Server};
use tokio::sync::oneshot;
use tokio_rustls::rustls::ServerConfig;
use tracing::error;
use crate::net::cert_resolver::TlsAcceptor;
use crate::net::net_utils::{host_addr_fqdn, ResourceFqdn};
use crate::net::HttpHandler;
use crate::Error;
static RES_NOT_FOUND: &[u8] = b"503 Service Unavailable";
static NO_HOST: &[u8] = b"No host header found";
pub struct EmbassyServiceHTTPServer {
pub svc_mapping: Arc<tokio::sync::RwLock<BTreeMap<ResourceFqdn, HttpHandler>>>,
pub shutdown: oneshot::Sender<()>,
pub handle: NonDetachingJoinHandle<()>,
pub ssl_cfg: Option<Arc<ServerConfig>>,
}
impl EmbassyServiceHTTPServer {
pub async fn new(
listener_addr: IpAddr,
port: u16,
ssl_cfg: Option<Arc<ServerConfig>>,
) -> Result<Self, Error> {
let (tx, rx) = tokio::sync::oneshot::channel::<()>();
let listener_socket_addr = SocketAddr::from((listener_addr, port));
let server_service_mapping = Arc::new(tokio::sync::RwLock::new(BTreeMap::<
ResourceFqdn,
HttpHandler,
>::new()));
let server_service_mapping1 = server_service_mapping.clone();
let bare_make_service_fn = move || {
let server_service_mapping = server_service_mapping.clone();
async move {
Ok::<_, HyperError>(service_fn(move |req| {
let server_service_mapping = server_service_mapping.clone();
async move {
let host = host_addr_fqdn(&req);
match host {
Ok(host_uri) => {
let res = {
let mapping = server_service_mapping.read().await;
let opt_handler = mapping.get(&host_uri).cloned();
opt_handler
};
match res {
Some(opt_handler) => {
let response = opt_handler(req).await;
match response {
Ok(resp) => Ok::<Response<Body>, hyper::Error>(resp),
Err(err) => Ok(respond_hyper_error(err)),
}
}
None => Ok(res_not_found()),
}
}
Err(e) => Ok(no_host_found(e)),
}
}
}))
}
};
let inner_ssl_cfg = ssl_cfg.clone();
let handle = tokio::spawn(async move {
match inner_ssl_cfg {
Some(cfg) => {
let incoming = AddrIncoming::bind(&listener_socket_addr)
.expect("failed to bind https listener");
let server = Server::builder(TlsAcceptor::new(cfg, incoming))
.http1_preserve_header_case(true)
.http1_title_case_headers(true)
.serve(make_service_fn(|_| bare_make_service_fn()))
.with_graceful_shutdown({
async {
rx.await.ok();
}
});
if let Err(e) = server.await {
error!("Hyper server error: {}", e);
}
}
None => {
let server = Server::bind(&listener_socket_addr)
.http1_preserve_header_case(true)
.http1_title_case_headers(true)
.serve(make_service_fn(|_| bare_make_service_fn()))
.with_graceful_shutdown({
async {
rx.await.ok();
}
});
if let Err(e) = server.await {
error!("Hyper server error: {}", e);
}
}
};
});
Ok(Self {
svc_mapping: server_service_mapping1,
handle: handle.into(),
shutdown: tx,
ssl_cfg,
})
}
pub async fn add_svc_handler_mapping(
&mut self,
fqdn: ResourceFqdn,
svc_handle: HttpHandler,
) -> Result<(), Error> {
let mut mapping = self.svc_mapping.write().await;
mapping.insert(fqdn.clone(), svc_handle);
Ok(())
}
pub async fn remove_svc_handler_mapping(&mut self, fqdn: ResourceFqdn) -> Result<(), Error> {
let mut mapping = self.svc_mapping.write().await;
mapping.remove(&fqdn);
Ok(())
}
}
/// HTTP 503 Service Unavailable, returned when no handler is mapped for the requested host
fn res_not_found() -> Response<Body> {
Response::builder()
.status(StatusCode::SERVICE_UNAVAILABLE)
.body(RES_NOT_FOUND.into())
.unwrap()
}
fn no_host_found(err: Error) -> Response<Body> {
let err_txt = format!("{}: Error {}", String::from_utf8_lossy(NO_HOST), err);
Response::builder()
.status(StatusCode::BAD_REQUEST)
.body(err_txt.into())
.unwrap()
}
fn respond_hyper_error(err: hyper::Error) -> Response<Body> {
let err_txt = format!("Error handling request: {}", err);
Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.body(err_txt.into())
.unwrap()
}
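The server above routes each request by parsing the `Host` header into a `ResourceFqdn` and looking it up in the shared `BTreeMap` of handlers, falling back to a 503 when no mapping exists. A minimal synchronous sketch of that dispatch, where the `Handler` alias is a hypothetical stand-in for the async `HttpHandler`:

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Stand-in for the async HttpHandler: maps a request path to a response body.
type Handler = Arc<dyn Fn(&str) -> String + Send + Sync>;

/// Look up the handler registered for `host` and invoke it,
/// falling back to a 503-style message when no mapping exists.
fn dispatch(mapping: &BTreeMap<String, Handler>, host: &str, path: &str) -> String {
    match mapping.get(host) {
        Some(handler) => handler(path),
        None => "503 Service Unavailable".to_string(),
    }
}
```

In the real server the map is behind a `tokio::sync::RwLock`, so handlers can be added and removed while requests are in flight.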


@@ -1,34 +1,31 @@
use std::net::{Ipv4Addr, SocketAddr};
use std::path::PathBuf;
use std::collections::BTreeMap;
use std::sync::Arc;
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use patch_db::DbHandle;
use futures::future::BoxFuture;
use hyper::{Body, Client, Error as HyperError, Request, Response};
use indexmap::IndexSet;
use rpc_toolkit::command;
use sqlx::PgPool;
use torut::onion::{OnionAddressV3, TorSecretKeyV3};
use tracing::instrument;
use self::interface::{Interface, InterfaceId};
#[cfg(feature = "avahi")]
use self::mdns::MdnsController;
use self::nginx::NginxController;
use self::ssl::SslManager;
use self::tor::TorController;
use crate::hostname::get_hostname;
use crate::net::dns::DnsController;
use crate::net::interface::TorConfig;
use crate::net::nginx::InterfaceMetadata;
use crate::s9pk::manifest::PackageId;
use self::interface::InterfaceId;
use crate::net::interface::LanPortConfig;
use crate::util::serde::Port;
use crate::Error;
pub mod dns;
pub mod interface;
#[cfg(feature = "avahi")]
pub mod mdns;
pub mod nginx;
pub mod embassy_service_http_server;
pub mod net_controller;
pub mod net_utils;
pub mod proxy_controller;
pub mod ssl;
pub mod cert_resolver;
pub mod static_server;
pub mod tor;
pub mod vhost_controller;
pub mod wifi;
const PACKAGE_CERT_PATH: &str = "/var/lib/embassy/ssl";
@@ -38,161 +35,23 @@ pub fn net() -> Result<(), Error> {
Ok(())
}
#[derive(Default)]
struct PackageNetInfo {
interfaces: BTreeMap<InterfaceId, InterfaceMetadata>,
}
pub struct InterfaceMetadata {
pub fqdn: String,
pub lan_config: BTreeMap<Port, LanPortConfig>,
pub protocols: IndexSet<String>,
}
/// Indicates that the net controller has created the
/// SSL keys
#[derive(Clone, Copy)]
pub struct GeneratedCertificateMountPoint(());
pub struct NetController {
pub tor: TorController,
#[cfg(feature = "avahi")]
pub mdns: MdnsController,
pub nginx: NginxController,
pub ssl: SslManager,
pub dns: DnsController,
}
impl NetController {
#[instrument(skip(db, handle))]
pub async fn init<Db: DbHandle>(
embassyd_addr: SocketAddr,
embassyd_tor_key: TorSecretKeyV3,
tor_control: SocketAddr,
dns_bind: &[SocketAddr],
db: PgPool,
import_root_ca: Option<(PKey<Private>, X509)>,
handle: &mut Db,
) -> Result<Self, Error> {
let ssl = match import_root_ca {
None => SslManager::init(db, handle).await,
Some(a) => SslManager::import_root_ca(db, a.0, a.1).await,
}?;
pub type HttpHandler = Arc<
dyn Fn(Request<Body>) -> BoxFuture<'static, Result<Response<Body>, HyperError>> + Send + Sync,
>;
let hostname_receipts = crate::hostname::HostNameReceipt::new(handle).await?;
let hostname = get_hostname(handle, &hostname_receipts).await?;
Ok(Self {
tor: TorController::init(embassyd_addr, embassyd_tor_key, tor_control).await?,
#[cfg(feature = "avahi")]
mdns: MdnsController::init().await?,
nginx: NginxController::init(PathBuf::from("/etc/nginx"), &ssl, &hostname).await?,
ssl,
dns: DnsController::init(dns_bind).await?,
})
}
pub fn ssl_directory_for(pkg_id: &PackageId) -> PathBuf {
PathBuf::from(format!("{}/{}", PACKAGE_CERT_PATH, pkg_id))
}
#[instrument(skip(self, interfaces, _generated_certificate))]
pub async fn add<'a, I>(
&self,
pkg_id: &PackageId,
ip: Ipv4Addr,
interfaces: I,
_generated_certificate: GeneratedCertificateMountPoint,
) -> Result<(), Error>
where
I: IntoIterator<Item = (InterfaceId, &'a Interface, TorSecretKeyV3)> + Clone,
for<'b> &'b I: IntoIterator<Item = &'b (InterfaceId, &'a Interface, TorSecretKeyV3)>,
{
let interfaces_tor = interfaces
.clone()
.into_iter()
.filter_map(|i| match i.1.tor_config.clone() {
None => None,
Some(cfg) => Some((i.0, cfg, i.2)),
})
.collect::<Vec<(InterfaceId, TorConfig, TorSecretKeyV3)>>();
let (tor_res, _, nginx_res, _) = tokio::join!(
self.tor.add(pkg_id, ip, interfaces_tor),
{
#[cfg(feature = "avahi")]
let mdns_fut = self.mdns.add(
pkg_id,
interfaces
.clone()
.into_iter()
.map(|(interface_id, _, key)| (interface_id, key)),
);
#[cfg(not(feature = "avahi"))]
let mdns_fut = futures::future::ready(());
mdns_fut
},
{
let interfaces = interfaces
.into_iter()
.filter_map(|(id, interface, tor_key)| match &interface.lan_config {
None => None,
Some(cfg) => Some((
id,
InterfaceMetadata {
dns_base: OnionAddressV3::from(&tor_key.public())
.get_address_without_dot_onion(),
lan_config: cfg.clone(),
protocols: interface.protocols.clone(),
},
)),
});
self.nginx.add(&self.ssl, pkg_id.clone(), ip, interfaces)
},
self.dns.add(pkg_id, ip),
);
tor_res?;
nginx_res?;
Ok(())
}
#[instrument(skip(self, interfaces))]
pub async fn remove<I: IntoIterator<Item = InterfaceId> + Clone>(
&self,
pkg_id: &PackageId,
ip: Ipv4Addr,
interfaces: I,
) -> Result<(), Error> {
let (tor_res, _, nginx_res, _) = tokio::join!(
self.tor.remove(pkg_id, interfaces.clone()),
{
#[cfg(feature = "avahi")]
let mdns_fut = self.mdns.remove(pkg_id, interfaces);
#[cfg(not(feature = "avahi"))]
let mdns_fut = futures::future::ready(());
mdns_fut
},
self.nginx.remove(pkg_id),
self.dns.remove(pkg_id, ip),
);
tor_res?;
nginx_res?;
Ok(())
}
pub async fn generate_certificate_mountpoint<'a, I>(
&self,
pkg_id: &PackageId,
interfaces: &I,
) -> Result<GeneratedCertificateMountPoint, Error>
where
I: IntoIterator<Item = (InterfaceId, &'a Interface, TorSecretKeyV3)> + Clone,
for<'b> &'b I: IntoIterator<Item = &'b (InterfaceId, &'a Interface, TorSecretKeyV3)>,
{
tracing::info!("Generating SSL Certificate mountpoints for {}", pkg_id);
let package_path = PathBuf::from(PACKAGE_CERT_PATH).join(pkg_id);
tokio::fs::create_dir_all(&package_path).await?;
for (id, _, key) in interfaces {
let dns_base = OnionAddressV3::from(&key.public()).get_address_without_dot_onion();
let ssl_path_key = package_path.join(format!("{}.key.pem", id));
let ssl_path_cert = package_path.join(format!("{}.cert.pem", id));
let (key, chain) = self.ssl.certificate_for(&dns_base, pkg_id).await?;
tokio::try_join!(
crate::net::ssl::export_key(&key, &ssl_path_key),
crate::net::ssl::export_cert(&chain, &ssl_path_cert)
)?;
}
Ok(GeneratedCertificateMountPoint(()))
}
pub async fn export_root_ca(&self) -> Result<(PKey<Private>, X509), Error> {
self.ssl.export_root_ca().await
}
}
pub type HttpClient = Client<hyper::client::HttpConnector>;


@@ -0,0 +1,274 @@
use std::net::{Ipv4Addr, SocketAddr};
use std::path::PathBuf;
use std::str::FromStr;
use models::InterfaceId;
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use patch_db::DbHandle;
use sqlx::PgPool;
use torut::onion::{OnionAddressV3, TorSecretKeyV3};
use tracing::instrument;
use crate::context::RpcContext;
use crate::hostname::{get_current_ip, get_embassyd_tor_addr, get_hostname, HostNameReceipt};
use crate::net::dns::DnsController;
use crate::net::interface::{Interface, TorConfig};
#[cfg(feature = "avahi")]
use crate::net::mdns::MdnsController;
use crate::net::net_utils::ResourceFqdn;
use crate::net::proxy_controller::ProxyController;
use crate::net::ssl::SslManager;
use crate::net::tor::TorController;
use crate::net::{
GeneratedCertificateMountPoint, HttpHandler, InterfaceMetadata, PACKAGE_CERT_PATH,
};
use crate::s9pk::manifest::PackageId;
use crate::Error;
pub struct NetController {
pub tor: TorController,
#[cfg(feature = "avahi")]
pub mdns: MdnsController,
pub proxy: ProxyController,
pub ssl: SslManager,
pub dns: DnsController,
}
impl NetController {
#[instrument(skip(db, db_handle))]
pub async fn init<Db: DbHandle>(
embassyd_addr: SocketAddr,
embassyd_tor_key: TorSecretKeyV3,
tor_control: SocketAddr,
dns_bind: &[SocketAddr],
db: PgPool,
db_handle: &mut Db,
import_root_ca: Option<(PKey<Private>, X509)>,
) -> Result<Self, Error> {
let receipts = HostNameReceipt::new(db_handle).await?;
let embassy_host_name = get_hostname(db_handle, &receipts).await?;
let embassy_name = embassy_host_name.local_domain_name();
let fqdn_name = ResourceFqdn::from_str(&embassy_name)?;
let ssl = match import_root_ca {
None => SslManager::init(db.clone(), db_handle).await,
Some(a) => SslManager::import_root_ca(db.clone(), a.0, a.1).await,
}?;
Ok(Self {
tor: TorController::init(embassyd_addr, embassyd_tor_key, tor_control).await?,
#[cfg(feature = "avahi")]
mdns: MdnsController::init().await?,
proxy: ProxyController::init(embassyd_addr, fqdn_name, ssl.clone()).await?,
ssl,
dns: DnsController::init(dns_bind).await?,
})
}
pub async fn setup_embassy_ui(rpc_ctx: RpcContext) -> Result<(), Error> {
NetController::setup_embassy_http_ui_handle(rpc_ctx.clone()).await?;
NetController::setup_embassy_https_ui_handle(rpc_ctx.clone()).await?;
Ok(())
}
async fn setup_embassy_https_ui_handle(rpc_ctx: RpcContext) -> Result<(), Error> {
let host_name = rpc_ctx.net_controller.proxy.get_hostname().await;
let host_name_fqdn: ResourceFqdn = host_name.parse()?;
let handler: HttpHandler =
crate::net::static_server::main_ui_server_router(rpc_ctx.clone()).await?;
let eos_pkg_id: PackageId = "embassy".parse().unwrap();
if let ResourceFqdn::Uri {
full_uri: _,
root,
tld: _,
} = host_name_fqdn.clone()
{
let root_cert = rpc_ctx
.net_controller
.ssl
.certificate_for(&root, &eos_pkg_id)
.await?;
rpc_ctx
.net_controller
.proxy
.add_certificate_to_resolver(host_name_fqdn.clone(), root_cert.clone())
.await?;
rpc_ctx
.net_controller
.proxy
.add_handle(443, host_name_fqdn.clone(), handler.clone(), true)
.await?;
};
// Serving HTTPS directly on an IP address is not yet supported
Ok(())
}
async fn setup_embassy_http_ui_handle(rpc_ctx: RpcContext) -> Result<(), Error> {
let host_name = rpc_ctx.net_controller.proxy.get_hostname().await;
let ip = get_current_ip(rpc_ctx.ethernet_interface.to_owned()).await?;
let embassy_tor_addr = get_embassyd_tor_addr(rpc_ctx.clone()).await?;
let embassy_tor_fqdn: ResourceFqdn = embassy_tor_addr.parse()?;
let host_name_fqdn: ResourceFqdn = host_name.parse()?;
let ip_fqdn: ResourceFqdn = ip.parse()?;
let handler: HttpHandler =
crate::net::static_server::main_ui_server_router(rpc_ctx.clone()).await?;
rpc_ctx
.net_controller
.proxy
.add_handle(80, embassy_tor_fqdn.clone(), handler.clone(), false)
.await?;
rpc_ctx
.net_controller
.proxy
.add_handle(80, host_name_fqdn.clone(), handler.clone(), false)
.await?;
rpc_ctx
.net_controller
.proxy
.add_handle(80, ip_fqdn.clone(), handler.clone(), false)
.await?;
Ok(())
}
pub fn ssl_directory_for(pkg_id: &PackageId) -> PathBuf {
PathBuf::from(PACKAGE_CERT_PATH).join(pkg_id)
}
#[instrument(skip(self, interfaces, _generated_certificate))]
pub async fn add<'a, I>(
&self,
pkg_id: &PackageId,
ip: Ipv4Addr,
interfaces: I,
_generated_certificate: GeneratedCertificateMountPoint,
) -> Result<(), Error>
where
I: IntoIterator<Item = (InterfaceId, &'a Interface, TorSecretKeyV3)> + Clone,
for<'b> &'b I: IntoIterator<Item = &'b (InterfaceId, &'a Interface, TorSecretKeyV3)>,
{
let interfaces_tor = interfaces
.clone()
.into_iter()
.filter_map(|i| match i.1.tor_config.clone() {
None => None,
Some(cfg) => Some((i.0, cfg, i.2)),
})
.collect::<Vec<(InterfaceId, TorConfig, TorSecretKeyV3)>>();
let (tor_res, _, proxy_res, _) = tokio::join!(
self.tor.add(pkg_id, ip, interfaces_tor),
{
#[cfg(feature = "avahi")]
let mdns_fut = self.mdns.add(
pkg_id,
interfaces
.clone()
.into_iter()
.map(|(interface_id, _, key)| (interface_id, key)),
);
#[cfg(not(feature = "avahi"))]
let mdns_fut = futures::future::ready(());
mdns_fut
},
{
let interfaces =
interfaces
.clone()
.into_iter()
.filter_map(|(id, interface, tor_key)| {
interface.lan_config.as_ref().map(|cfg| {
(
id,
InterfaceMetadata {
fqdn: OnionAddressV3::from(&tor_key.public())
.get_address_without_dot_onion()
+ ".local",
lan_config: cfg.clone(),
protocols: interface.protocols.clone(),
},
)
})
});
self.proxy
.add_docker_service(pkg_id.clone(), ip, interfaces)
},
self.dns.add(pkg_id, ip),
);
tor_res?;
proxy_res?;
Ok(())
}
#[instrument(skip(self, interfaces))]
pub async fn remove<I: IntoIterator<Item = InterfaceId> + Clone>(
&self,
pkg_id: &PackageId,
ip: Ipv4Addr,
interfaces: I,
) -> Result<(), Error> {
let (tor_res, _, proxy_res, _) = tokio::join!(
self.tor.remove(pkg_id, interfaces.clone()),
{
#[cfg(feature = "avahi")]
let mdns_fut = self.mdns.remove(pkg_id, interfaces);
#[cfg(not(feature = "avahi"))]
let mdns_fut = futures::future::ready(());
mdns_fut
},
self.proxy.remove_docker_service(pkg_id),
self.dns.remove(pkg_id, ip),
);
tor_res?;
proxy_res?;
Ok(())
}
pub async fn generate_certificate_mountpoint<'a, I>(
&self,
pkg_id: &PackageId,
interfaces: &I,
) -> Result<GeneratedCertificateMountPoint, Error>
where
I: IntoIterator<Item = (InterfaceId, &'a Interface, TorSecretKeyV3)> + Clone,
for<'b> &'b I: IntoIterator<Item = &'b (InterfaceId, &'a Interface, TorSecretKeyV3)>,
{
tracing::info!("Generating SSL Certificate mountpoints for {}", pkg_id);
let package_path = PathBuf::from(PACKAGE_CERT_PATH).join(pkg_id);
tokio::fs::create_dir_all(&package_path).await?;
for (id, _, key) in interfaces {
let dns_base = OnionAddressV3::from(&key.public()).get_address_without_dot_onion();
let ssl_path_key = package_path.join(format!("{}.key.pem", id));
let ssl_path_cert = package_path.join(format!("{}.cert.pem", id));
let (key, chain) = self.ssl.certificate_for(&dns_base, pkg_id).await?;
tokio::try_join!(
crate::net::ssl::export_key(&key, &ssl_path_key),
crate::net::ssl::export_cert(&chain, &ssl_path_cert)
)?;
}
Ok(GeneratedCertificateMountPoint(()))
}
pub async fn export_root_ca(&self) -> Result<(PKey<Private>, X509), Error> {
self.ssl.export_root_ca().await
}
}


@@ -0,0 +1,122 @@
use std::fmt;
use std::net::IpAddr;
use std::str::FromStr;
use color_eyre::eyre::eyre;
use http::{Request, Uri};
use hyper::Body;
use crate::Error;
pub fn host_addr_fqdn(req: &Request<Body>) -> Result<ResourceFqdn, Error> {
let host = req.headers().get(http::header::HOST);
match host {
Some(host) => {
let host_str = host
.to_str()
.map_err(|e| Error::new(eyre!("{}", e), crate::ErrorKind::AsciiError))?
.to_string();
let host_uri: ResourceFqdn = host_str.parse()?;
Ok(host_uri)
}
None => Err(Error::new(eyre!("No Host"), crate::ErrorKind::NoHost)),
}
}
#[derive(Eq, PartialEq, PartialOrd, Ord, Debug, Clone)]
pub enum ResourceFqdn {
IpAddr(IpAddr),
Uri {
full_uri: String,
root: String,
tld: Tld,
},
}
impl fmt::Display for ResourceFqdn {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
ResourceFqdn::IpAddr(ip) => {
write!(f, "{}", ip)
}
ResourceFqdn::Uri {
full_uri,
root: _,
tld: _,
} => {
write!(f, "{}", full_uri)
}
}
}
}
#[derive(PartialEq, Eq, PartialOrd, Ord, Debug, Clone)]
pub enum Tld {
Local,
Onion,
Embassy,
}
impl fmt::Display for Tld {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Tld::Local => write!(f, ".local"),
Tld::Onion => write!(f, ".onion"),
Tld::Embassy => write!(f, ".embassy"),
}
}
}
impl FromStr for ResourceFqdn {
type Err = Error;
fn from_str(input: &str) -> Result<ResourceFqdn, Self::Err> {
if let Ok(ip) = input.parse::<IpAddr>() {
return Ok(ResourceFqdn::IpAddr(ip));
}
let hostname_split: Vec<&str> = input.split('.').collect();
if hostname_split.len() != 2 {
return Err(Error::new(
eyre!("invalid url tld number: add support for tldextract to parse complex urls like blah.domain.co.uk and etc?"),
crate::ErrorKind::ParseUrl,
));
}
match hostname_split[1] {
"local" => Ok(ResourceFqdn::Uri {
full_uri: input.to_owned(),
root: hostname_split[0].to_owned(),
tld: Tld::Local,
}),
"embassy" => Ok(ResourceFqdn::Uri {
full_uri: input.to_owned(),
root: hostname_split[0].to_owned(),
tld: Tld::Embassy,
}),
"onion" => Ok(ResourceFqdn::Uri {
full_uri: input.to_owned(),
root: hostname_split[0].to_owned(),
tld: Tld::Onion,
}),
_ => Err(Error::new(
eyre!("Unknown TLD for enum"),
crate::ErrorKind::ParseUrl,
)),
}
}
}
impl TryFrom<Uri> for ResourceFqdn {
type Error = Error;
fn try_from(value: Uri) -> Result<Self, Self::Error> {
Self::from_str(&value.to_string())
}
}
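The `FromStr` impl above accepts a bare IP address or a two-label hostname whose TLD is one of `local`, `embassy`, or `onion`; everything else (including multi-label names like `blah.domain.co.uk`) is rejected. A self-contained sketch mirroring that classification, with string tags standing in for the `ResourceFqdn` variants:

```rust
use std::net::IpAddr;

/// Mirror of the ResourceFqdn FromStr logic: classify an input as an
/// IP address, a known two-label FQDN, or an error.
fn classify(input: &str) -> Result<String, String> {
    if input.parse::<IpAddr>().is_ok() {
        return Ok(format!("ip:{}", input));
    }
    let parts: Vec<&str> = input.split('.').collect();
    if parts.len() != 2 {
        return Err("invalid label count".to_string());
    }
    match parts[1] {
        "local" | "embassy" | "onion" => Ok(format!("{}:{}", parts[1], parts[0])),
        _ => Err("unknown TLD".to_string()),
    }
}
```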


@@ -1,19 +0,0 @@
server {{
listen {listen_args};
listen [::]:{listen_args_ipv6};
server_name .{hostname}.local;
{ssl_certificate_line}
{ssl_certificate_key_line}
location / {{
proxy_pass http://{app_ip}:{internal_port}/;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 0;
proxy_request_buffering off;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
{proxy_redirect_directive}
}}
}}
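The doubled braces in the removed template are `format!` escapes: `{{` renders a literal `{`, while single-brace placeholders like `{app_ip}` are substituted. A tiny sketch of the same rendering pattern on a miniature server block:

```rust
/// Render a miniature server block the way the nginx template was rendered:
/// format! substitutes named placeholders and {{ }} escape literal braces.
fn render(listen_args: &str, app_ip: &str, internal_port: u16) -> String {
    format!(
        "server {{\n    listen {listen_args};\n    proxy_pass http://{app_ip}:{internal_port}/;\n}}"
    )
}
```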


@@ -1,254 +0,0 @@
use std::collections::BTreeMap;
use std::net::Ipv4Addr;
use std::path::{Path, PathBuf};
use futures::FutureExt;
use indexmap::IndexSet;
use tokio::sync::Mutex;
use tracing::instrument;
use super::interface::{InterfaceId, LanPortConfig};
use super::ssl::SslManager;
use crate::hostname::Hostname;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::Port;
use crate::util::Invoke;
use crate::{Error, ErrorKind, ResultExt};
pub struct NginxController {
pub nginx_root: PathBuf,
inner: Mutex<NginxControllerInner>,
}
impl NginxController {
pub async fn init(
nginx_root: PathBuf,
ssl_manager: &SslManager,
host_name: &Hostname,
) -> Result<Self, Error> {
Ok(NginxController {
inner: Mutex::new(
NginxControllerInner::init(&nginx_root, ssl_manager, host_name).await?,
),
nginx_root,
})
}
pub async fn add<I: IntoIterator<Item = (InterfaceId, InterfaceMetadata)>>(
&self,
ssl_manager: &SslManager,
package: PackageId,
ipv4: Ipv4Addr,
interfaces: I,
) -> Result<(), Error> {
self.inner
.lock()
.await
.add(&self.nginx_root, ssl_manager, package, ipv4, interfaces)
.await
}
pub async fn remove(&self, package: &PackageId) -> Result<(), Error> {
self.inner
.lock()
.await
.remove(&self.nginx_root, package)
.await
}
}
pub struct NginxControllerInner {
interfaces: BTreeMap<PackageId, PackageNetInfo>,
}
impl NginxControllerInner {
#[instrument]
async fn init(
nginx_root: &Path,
ssl_manager: &SslManager,
host_name: &Hostname,
) -> Result<Self, Error> {
let inner = NginxControllerInner {
interfaces: BTreeMap::new(),
};
// write main ssl key/cert to fs location
let (key, cert) = ssl_manager
.certificate_for(host_name.as_ref(), &"embassy".parse().unwrap())
.await?;
let ssl_path_key = nginx_root.join("ssl/embassy_main.key.pem");
let ssl_path_cert = nginx_root.join("ssl/embassy_main.cert.pem");
tokio::try_join!(
crate::net::ssl::export_key(&key, &ssl_path_key),
crate::net::ssl::export_cert(&cert, &ssl_path_cert),
)?;
Ok(inner)
}
#[instrument(skip(self, interfaces))]
async fn add<I: IntoIterator<Item = (InterfaceId, InterfaceMetadata)>>(
&mut self,
nginx_root: &Path,
ssl_manager: &SslManager,
package: PackageId,
ipv4: Ipv4Addr,
interfaces: I,
) -> Result<(), Error> {
let interface_map = interfaces
.into_iter()
.filter(|(_, meta)| {
// don't add nginx stuff for anything we can't connect to over some flavor of http
(meta.protocols.contains("http") || meta.protocols.contains("https"))
// also don't add nginx unless it has at least one exposed port
&& meta.lan_config.len() > 0
})
.collect::<BTreeMap<InterfaceId, InterfaceMetadata>>();
for (id, meta) in interface_map.iter() {
for (port, lan_port_config) in meta.lan_config.iter() {
// get ssl certificate chain
let (
listen_args,
ssl_certificate_line,
ssl_certificate_key_line,
proxy_redirect_directive,
) = if lan_port_config.ssl {
// these have already been written by the net controller
let package_path = nginx_root.join(format!("ssl/{}", package));
if tokio::fs::metadata(&package_path).await.is_err() {
tokio::fs::create_dir_all(&package_path)
.await
.with_ctx(|_| {
(ErrorKind::Filesystem, package_path.display().to_string())
})?;
}
let ssl_path_key = package_path.join(format!("{}.key.pem", id));
let ssl_path_cert = package_path.join(format!("{}.cert.pem", id));
let (key, chain) = ssl_manager
.certificate_for(&meta.dns_base, &package)
.await?;
tokio::try_join!(
crate::net::ssl::export_key(&key, &ssl_path_key),
crate::net::ssl::export_cert(&chain, &ssl_path_cert)
)?;
(
format!("{} ssl", port.0),
format!("ssl_certificate {};", ssl_path_cert.to_str().unwrap()),
format!("ssl_certificate_key {};", ssl_path_key.to_str().unwrap()),
format!("proxy_redirect http://$host/ https://$host/;"),
)
} else {
(
format!("{}", port.0),
String::from(""),
String::from(""),
String::from(""),
)
};
// write nginx configs
let nginx_conf_path = nginx_root.join(format!(
"sites-available/{}_{}_{}.conf",
package, id, port.0
));
tokio::fs::write(
&nginx_conf_path,
format!(
include_str!("nginx.conf.template"),
listen_args = listen_args,
listen_args_ipv6 = listen_args,
hostname = meta.dns_base,
ssl_certificate_line = ssl_certificate_line,
ssl_certificate_key_line = ssl_certificate_key_line,
app_ip = ipv4,
internal_port = lan_port_config.internal,
proxy_redirect_directive = proxy_redirect_directive,
),
)
.await
.with_ctx(|_| (ErrorKind::Filesystem, nginx_conf_path.display().to_string()))?;
let sites_enabled_link_path =
nginx_root.join(format!("sites-enabled/{}_{}_{}.conf", package, id, port.0));
if tokio::fs::metadata(&sites_enabled_link_path).await.is_ok() {
tokio::fs::remove_file(&sites_enabled_link_path).await?;
}
tokio::fs::symlink(&nginx_conf_path, &sites_enabled_link_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, nginx_conf_path.display().to_string()))?;
}
}
match self.interfaces.get_mut(&package) {
None => {
let info = PackageNetInfo {
interfaces: interface_map,
};
self.interfaces.insert(package, info);
}
Some(p) => {
p.interfaces.extend(interface_map);
}
};
self.hup().await?;
Ok(())
}
#[instrument(skip(self))]
async fn remove(&mut self, nginx_root: &Path, package: &PackageId) -> Result<(), Error> {
let removed = self.interfaces.remove(package);
if let Some(net_info) = removed {
for (id, meta) in net_info.interfaces {
for (port, _lan_port_config) in meta.lan_config.iter() {
// remove ssl certificates and nginx configs
let package_path = nginx_root.join(format!("ssl/{}", package));
let enabled_path = nginx_root
.join(format!("sites-enabled/{}_{}_{}.conf", package, id, port.0));
let available_path = nginx_root.join(format!(
"sites-available/{}_{}_{}.conf",
package, id, port.0
));
let _ = tokio::try_join!(
async {
if tokio::fs::metadata(&package_path).await.is_ok() {
tokio::fs::remove_dir_all(&package_path)
.map(|res| {
res.with_ctx(|_| {
(
ErrorKind::Filesystem,
package_path.display().to_string(),
)
})
})
.await?;
Ok(())
} else {
Ok(())
}
},
tokio::fs::remove_file(&enabled_path).map(|res| res.with_ctx(|_| (
ErrorKind::Filesystem,
enabled_path.display().to_string()
))),
tokio::fs::remove_file(&available_path).map(|res| res.with_ctx(|_| (
ErrorKind::Filesystem,
available_path.display().to_string()
))),
)?;
}
}
}
self.hup().await?;
Ok(())
}
#[instrument(skip(self))]
async fn hup(&self) -> Result<(), Error> {
let _ = tokio::process::Command::new("systemctl")
.arg("reload")
.arg("nginx")
.invoke(ErrorKind::Nginx)
.await?;
Ok(())
}
}
struct PackageNetInfo {
interfaces: BTreeMap<InterfaceId, InterfaceMetadata>,
}
pub struct InterfaceMetadata {
pub dns_base: String,
pub lan_config: BTreeMap<Port, LanPortConfig>,
pub protocols: IndexSet<String>,
}


@@ -0,0 +1,384 @@
use std::collections::BTreeMap;
use std::net::{Ipv4Addr, SocketAddr};
use std::str::FromStr;
use std::sync::Arc;
use color_eyre::eyre::eyre;
use futures::FutureExt;
use http::{Method, Request, Response};
use hyper::upgrade::Upgraded;
use hyper::{Body, Error as HyperError};
use models::{InterfaceId, PackageId};
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use tokio::net::TcpStream;
use tokio::sync::Mutex;
use tracing::{error, info, instrument};
use crate::net::net_utils::{host_addr_fqdn, ResourceFqdn};
use crate::net::ssl::SslManager;
use crate::net::vhost_controller::VHOSTController;
use crate::net::{HttpClient, HttpHandler, InterfaceMetadata, PackageNetInfo};
use crate::{Error, ResultExt};
pub struct ProxyController {
inner: Mutex<ProxyControllerInner>,
}
impl ProxyController {
pub async fn init(
embassyd_socket_addr: SocketAddr,
embassy_fqdn: ResourceFqdn,
ssl_manager: SslManager,
) -> Result<Self, Error> {
Ok(ProxyController {
inner: Mutex::new(
ProxyControllerInner::init(embassyd_socket_addr, embassy_fqdn, ssl_manager).await?,
),
})
}
pub async fn add_docker_service<I: IntoIterator<Item = (InterfaceId, InterfaceMetadata)>>(
&self,
package: PackageId,
ipv4: Ipv4Addr,
interfaces: I,
) -> Result<(), Error> {
self.inner
.lock()
.await
.add_docker_service(package, ipv4, interfaces)
.await
}
pub async fn remove_docker_service(&self, package: &PackageId) -> Result<(), Error> {
self.inner.lock().await.remove_docker_service(package).await
}
pub async fn add_certificate_to_resolver(
&self,
fqdn: ResourceFqdn,
cert_data: (PKey<Private>, Vec<X509>),
) -> Result<(), Error> {
self.inner
.lock()
.await
.add_certificate_to_resolver(fqdn, cert_data)
.await
}
pub async fn add_handle(
&self,
ext_port: u16,
fqdn: ResourceFqdn,
handler: HttpHandler,
is_ssl: bool,
) -> Result<(), Error> {
self.inner
.lock()
.await
.add_handle(ext_port, fqdn, handler, is_ssl)
.await
}
pub async fn get_hostname(&self) -> String {
self.inner.lock().await.get_embassy_hostname()
}
pub async fn proxy(
client: HttpClient,
req: Request<Body>,
) -> Result<Response<Body>, HyperError> {
if Method::CONNECT == req.method() {
// Received an HTTP request like:
// ```
// CONNECT www.domain.com:443 HTTP/1.1
// Host: www.domain.com:443
// Proxy-Connection: Keep-Alive
// ```
//
// When HTTP method is CONNECT we should return an empty body
// then we can eventually upgrade the connection and talk a new protocol.
//
// Note: only after the client receives an empty body with STATUS_OK can the
// connection be upgraded, so we can't return a response inside
// the `on_upgrade` future.
match host_addr_fqdn(&req) {
Ok(host) => {
tokio::task::spawn(async move {
match hyper::upgrade::on(req).await {
Ok(upgraded) => match host {
ResourceFqdn::IpAddr(ip) => {
if let Err(e) = Self::tunnel(upgraded, ip.to_string()).await {
error!("server io error: {}", e);
};
}
ResourceFqdn::Uri {
full_uri,
root: _,
tld: _,
} => {
if let Err(e) =
Self::tunnel(upgraded, full_uri.to_string()).await
{
error!("server io error: {}", e);
};
}
},
Err(e) => error!("upgrade error: {}", e),
}
});
Ok(Response::new(Body::empty()))
}
Err(e) => {
let err_txt = format!("CONNECT host is not socket addr: {:?}", &req.uri());
let mut resp = Response::new(Body::from(format!(
"CONNECT must be to a socket address: {}: {}",
err_txt, e
)));
*resp.status_mut() = http::StatusCode::BAD_REQUEST;
Ok(resp)
}
}
} else {
client.request(req).await
}
}
// Create a TCP connection to host:port, build a tunnel between the connection and
// the upgraded connection
async fn tunnel(mut upgraded: Upgraded, addr: String) -> std::io::Result<()> {
let mut server = TcpStream::connect(addr).await?;
let (from_client, from_server) =
tokio::io::copy_bidirectional(&mut upgraded, &mut server).await?;
info!(
"client wrote {} bytes and received {} bytes",
from_client, from_server
);
Ok(())
}
}
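`tunnel` simply splices bytes in both directions with `tokio::io::copy_bidirectional` and reports the byte counts. A blocking, one-direction analogue over `std::io` (the `pump` helper here is hypothetical, shown only to illustrate the byte-count accounting that `copy_bidirectional` does concurrently for both sides):

```rust
use std::io::{self, Read, Write};

/// Copy everything from `from` into `to`, returning the byte count --
/// one half of what copy_bidirectional does concurrently for both directions.
fn pump<R: Read, W: Write>(from: &mut R, to: &mut W) -> io::Result<u64> {
    io::copy(from, to)
}
```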
struct ProxyControllerInner {
ssl_manager: SslManager,
vhosts: VHOSTController,
embassyd_fqdn: ResourceFqdn,
docker_interfaces: BTreeMap<PackageId, PackageNetInfo>,
docker_iface_lookups: BTreeMap<(PackageId, InterfaceId), ResourceFqdn>,
}
impl ProxyControllerInner {
#[instrument]
async fn init(
embassyd_socket_addr: SocketAddr,
embassyd_fqdn: ResourceFqdn,
ssl_manager: SslManager,
) -> Result<Self, Error> {
let inner = ProxyControllerInner {
vhosts: VHOSTController::init(embassyd_socket_addr),
ssl_manager,
embassyd_fqdn,
docker_interfaces: BTreeMap::new(),
docker_iface_lookups: BTreeMap::new(),
};
Ok(inner)
}
async fn add_certificate_to_resolver(
&mut self,
hostname: ResourceFqdn,
cert_data: (PKey<Private>, Vec<X509>),
) -> Result<(), Error> {
self.vhosts
.cert_resolver
.add_certificate_to_resolver(hostname, cert_data)
.await
.map_err(|err| {
Error::new(
eyre!("Unable to add ssl cert to the resolver: {}", err),
crate::ErrorKind::Network,
)
})?;
Ok(())
}
async fn add_package_certificate_to_resolver(
&mut self,
resource_fqdn: ResourceFqdn,
pkg_id: PackageId,
) -> Result<(), Error> {
let package_cert = match resource_fqdn.clone() {
ResourceFqdn::IpAddr(ip) => {
self.ssl_manager
.certificate_for(&ip.to_string(), &pkg_id)
.await?
}
ResourceFqdn::Uri {
full_uri: _,
root,
tld: _,
} => self.ssl_manager.certificate_for(&root, &pkg_id).await?,
};
self.vhosts
.cert_resolver
.add_certificate_to_resolver(resource_fqdn, package_cert)
.await
.map_err(|err| {
Error::new(
eyre!("Unable to add ssl cert to the resolver: {}", err),
crate::ErrorKind::Network,
)
})?;
Ok(())
}
pub async fn add_handle(
&mut self,
external_svc_port: u16,
fqdn: ResourceFqdn,
svc_handler: HttpHandler,
is_ssl: bool,
) -> Result<(), Error> {
self.vhosts
.add_server_or_handle(external_svc_port, fqdn, svc_handler, is_ssl)
.await
}
#[instrument(skip(self, interfaces))]
pub async fn add_docker_service<I: IntoIterator<Item = (InterfaceId, InterfaceMetadata)>>(
&mut self,
package: PackageId,
docker_ipv4: Ipv4Addr,
interfaces: I,
) -> Result<(), Error> {
let mut interface_map = interfaces
.into_iter()
.filter(|(_, meta)| {
// don't add stuff for anything we can't connect to over some flavor of http
(meta.protocols.contains("http") || meta.protocols.contains("https"))
// also don't add anything unless it has at least one exposed port
&& !meta.lan_config.is_empty()
})
.collect::<BTreeMap<InterfaceId, InterfaceMetadata>>();
for (id, meta) in interface_map.iter() {
for (external_svc_port, lan_port_config) in meta.lan_config.iter() {
let full_fqdn = ResourceFqdn::from_str(&meta.fqdn)?;
self.docker_iface_lookups
.insert((package.clone(), id.clone()), full_fqdn.clone());
self.add_package_certificate_to_resolver(full_fqdn.clone(), package.clone())
.await?;
let svc_handler =
Self::create_docker_handle(docker_ipv4.to_string(), lan_port_config.internal)
.await;
self.add_handle(
external_svc_port.0,
full_fqdn.clone(),
svc_handler,
lan_port_config.ssl,
)
.await?;
}
}
let docker_interface = self.docker_interfaces.entry(package.clone()).or_default();
docker_interface.interfaces.append(&mut interface_map);
Ok(())
}
async fn create_docker_handle(internal_ip: String, port: u16) -> HttpHandler {
let svc_handler: HttpHandler = Arc::new(move |mut req| {
let proxy_addr = internal_ip.clone();
async move {
let client = HttpClient::new();
let uri_string = format!(
"http://{}:{}{}",
proxy_addr,
port,
req.uri()
.path_and_query()
.map(|x| x.as_str())
.unwrap_or("/")
);
let uri = uri_string.parse().unwrap();
*req.uri_mut() = uri;
ProxyController::proxy(client, req).await
}
.boxed()
});
svc_handler
}
#[instrument(skip(self))]
pub async fn remove_docker_service(&mut self, package: &PackageId) -> Result<(), Error> {
let mut server_removals: Vec<(u16, InterfaceId)> = Default::default();
let net_info = match self.docker_interfaces.get(package) {
Some(a) => a,
None => return Ok(()),
};
for (id, meta) in &net_info.interfaces {
for (service_ext_port, _lan_port_config) in meta.lan_config.iter() {
if let Some(server) = self.vhosts.service_servers.get_mut(&service_ext_port.0) {
if let Some(fqdn) = self
.docker_iface_lookups
.get(&(package.clone(), id.clone()))
{
server.remove_svc_handler_mapping(fqdn.to_owned()).await?;
self.vhosts
.cert_resolver
.remove_cert(fqdn.to_owned())
.await?;
let mapping = server.svc_mapping.read().await;
if mapping.is_empty() {
server_removals.push((service_ext_port.0, id.to_owned()));
}
}
}
}
}
for (port, interface_id) in server_removals {
if let Some(removed_server) = self.vhosts.service_servers.remove(&port) {
removed_server.shutdown.send(()).map_err(|_| {
Error::new(
eyre!("Hyper server did not quit properly"),
crate::ErrorKind::JoinError,
)
})?;
removed_server
.handle
.await
.with_kind(crate::ErrorKind::JoinError)?;
self.docker_interfaces.remove(package);
self.docker_iface_lookups
.remove(&(package.clone(), interface_id));
}
}
Ok(())
}
pub fn get_embassy_hostname(&self) -> String {
self.embassyd_fqdn.to_string()
}
}

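The proxy handler built by `create_docker_handle` keeps the client's path and query but swaps in the container's internal address. A minimal sketch of that rewrite, with `rewrite_uri` as a hypothetical helper name (the real handler builds the string inline and parses it into an `http::Uri`):

```rust
// Rebuild the target URI from the container's internal address plus the
// original request's path-and-query. Falls back to "/" when the request
// carries no path, mirroring the `unwrap_or("/")` in the handler above.
fn rewrite_uri(internal_ip: &str, port: u16, path_and_query: Option<&str>) -> String {
    format!("http://{}:{}{}", internal_ip, port, path_and_query.unwrap_or("/"))
}
```

The handler then replaces the request's URI with the parsed result and forwards it through `ProxyController::proxy`.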

@@ -24,7 +24,7 @@ use crate::{Error, ErrorKind, ResultExt};
static CERTIFICATE_VERSION: i32 = 2; // X509 version 3 is actually encoded as '2' in the cert because fuck you.
pub const ROOT_CA_STATIC_PATH: &str = "/var/lib/embassy/ssl/root-ca.crt";
#[derive(Debug, Clone)]
pub struct SslManager {
store: SslStore,
root_cert: X509,
@@ -32,7 +32,7 @@ pub struct SslManager {
int_cert: X509,
}
#[derive(Debug, Clone)]
struct SslStore {
secret_store: PgPool,
}
@@ -177,7 +177,7 @@ impl SslManager {
}
Some((key, cert)) => Ok((key, cert)),
}?;
// generate static file for download, this will get blown up on embassy restart so it's good to write it on
// every ssl manager init
tokio::fs::create_dir_all(
Path::new(ROOT_CA_STATIC_PATH)
@@ -513,57 +513,3 @@ fn make_leaf_cert(
let cert = builder.build();
Ok(cert)
}
// #[tokio::test]
// async fn ca_details_persist() -> Result<(), Error> {
// let pool = sqlx::Pool::<sqlx::Postgres>::connect("postgres::memory:").await?;
// sqlx::migrate!()
// .run(&pool)
// .await
// .with_kind(crate::ErrorKind::Database)?;
// let mgr = SslManager::init(pool.clone()).await?;
// let root_cert0 = mgr.root_cert;
// let int_key0 = mgr.int_key;
// let int_cert0 = mgr.int_cert;
// let mgr = SslManager::init(pool).await?;
// let root_cert1 = mgr.root_cert;
// let int_key1 = mgr.int_key;
// let int_cert1 = mgr.int_cert;
//
// assert_eq!(root_cert0.to_pem()?, root_cert1.to_pem()?);
// assert_eq!(
// int_key0.private_key_to_pem_pkcs8()?,
// int_key1.private_key_to_pem_pkcs8()?
// );
// assert_eq!(int_cert0.to_pem()?, int_cert1.to_pem()?);
// Ok(())
// }
//
// #[tokio::test]
// async fn certificate_details_persist() -> Result<(), Error> {
// let pool = sqlx::Pool::<sqlx::Postgres>::connect("postgres::memory:").await?;
// sqlx::migrate!()
// .run(&pool)
// .await
// .with_kind(crate::ErrorKind::Database)?;
// let mgr = SslManager::init(pool.clone()).await?;
// let package_id = "bitcoind".parse().unwrap();
// let (key0, cert_chain0) = mgr.certificate_for("start9", &package_id).await?;
// let (key1, cert_chain1) = mgr.certificate_for("start9", &package_id).await?;
//
// assert_eq!(
// key0.private_key_to_pem_pkcs8()?,
// key1.private_key_to_pem_pkcs8()?
// );
// assert_eq!(
// cert_chain0
// .iter()
// .map(|cert| cert.to_pem().unwrap())
// .collect::<Vec<Vec<u8>>>(),
// cert_chain1
// .iter()
// .map(|cert| cert.to_pem().unwrap())
// .collect::<Vec<Vec<u8>>>()
// );
// Ok(())
// }


@@ -0,0 +1,458 @@
use std::fs::Metadata;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::UNIX_EPOCH;
use color_eyre::eyre::eyre;
use digest::Digest;
use futures::FutureExt;
use http::response::Builder;
use hyper::{Body, Method, Request, Response, StatusCode};
use rpc_toolkit::rpc_handler;
use tokio::fs::File;
use tokio_util::codec::{BytesCodec, FramedRead};
use crate::context::{DiagnosticContext, InstallContext, RpcContext, SetupContext};
use crate::core::rpc_continuations::RequestGuid;
use crate::db::subscribe;
use crate::install::PKG_PUBLIC_DIR;
use crate::middleware::auth::HasValidSession;
use crate::net::HttpHandler;
use crate::{diagnostic_api, install_api, main_api, setup_api, Error, ErrorKind, ResultExt};
static NOT_FOUND: &[u8] = b"Not Found";
static NOT_AUTHORIZED: &[u8] = b"Not Authorized";
pub const MAIN_UI_WWW_DIR: &str = "/var/www/html/main";
pub const SETUP_UI_WWW_DIR: &str = "/var/www/html/setup";
pub const DIAG_UI_WWW_DIR: &str = "/var/www/html/diagnostic";
pub const INSTALL_UI_WWW_DIR: &str = "/var/www/html/install";
fn status_fn(_: i32) -> StatusCode {
StatusCode::OK
}
#[derive(Clone)]
pub enum UiMode {
Setup,
Diag,
Install,
Main,
}
pub async fn setup_ui_file_router(ctx: SetupContext) -> Result<HttpHandler, Error> {
let handler: HttpHandler = Arc::new(move |req| {
let ctx = ctx.clone();
let ui_mode = UiMode::Setup;
async move {
let res = match req.uri().path() {
path if path.starts_with("/rpc/") => {
let rpc_handler =
rpc_handler!({command: setup_api, context: ctx, status: status_fn});
rpc_handler(req)
.await
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::Network))
}
_ => alt_ui(req, ui_mode).await,
};
match res {
Ok(data) => Ok(data),
Err(err) => Ok(server_error(err)),
}
}
.boxed()
});
Ok(handler)
}
pub async fn diag_ui_file_router(ctx: DiagnosticContext) -> Result<HttpHandler, Error> {
let handler: HttpHandler = Arc::new(move |req| {
let ctx = ctx.clone();
let ui_mode = UiMode::Diag;
async move {
let res = match req.uri().path() {
path if path.starts_with("/rpc/") => {
let rpc_handler =
rpc_handler!({command: diagnostic_api, context: ctx, status: status_fn});
rpc_handler(req)
.await
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::Network))
}
_ => alt_ui(req, ui_mode).await,
};
match res {
Ok(data) => Ok(data),
Err(err) => Ok(server_error(err)),
}
}
.boxed()
});
Ok(handler)
}
pub async fn install_ui_file_router(ctx: InstallContext) -> Result<HttpHandler, Error> {
let handler: HttpHandler = Arc::new(move |req| {
let ctx = ctx.clone();
let ui_mode = UiMode::Install;
async move {
let res = match req.uri().path() {
path if path.starts_with("/rpc/") => {
let rpc_handler =
rpc_handler!({command: install_api, context: ctx, status: status_fn});
rpc_handler(req)
.await
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::Network))
}
_ => alt_ui(req, ui_mode).await,
};
match res {
Ok(data) => Ok(data),
Err(err) => Ok(server_error(err)),
}
}
.boxed()
});
Ok(handler)
}
pub async fn main_ui_server_router(ctx: RpcContext) -> Result<HttpHandler, Error> {
let handler: HttpHandler = Arc::new(move |req| {
let ctx = ctx.clone();
async move {
let res = match req.uri().path() {
path if path.starts_with("/rpc/") => {
let rpc_handler =
rpc_handler!({command: main_api, context: ctx, status: status_fn});
rpc_handler(req)
.await
.map_err(|err| Error::new(eyre!("{}", err), crate::ErrorKind::Network))
}
"/ws/db" => subscribe(ctx, req).await,
path if path.starts_with("/ws/rpc/") => {
match RequestGuid::from(path.strip_prefix("/ws/rpc/").unwrap()) {
None => {
tracing::debug!("No Guid Path");
Ok::<_, Error>(bad_request())
}
Some(guid) => match ctx.get_ws_continuation_handler(&guid).await {
Some(cont) => match cont(req).await {
Ok::<_, Error>(r) => Ok::<_, Error>(r),
Err(err) => Ok::<_, Error>(server_error(err)),
},
_ => Ok::<_, Error>(not_found()),
},
}
}
path if path.starts_with("/rest/rpc/") => {
match RequestGuid::from(path.strip_prefix("/rest/rpc/").unwrap()) {
None => {
tracing::debug!("No Guid Path");
Ok::<_, Error>(bad_request())
}
Some(guid) => match ctx.get_rest_continuation_handler(&guid).await {
None => Ok::<_, Error>(not_found()),
Some(cont) => match cont(req).await {
Ok::<_, Error>(r) => Ok::<_, Error>(r),
Err(e) => Ok::<_, Error>(server_error(e)),
},
},
}
}
_ => main_embassy_ui(req, ctx).await,
};
match res {
Ok(data) => Ok(data),
Err(err) => Ok(server_error(err)),
}
}
.boxed()
});
Ok(handler)
}
async fn alt_ui(req: Request<Body>, ui_mode: UiMode) -> Result<Response<Body>, Error> {
let selected_root_dir = match ui_mode {
UiMode::Setup => SETUP_UI_WWW_DIR,
UiMode::Diag => DIAG_UI_WWW_DIR,
UiMode::Install => INSTALL_UI_WWW_DIR,
UiMode::Main => MAIN_UI_WWW_DIR,
};
let (request_parts, _body) = req.into_parts();
match request_parts.uri.path() {
"/" => {
let full_path = PathBuf::from(selected_root_dir).join("index.html");
file_send(full_path).await
}
_ => {
match (
request_parts.method,
request_parts
.uri
.path()
.strip_prefix('/')
.unwrap_or(request_parts.uri.path())
.split_once('/'),
) {
(Method::GET, None) => {
let uri_path = request_parts
.uri
.path()
.strip_prefix('/')
.unwrap_or(request_parts.uri.path());
let full_path = PathBuf::from(selected_root_dir).join(uri_path);
file_send(full_path).await
}
(Method::GET, Some((dir, file))) => {
let full_path = PathBuf::from(selected_root_dir).join(dir).join(file);
file_send(full_path).await
}
_ => Ok(not_found()),
}
}
}
}
async fn main_embassy_ui(req: Request<Body>, ctx: RpcContext) -> Result<Response<Body>, Error> {
let selected_root_dir = MAIN_UI_WWW_DIR;
let (request_parts, _body) = req.into_parts();
match request_parts.uri.path() {
"/" => {
let full_path = PathBuf::from(selected_root_dir).join("index.html");
file_send(full_path).await
}
_ => {
let valid_session = HasValidSession::from_request_parts(&request_parts, &ctx).await;
match valid_session {
Ok(_valid) => {
match (
request_parts.method,
request_parts
.uri
.path()
.strip_prefix('/')
.unwrap_or(request_parts.uri.path())
.split_once('/'),
) {
(Method::GET, Some(("public", path))) => {
let sub_path = Path::new(path);
if let Ok(rest) = sub_path.strip_prefix("package-data") {
file_send(ctx.datadir.join(PKG_PUBLIC_DIR).join(rest)).await
} else if let Ok(rest) = sub_path.strip_prefix("eos") {
match rest.to_str() {
Some("local.crt") => {
file_send(crate::net::ssl::ROOT_CA_STATIC_PATH).await
}
None => Ok(bad_request()),
_ => Ok(not_found()),
}
} else {
Ok(not_found())
}
}
(Method::GET, Some(("eos", "local.crt"))) => {
file_send(PathBuf::from(crate::net::ssl::ROOT_CA_STATIC_PATH)).await
}
(Method::GET, None) => {
let uri_path = request_parts
.uri
.path()
.strip_prefix('/')
.unwrap_or(request_parts.uri.path());
let full_path = PathBuf::from(selected_root_dir).join(uri_path);
file_send(full_path).await
}
(Method::GET, Some((dir, file))) => {
let full_path = PathBuf::from(selected_root_dir).join(dir).join(file);
file_send(full_path).await
}
_ => Ok(not_found()),
}
}
Err(err) => {
match (
request_parts.method,
request_parts
.uri
.path()
.strip_prefix('/')
.unwrap_or(request_parts.uri.path())
.split_once('/'),
) {
(Method::GET, Some(("public", _path))) => {
un_authorized(err, request_parts.uri.path())
}
(Method::GET, Some(("eos", "local.crt"))) => {
un_authorized(err, request_parts.uri.path())
}
(Method::GET, None) => {
let uri_path = request_parts
.uri
.path()
.strip_prefix('/')
.unwrap_or(request_parts.uri.path());
let full_path = PathBuf::from(selected_root_dir).join(uri_path);
file_send(full_path).await
}
(Method::GET, Some((dir, file))) => {
let full_path = PathBuf::from(selected_root_dir).join(dir).join(file);
file_send(full_path).await
}
_ => Ok(not_found()),
}
}
}
}
}
}
fn un_authorized(err: Error, path: &str) -> Result<Response<Body>, Error> {
tracing::warn!("unauthorized @ {}: {}", path, err);
tracing::debug!("{:?}", err);
Ok(Response::builder()
.status(StatusCode::UNAUTHORIZED)
.body(NOT_AUTHORIZED.into())
.unwrap())
}
/// HTTP status code 404
fn not_found() -> Response<Body> {
Response::builder()
.status(StatusCode::NOT_FOUND)
.body(NOT_FOUND.into())
.unwrap()
}
fn server_error(err: Error) -> Response<Body> {
Response::builder()
.status(StatusCode::INTERNAL_SERVER_ERROR)
.body(err.to_string().into())
.unwrap()
}
fn bad_request() -> Response<Body> {
Response::builder()
.status(StatusCode::BAD_REQUEST)
.body(Body::empty())
.unwrap()
}
async fn file_send(path: impl AsRef<Path>) -> Result<Response<Body>, Error> {
// Serve a file by asynchronously reading it by chunks using tokio-util crate.
let path = path.as_ref();
if let Ok(file) = File::open(path).await {
let metadata = file.metadata().await.with_kind(ErrorKind::Filesystem)?;
match IsNonEmptyFile::new(&metadata, path) {
Some(a) => a,
None => return Ok(not_found()),
};
let mut builder = Response::builder().status(StatusCode::OK);
builder = with_e_tag(path, &metadata, builder)?;
builder = with_content_type(path, builder);
builder = with_content_length(&metadata, builder);
let stream = FramedRead::new(file, BytesCodec::new());
let body = Body::wrap_stream(stream);
return builder.body(body).with_kind(ErrorKind::Network);
}
tracing::debug!("File not found: {:?}", path);
Ok(not_found())
}
struct IsNonEmptyFile(());
impl IsNonEmptyFile {
fn new(metadata: &Metadata, path: &Path) -> Option<Self> {
let length = metadata.len();
if !metadata.is_file() || length == 0 {
tracing::debug!("File is empty: {:?}", path);
return None;
}
Some(Self(()))
}
}
fn with_e_tag(path: &Path, metadata: &Metadata, builder: Builder) -> Result<Builder, Error> {
let modified = metadata.modified().with_kind(ErrorKind::Filesystem)?;
let mut hasher = sha2::Sha256::new();
hasher.update(format!("{:?}", path).as_bytes());
hasher.update(
format!(
"{}",
modified
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs()
)
.as_bytes(),
);
let res = hasher.finalize();
Ok(builder.header(
"ETag",
base32::encode(base32::Alphabet::RFC4648 { padding: false }, res.as_slice()).to_lowercase(),
))
}
/// https://en.wikipedia.org/wiki/Media_type
fn with_content_type(path: &Path, builder: Builder) -> Builder {
let content_type = match path.extension() {
Some(os_str) => match os_str.to_str() {
Some("apng") => "image/apng",
Some("avif") => "image/avif",
Some("flif") => "image/flif",
Some("gif") => "image/gif",
Some("jpg") | Some("jpeg") | Some("jfif") | Some("pjpeg") | Some("pjp") => "image/jpeg",
Some("jxl") => "image/jxl",
Some("png") => "image/png",
Some("svg") => "image/svg+xml",
Some("webp") => "image/webp",
Some("mng") | Some("x-mng") => "image/x-mng",
Some("css") => "text/css",
Some("csv") => "text/csv",
Some("html") => "text/html",
Some("php") => "text/php",
Some("plain") | Some("md") | Some("txt") => "text/plain",
Some("xml") => "text/xml",
Some("js") => "text/javascript",
Some("wasm") => "application/wasm",
None | Some(_) => "text/plain",
},
None => "text/plain",
};
builder.header("Content-Type", content_type)
}
fn with_content_length(metadata: &Metadata, builder: Builder) -> Builder {
builder.header(http::header::CONTENT_LENGTH, metadata.len())
}

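The extension-to-MIME mapping in `with_content_type` reduces to a plain function over the extension string. A trimmed sketch (the real table above covers more image formats, and unknown or missing extensions default to `text/plain`):

```rust
// Map a file extension to the Content-Type header value; a subset of the
// table in `with_content_type`, with the same text/plain fallback.
fn content_type_for(ext: Option<&str>) -> &'static str {
    match ext {
        Some("html") => "text/html",
        Some("css") => "text/css",
        Some("js") => "text/javascript",
        Some("wasm") => "application/wasm",
        Some("png") => "image/png",
        Some("svg") => "image/svg+xml",
        // unknown or missing extensions default to text/plain
        _ => "text/plain",
    }
}
```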

@@ -0,0 +1,82 @@
use std::collections::BTreeMap;
use std::net::SocketAddr;
use std::sync::Arc;
use tokio_rustls::rustls::ServerConfig;
use crate::net::cert_resolver::EmbassyCertResolver;
use crate::net::embassy_service_http_server::EmbassyServiceHTTPServer;
use crate::net::HttpHandler;
use crate::Error;
use crate::net::net_utils::ResourceFqdn;
pub struct VHOSTController {
pub service_servers: BTreeMap<u16, EmbassyServiceHTTPServer>,
pub cert_resolver: EmbassyCertResolver,
embassyd_addr: SocketAddr,
}
impl VHOSTController {
pub fn init(embassyd_addr: SocketAddr) -> Self {
Self {
embassyd_addr,
service_servers: BTreeMap::new(),
cert_resolver: EmbassyCertResolver::new(),
}
}
pub fn build_ssl_svr_cfg(&self) -> Result<Arc<ServerConfig>, Error> {
let ssl_cfg = ServerConfig::builder()
.with_safe_default_cipher_suites()
.with_safe_default_kx_groups()
.with_safe_default_protocol_versions()
.unwrap()
.with_no_client_auth()
.with_cert_resolver(Arc::new(self.cert_resolver.clone()));
Ok(Arc::new(ssl_cfg))
}
pub async fn add_server_or_handle(
&mut self,
external_svc_port: u16,
fqdn: ResourceFqdn,
svc_handler: HttpHandler,
is_ssl: bool,
) -> Result<(), Error> {
if let Some(server) = self.service_servers.get_mut(&external_svc_port) {
server.add_svc_handler_mapping(fqdn, svc_handler).await?;
} else {
self.add_server(is_ssl, external_svc_port, fqdn, svc_handler)
.await?;
}
Ok(())
}
async fn add_server(
&mut self,
is_ssl: bool,
external_svc_port: u16,
fqdn: ResourceFqdn,
svc_handler: HttpHandler,
) -> Result<(), Error> {
let ssl_cfg = if is_ssl {
Some(self.build_ssl_svr_cfg()?)
} else {
None
};
let mut new_service_server =
EmbassyServiceHTTPServer::new(self.embassyd_addr.ip(), external_svc_port, ssl_cfg)
.await?;
new_service_server
.add_svc_handler_mapping(fqdn.clone(), svc_handler)
.await?;
self.service_servers
.insert(external_svc_port, new_service_server);
Ok(())
}
}

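`add_server_or_handle` implements a reuse-or-create policy: one listener per external port, each multiplexing handlers by FQDN. The shape of that policy, with plain `String` stand-ins for `EmbassyServiceHTTPServer` and `HttpHandler`:

```rust
use std::collections::BTreeMap;

// One virtual-host map per external port; reuse the map if the port is
// already bound, otherwise create it, then register the FQDN -> handler entry.
fn add_server_or_handle(
    servers: &mut BTreeMap<u16, BTreeMap<String, String>>,
    port: u16,
    fqdn: &str,
    handler: &str,
) {
    servers
        .entry(port)
        .or_insert_with(BTreeMap::new)
        .insert(fqdn.to_string(), handler.to_string());
}
```

Two services sharing port 443 end up as two FQDN entries under one server, which is why `remove_docker_service` only tears the listener down once its `svc_mapping` is empty.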

@@ -0,0 +1,94 @@
use hyper::{Body, Response, Server, StatusCode};

use crate::context::RpcContext;
use crate::core::rpc_continuations::RequestGuid;
use crate::db::subscribe;
pub async fn ws_server_handle(rpc_ctx: RpcContext) {
let ws_ctx = rpc_ctx.clone();
let ws_server_handle = {
let builder = Server::bind(&ws_ctx.bind_ws);
let make_svc = ::rpc_toolkit::hyper::service::make_service_fn(move |_| {
let ctx = ws_ctx.clone();
async move {
Ok::<_, ::rpc_toolkit::hyper::Error>(::rpc_toolkit::hyper::service::service_fn(
move |req| {
let ctx = ctx.clone();
async move {
tracing::debug!("Request to {}", req.uri().path());
match req.uri().path() {
"/ws/db" => {
Ok(subscribe(ctx, req).await.unwrap_or_else(err_to_500))
}
path if path.starts_with("/ws/rpc/") => {
match RequestGuid::from(
path.strip_prefix("/ws/rpc/").unwrap(),
) {
None => {
tracing::debug!("No Guid Path");
Response::builder()
.status(StatusCode::BAD_REQUEST)
.body(Body::empty())
}
Some(guid) => {
match ctx.get_ws_continuation_handler(&guid).await {
Some(cont) => match cont(req).await {
Ok(r) => Ok(r),
Err(e) => Response::builder()
.status(
StatusCode::INTERNAL_SERVER_ERROR,
)
.body(Body::from(format!("{}", e))),
},
_ => Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::empty()),
}
}
}
}
path if path.starts_with("/rest/rpc/") => {
match RequestGuid::from(
path.strip_prefix("/rest/rpc/").unwrap(),
) {
None => {
tracing::debug!("No Guid Path");
Response::builder()
.status(StatusCode::BAD_REQUEST)
.body(Body::empty())
}
Some(guid) => {
match ctx.get_rest_continuation_handler(&guid).await
{
None => Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::empty()),
Some(cont) => match cont(req).await {
Ok(r) => Ok(r),
Err(e) => Response::builder()
.status(
StatusCode::INTERNAL_SERVER_ERROR,
)
.body(Body::from(format!("{}", e))),
},
}
}
}
}
_ => Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::empty()),
}
}
},
))
}
});
builder.serve(make_svc)
}
.with_graceful_shutdown({
let mut shutdown = rpc_ctx.shutdown.subscribe();
async move {
shutdown.recv().await.expect("context dropped");
}
});
}
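Both this hyper router and `main_ui_server_router` share one routing convention: strip a fixed prefix, then treat the remainder as a continuation GUID. A sketch of the prefix step, with `route_ws_path` as a hypothetical helper (the real code then parses a `RequestGuid` and looks the continuation up on the context):

```rust
// Extract the GUID portion of a websocket continuation path; an empty
// remainder counts as no match, mirroring the bad-request branch above.
fn route_ws_path(path: &str) -> Option<&str> {
    path.strip_prefix("/ws/rpc/").filter(|guid| !guid.is_empty())
}
```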