Feature/lxc container runtime (#2514)

* wip: static-server errors

* wip: fix wifi

* wip: Fix the service_effects

* wip: Fix cors in the middleware

* wip(chore): Auth clean up the lint.

* wip(fix): Vhost

* wip: continue manager refactor

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* wip: service manager refactor

* wip: Some fixes

* wip(fix): Fix the lib.rs

* wip

* wip(fix): Logs

* wip: bins

* wip(inspect): Add in the inspect

* wip: config

* wip(fix): Diagnostic

* wip(fix): Dependencies

* wip: context

* wip(fix) Sorta auth

* wip: warnings

* wip(fix): registry/admin

* wip(fix) marketplace

* wip(fix) Some more converted and fixed with the linter and config

* wip: Working on the static server

* wip(fix): static server

* wip: Remove some async

* wip: Something about the request and regular rpc

* wip: gut install

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* wip: Convert the static server into the new system

* wip delete file

* test

* wip(fix) vhost does not need the with safe defaults

* wip: Adding in the wifi

* wip: Fix the developer and the verify

* wip: new install flow

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* fix middleware

* wip

* wip: Fix the auth

* wip

* continue service refactor

* feature: Service get_config

* feat: Action

* wip: Fighting the great fight against the borrow checker

* wip: Remove an error in a file that I just need to deal with later

* chore: Add in some more lifetime stuff to the services

* wip: Install fix on lifetime

* cleanup

* wip: Deal with the borrow later

* more cleanup

* resolve borrowchecker errors

* wip(feat): add in the handler for the socket, for now

* wip(feat): Update the service_effect_handler::action

* chore: Add in the changes to make sure the from_service goes to context

* chore: Change the

* refactor service map

* fix references to service map

* fill out restore

* wip: Before I work on the store stuff

* fix backup module

* handle some warnings

* feat: add in the ui components on the rust side

* feature: Update the procedures

* chore: Update the js side of the main and a few of the others

* chore: Update the rpc listener to match the persistent container

* wip: Working on updating some things to have a better name

* wip(feat): Try and get the rpc to return the correct shape?

* lxc wip

* wip(feat): Try and get the rpc to return the correct shape?

* build for container runtime wip

* remove container-init

* fix build

* fix error

* chore: Update to work I suppose

* lxc wip

* remove docker module and feature

* download alpine squashfs automatically

* overlays effect

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* chore: Add the overlay effect

* feat: Add the mounter in the main

* chore: Convert to use the mounts, still need to work with the sandbox

* install fixes

* fix ssl

* fixes from testing

* implement tmpfile for upload

* wip

* misc fixes

* cleanup

* cleanup

* better progress reporting

* progress for sideload

* return real guid

* add devmode script

* fix lxc rootfs path

* fix percentage bar

* fix progress bar styling

* fix build for unstable

* tweaks

* label progress

* tweaks

* update progress more often

* make symlink in rpc_client

* make socket dir

* fix parent path

* add start-cli to container

* add echo and gitInfo commands

* wip: Add the init + errors

* chore: Add in the exit effect for the system

* chore: Change the type to null for failure to parse

* move sigterm timeout to stopping status

* update order

* chore: Update the return type

* remove dbg

* change the map error

* chore: Update the thing to capture id

* chore: Add some lifetime changes

* chore: Update the logging

* chore: Update the package to run module

* use From for RpcError

* chore: Update to use import instead

* chore: update

* chore: Use require for the backup

* fix a default

* update the type that is wrong

* chore: Update the type of the manifest

* chore: Update to make null

* only symlink if not exists

* get rid of double result

* better debug info for ErrorCollection

* chore: Update effects

* chore: fix

* mount assets and volumes

* add exec instead of spawn

* fix mounting in image

* fix overlay mounts

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* misc fixes

* feat: Fix two

* fix: systemForEmbassy main

* chore: Fix small part of main loop

* chore: Modify the bundle

* merge

* fix main loop

* move tsc to makefile

* chore: Update the return types of the health check

* fix client

* chore: Convert the todo to use tsmatches

* add in the fixes for the seen and create the hack to allow demo

* chore: Update to include the systemForStartOs

* chore: Update to the latest types from the expected output

* fixes

* fix typo

* Don't emit if failure on tsc

* wip

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* add s9pk api

* add inspection

* add inspect manifest

* newline after display serializable

* fix squashfs in image name

* edit manifest

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* wait for response on repl

* ignore sig for now

* ignore sig for now

* re-enable sig verification

* fix

* wip

* env and chroot

* add profiling logs

* set uid & gid in squashfs to 100000

* set uid of sqfs to 100000

* fix mksquashfs args

* add env to compat

* fix

* re-add docker feature flag

* fix docker output format being stupid

* here be dragons

* chore: Add in the cross compiling for something

* fix npm link

* extract logs from container on exit

* chore: Update for testing

* add log capture to drop trait

* chore: add in the modifications that I make

* chore: Update small things for no updates

* chore: Update the types of something

* chore: Make main not complain

* idmapped mounts

* idmapped volumes

* re-enable kiosk

* chore: Add in some logging for the new system

* bring in start-sdk

* remove avahi

* chore: Update the deps

* switch to musl

* chore: Update the version of prettier

* chore: Organize

* chore: Update some of the headers back to the standard of fetch

* fix musl build

* fix idmapped mounts

* fix cross build

* use cross compiler for correct arch

* feat: Add in the faked ssl stuff for the effects

* @dr_bonez Did a solution here

* chore: Something that DrBonez

* chore: up

* wip: We have a working server!!!

* wip

* uninstall

* wip

* test

---------

Co-authored-by: J H <dragondef@gmail.com>
Co-authored-by: J H <Blu-J@users.noreply.github.com>
Co-authored-by: J H <2364004+Blu-J@users.noreply.github.com>
This commit is contained in:
Aiden McClelland
2024-02-17 11:14:14 -07:00
committed by GitHub
parent 65009e2f69
commit fab13db4b4
326 changed files with 31708 additions and 13987 deletions


@@ -12,9 +12,6 @@ on:
- dev
- unstable
- dev-unstable
- docker
- dev-docker
- dev-unstable-docker
runner:
type: choice
description: Runner

.gitignore

@@ -28,4 +28,5 @@ secrets.db
/dpkg-workdir
/compiled.tar
/compiled-*.tar
/firmware
/firmware
/tmp


@@ -6,10 +6,10 @@ BASENAME := $(shell ./basename.sh)
PLATFORM := $(shell if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)
ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
BINS := core/target/$(ARCH)-unknown-linux-gnu/release/startbox core/target/aarch64-unknown-linux-musl/release/container-init core/target/x86_64-unknown-linux-musl/release/container-init
BINS := core/target/$(ARCH)-unknown-linux-musl/release/startbox core/target/$(ARCH)-unknown-linux-musl/release/containerbox
WEB_UIS := web/dist/raw/ui web/dist/raw/setup-wizard web/dist/raw/diagnostic-ui web/dist/raw/install-wizard
FIRMWARE_ROMS := ./firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
BUILD_SRC := $(shell git ls-files build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
BUILD_SRC := $(shell git ls-files build) build/lib/depends build/lib/conflicts build/lib/container-runtime/rootfs.squashfs $(FIRMWARE_ROMS)
DEBIAN_SRC := $(shell git ls-files debian/)
IMAGE_RECIPE_SRC := $(shell git ls-files image-recipe/)
STARTD_SRC := core/startos/startd.service $(BUILD_SRC)
@@ -26,7 +26,7 @@ PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client
GZIP_BIN := $(shell which pigz || which gzip)
TAR_BIN := $(shell which gtar || which tar)
COMPILED_TARGETS := $(BINS) system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar
ALL_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-gnu/release/tokio-console; fi') $(PLATFORM_FILE)
ALL_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console; fi') $(PLATFORM_FILE)
ifeq ($(REMOTE),)
mkdir = mkdir -p $1
@@ -49,7 +49,7 @@ endif
.DELETE_ON_ERROR:
.PHONY: all metadata install clean format sdk snapshots uis ui reflash deb $(IMAGE_TYPE) squashfs sudo wormhole test
.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs sudo wormhole test
all: $(ALL_TARGETS)
@@ -74,6 +74,11 @@ clean:
rm -rf image-recipe/deb
rm -rf results
rm -rf build/lib/firmware
rm -rf container-runtime/dist
rm -rf container-runtime/node_modules
rm -f build/lib/container-runtime/rootfs.squashfs
rm -rf sdk/dist
rm -rf sdk/node_modules
rm -f ENVIRONMENT.txt
rm -f PLATFORM.txt
rm -f GIT_HASH.txt
@@ -85,8 +90,8 @@ format:
test: $(CORE_SRC) $(ENVIRONMENT_FILE)
cd core && cargo build && cargo test
sdk:
cd core && ./install-sdk.sh
cli:
cd core && ./install-cli.sh
deb: results/$(BASENAME).deb
@@ -106,15 +111,13 @@ results/$(BASENAME).$(IMAGE_TYPE) results/$(BASENAME).squashfs: $(IMAGE_RECIPE_S
# For creating os images. DO NOT USE
install: $(ALL_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,core/target/$(ARCH)-unknown-linux-gnu/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-sdk)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-deno)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/avahi-alias)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/embassy-cli)
if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then $(call cp,cargo-deps/$(ARCH)-unknown-linux-gnu/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); fi
if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-musl/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then $(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); fi
$(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,core/startos/startd.service,$(DESTDIR)/lib/systemd/system/startd.service)
@@ -128,10 +131,6 @@ install: $(ALL_TARGETS)
$(call cp,GIT_HASH.txt,$(DESTDIR)/usr/lib/startos/GIT_HASH.txt)
$(call cp,VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
$(call mkdir,$(DESTDIR)/usr/lib/startos/container)
$(call cp,core/target/aarch64-unknown-linux-musl/release/container-init,$(DESTDIR)/usr/lib/startos/container/container-init.arm64)
$(call cp,core/target/x86_64-unknown-linux-musl/release/container-init,$(DESTDIR)/usr/lib/startos/container/container-init.amd64)
$(call mkdir,$(DESTDIR)/usr/lib/startos/system-images)
$(call cp,system-images/compat/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/compat.tar)
$(call cp,system-images/utils/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/utils.tar)
@@ -148,8 +147,8 @@ update-overlay: $(ALL_TARGETS)
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) PLATFORM=$(PLATFORM)
$(call ssh,"sudo systemctl start startd")
wormhole: core/target/$(ARCH)-unknown-linux-gnu/release/startbox
@wormhole send core/target/$(ARCH)-unknown-linux-gnu/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
wormhole: core/target/$(ARCH)-unknown-linux-musl/release/startbox
@wormhole send core/target/$(ARCH)-unknown-linux-musl/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
update: $(ALL_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -166,6 +165,26 @@ emulate-reflash: $(ALL_TARGETS)
upload-ota: results/$(BASENAME).squashfs
TARGET=$(TARGET) KEY=$(KEY) ./upload-ota.sh
container-runtime/alpine.squashfs: $(PLATFORM_FILE)
ARCH=$(ARCH) ./container-runtime/download-base-image.sh
container-runtime/node_modules: container-runtime/package.json container-runtime/package-lock.json sdk/dist
npm --prefix container-runtime ci
touch container-runtime/node_modules
sdk/dist: $(shell git ls-files sdk)
(cd sdk && make bundle)
container-runtime/dist: container-runtime/node_modules $(shell git ls-files container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
npm --prefix container-runtime run build
container-runtime/dist/node_modules container-runtime/dist/package.json container-runtime/dist/package-lock.json: container-runtime/package.json container-runtime/package-lock.json sdk/dist container-runtime/install-dist-deps.sh
./container-runtime/install-dist-deps.sh
touch container-runtime/dist/node_modules
build/lib/container-runtime/rootfs.squashfs: container-runtime/alpine.squashfs container-runtime/containerRuntime.rc container-runtime/update-image.sh container-runtime/dist container-runtime/dist/node_modules core/target/$(ARCH)-unknown-linux-musl/release/containerbox $(PLATFORM_FILE) | sudo
ARCH=$(ARCH) ./container-runtime/update-image.sh
build/lib/depends build/lib/conflicts: build/dpkg-deps/*
build/dpkg-deps/generate.sh
@@ -181,10 +200,6 @@ system-images/utils/docker-images/$(ARCH).tar: $(UTILS_SRC)
system-images/binfmt/docker-images/$(ARCH).tar: $(BINFMT_SRC)
cd system-images/binfmt && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
snapshots: core/snapshot-creator/Cargo.toml
cd core/ && ARCH=aarch64 ./build-v8-snapshot.sh
cd core/ && ARCH=x86_64 ./build-v8-snapshot.sh
$(BINS): $(CORE_SRC) $(ENVIRONMENT_FILE)
cd core && ARCH=$(ARCH) ./build-prod.sh
touch $(BINS)
@@ -231,8 +246,8 @@ uis: $(WEB_UIS)
# this is a convenience step to build the UI
ui: web/dist/raw/ui
cargo-deps/aarch64-unknown-linux-gnu/release/pi-beep:
cargo-deps/aarch64-unknown-linux-musl/release/pi-beep:
ARCH=aarch64 ./build-cargo-dep.sh pi-beep
cargo-deps/$(ARCH)-unknown-linux-gnu/release/tokio-console:
cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console:
ARCH=$(ARCH) ./build-cargo-dep.sh tokio-console


@@ -18,8 +18,8 @@ if [ -z "$ARCH" ]; then
fi
mkdir -p cargo-deps
alias 'rust-arm64-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)"/cargo-deps:/home/rust/src -P start9/rust-arm-cross:aarch64'
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
rust-arm64-builder cargo install "$1" --target-dir /home/rust/src --target=$ARCH-unknown-linux-gnu
rust-musl-builder cargo install "$1" --target-dir /home/rust/src --target=$ARCH-unknown-linux-musl
sudo chown -R $USER cargo-deps
sudo chown -R $USER ~/.cargo

build/.gitignore

@@ -1,2 +1,3 @@
lib/depends
lib/conflicts
/lib/depends
/lib/conflicts
/lib/container-runtime/rootfs.squashfs


@@ -20,12 +20,12 @@ httpdirfs
iotop
iw
jq
libavahi-client3
libyajl2
linux-cpupower
lm-sensors
lshw
lvm2
lxc
magic-wormhole
man-db
ncdu


@@ -1,5 +0,0 @@
+ containerd.io
+ docker-ce
+ docker-ce-cli
+ docker-compose-plugin
- podman


@@ -1,13 +1,13 @@
[
{
"id": "pureboot-librem_mini_v2-basic_usb_autoboot_blob_jail-Release-28.3",
"id": "pureboot-librem_mini_v2-basic_usb_autoboot_blob_jail-Release-29",
"platform": ["x86_64"],
"system-product-name": "librem_mini_v2",
"bios-version": {
"semver-prefix": "PureBoot-Release-",
"semver-range": "<28.3"
"semver-range": "<29"
},
"url": "https://source.puri.sm/firmware/releases/-/raw/master/librem_mini_v2/custom/pureboot-librem_mini_v2-basic_usb_autoboot_blob_jail-Release-28.3.rom.gz",
"shasum": "5019bcf53f7493c7aa74f8ef680d18b5fc26ec156c705a841433aaa2fdef8f35"
"url": "https://source.puri.sm/firmware/releases/-/raw/master/librem_mini_v2/custom/pureboot-librem_mini_v2-basic_usb_autoboot_blob_jail-Release-29.rom.gz",
"shasum": "96ec04f21b1cfe8e28d9a2418f1ff533efe21f9bbbbf16e162f7c814761b068b"
}
]


@@ -3,4 +3,6 @@ dist/
bundle.js
startInit.js
service/
service.js
service.js
alpine.squashfs
/tmp


@@ -0,0 +1,4 @@
FROM node:18-alpine
ADD ./startInit.js /usr/local/lib/startInit.js
ADD ./entrypoint.sh /usr/local/bin/entrypoint.sh


@@ -0,0 +1,59 @@
# Container RPC Server Specification
## Methods
### init
initialize the runtime (bind-mount `/proc`, `/sys`, `/dev`, and `/run` into each image in `/media/images`)
called after the OS has mounted the JS bundle and images into the container
#### args
`[]`
#### response
`null`
### exit
shutdown runtime
#### args
`[]`
#### response
`null`
### start
run main method if not already running
#### args
`[]`
#### response
`null`
### stop
stop the main method by sending SIGTERM to its child processes, and SIGKILL after the timeout
#### args
`{ timeout: millis }`
#### response
`null`
### execute
run a specific package procedure
#### args
```ts
{
procedure: JsonPath,
input: any,
timeout: millis,
}
```
#### response
`any`
### sandbox
run a specific package procedure in sandbox mode
#### args
```ts
{
procedure: JsonPath,
input: any,
timeout: millis,
}
```
#### response
`any`
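An illustrative request/response for `execute` (the `procedure` path, `input`, and `timeout` values here are invented; the spec above only fixes the shape):

```shell
# Build an "execute" request matching the spec's argument shape.
# The runtime replies with {"id":...,"result":...} on success, or
# {"id":...,"error":{"message":...}} on failure.
req='{"id":0,"method":"execute","params":{"procedure":"/actions/doSomething","input":null,"timeout":30000}}'
echo "$req"
```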


@@ -0,0 +1,10 @@
#!/sbin/openrc-run
name=containerRuntime
#cfgfile="/etc/containerRuntime/containerRuntime.conf"
command="/usr/bin/node"
command_args="--experimental-detect-module --unhandled-rejections=warn /usr/lib/startos/init/index.js"
pidfile="/run/containerRuntime.pid"
command_background="yes"
output_log="/var/log/containerRuntime.log"
error_log="/var/log/containerRuntime.err"


@@ -0,0 +1,18 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
DISTRO=alpine
VERSION=3.19
ARCH=${ARCH:-$(uname -m)}
FLAVOR=default
if [ "$ARCH" = "x86_64" ]; then
ARCH=amd64
elif [ "$ARCH" = "aarch64" ]; then
ARCH=arm64
fi
curl https://images.linuxcontainers.org/$(curl --silent https://images.linuxcontainers.org/meta/1.0/index-system | grep "^$DISTRO;$VERSION;$ARCH;$FLAVOR;" | head -n1 | sed 's/^.*;//g')/rootfs.squashfs --output alpine.squashfs
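The index fetched above is semicolon-delimited, one build per line, with the build path in the last field; the `grep`/`sed` pair selects the first matching line and strips everything up to the final semicolon. A sketch on a fabricated index line (the timestamp and path are invented for illustration):

```shell
# Same grep/sed selection as download-base-image.sh, on a made-up line.
DISTRO=alpine VERSION=3.19 ARCH=amd64 FLAVOR=default
line='alpine;3.19;amd64;default;20240215_13:00;images/alpine/3.19/amd64/default/20240215_13:00/'
path=$(echo "$line" | grep "^$DISTRO;$VERSION;$ARCH;$FLAVOR;" | head -n1 | sed 's/^.*;//g')
echo "$path"   # → images/alpine/3.19/amd64/default/20240215_13:00/
```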


@@ -1,22 +0,0 @@
export class CallbackHolder {
constructor() {
}
private root = (Math.random() + 1).toString(36).substring(7);
private inc = 0
private callbacks = new Map<string, Function>()
private newId() {
return this.root + (this.inc++).toString(36)
}
addCallback(callback: Function) {
return this.callbacks.set(this.newId(), callback);
}
callCallback(index: string, args: any[]): Promise<unknown> {
const callback = this.callbacks.get(index)
if (!callback) throw new Error(`Callback ${index} does not exist`)
this.callbacks.delete(index)
return Promise.resolve().then(() => callback(...args))
}
}


@@ -1,184 +0,0 @@
import * as T from "@start9labs/start-sdk/lib/types"
import * as net from "net"
import { CallbackHolder } from "./CallbackHolder"
const SOCKET_PATH = "/start9/sockets/startDaemon.sock"
const MAIN = "main" as const
export class Effects implements T.Effects {
constructor(readonly method: string, readonly callbackHolder: CallbackHolder) {}
id = 0
rpcRound(method: string, params: unknown) {
const id = this.id++;
const client = net.createConnection(SOCKET_PATH, () => {
client.write(JSON.stringify({
id,
method,
params
}));
});
return new Promise((resolve, reject) => {
client.on('data', (data) => {
try {
resolve(JSON.parse(data.toString())?.result)
} catch (error) {
reject(error)
}
client.end();
});
})
}
started= this.method !== MAIN ? null : ()=> {
return this.rpcRound('started', null)
}
bind(...[options]: Parameters<T.Effects["bind"]>) {
return this.rpcRound('bind', (options)) as ReturnType<T.Effects["bind"]>
}
clearBindings(...[]: Parameters<T.Effects["clearBindings"]>) {
return this.rpcRound('clearBindings', null) as ReturnType<T.Effects["clearBindings"]>
}
clearNetworkInterfaces(
...[]: Parameters<T.Effects["clearNetworkInterfaces"]>
) {
return this.rpcRound('clearNetworkInterfaces', null) as ReturnType<T.Effects["clearNetworkInterfaces"]>
}
executeAction(...[options]: Parameters<T.Effects["executeAction"]>) {
return this.rpcRound('executeAction', options) as ReturnType<T.Effects["executeAction"]>
}
exists(...[packageId]: Parameters<T.Effects["exists"]>) {
return this.rpcRound('exists', packageId) as ReturnType<T.Effects["exists"]>
}
exportAction(...[options]: Parameters<T.Effects["exportAction"]>) {
return this.rpcRound('exportAction', (options)) as ReturnType<T.Effects["exportAction"]>
}
exportNetworkInterface(
...[options]: Parameters<T.Effects["exportNetworkInterface"]>
) {
return this.rpcRound('exportNetworkInterface', (options)) as ReturnType<T.Effects["exportNetworkInterface"]>
}
exposeForDependents(...[options]: any) {
return this.rpcRound('exposeForDependents', (null)) as ReturnType<T.Effects["exposeForDependents"]>
}
exposeUi(...[options]: Parameters<T.Effects["exposeUi"]>) {
return this.rpcRound('exposeUi', (options)) as ReturnType<T.Effects["exposeUi"]>
}
getConfigured(...[]: Parameters<T.Effects["getConfigured"]>) {
return this.rpcRound('getConfigured',null) as ReturnType<T.Effects["getConfigured"]>
}
getContainerIp(...[]: Parameters<T.Effects["getContainerIp"]>) {
return this.rpcRound('getContainerIp', null) as ReturnType<T.Effects["getContainerIp"]>
}
getHostnames: any = (...[allOptions]: any[]) => {
const options = {
...allOptions,
callback: this.callbackHolder.addCallback(allOptions.callback)
}
return this.rpcRound('getHostnames', options) as ReturnType<T.Effects["getHostnames"]>
}
getInterface(...[options]: Parameters<T.Effects["getInterface"]>) {
return this.rpcRound('getInterface', {...options, callback: this.callbackHolder.addCallback(options.callback)}) as ReturnType<T.Effects["getInterface"]>
}
getIPHostname(...[]: Parameters<T.Effects["getIPHostname"]>) {
return this.rpcRound('getIPHostname', (null)) as ReturnType<T.Effects["getIPHostname"]>
}
getLocalHostname(...[]: Parameters<T.Effects["getLocalHostname"]>) {
return this.rpcRound('getLocalHostname', null) as ReturnType<T.Effects["getLocalHostname"]>
}
getPrimaryUrl(...[options]: Parameters<T.Effects["getPrimaryUrl"]>) {
return this.rpcRound('getPrimaryUrl', {...options, callback: this.callbackHolder.addCallback(options.callback)}) as ReturnType<T.Effects["getPrimaryUrl"]>
}
getServicePortForward(
...[options]: Parameters<T.Effects["getServicePortForward"]>
) {
return this.rpcRound('getServicePortForward', (options)) as ReturnType<T.Effects["getServicePortForward"]>
}
getServiceTorHostname(
...[interfaceId, packageId]: Parameters<T.Effects["getServiceTorHostname"]>
) {
return this.rpcRound('getServiceTorHostname', ({interfaceId, packageId})) as ReturnType<T.Effects["getServiceTorHostname"]>
}
getSslCertificate(...[packageId, algorithm]: Parameters<T.Effects["getSslCertificate"]>) {
return this.rpcRound('getSslCertificate', ({packageId, algorithm})) as ReturnType<T.Effects["getSslCertificate"]>
}
getSslKey(...[packageId, algorithm]: Parameters<T.Effects["getSslKey"]>) {
return this.rpcRound('getSslKey', ({packageId, algorithm})) as ReturnType<T.Effects["getSslKey"]>
}
getSystemSmtp(...[options]: Parameters<T.Effects["getSystemSmtp"]>) {
return this.rpcRound('getSystemSmtp', {...options, callback: this.callbackHolder.addCallback(options.callback)}) as ReturnType<T.Effects["getSystemSmtp"]>
}
is_sandboxed(...[]: Parameters<T.Effects["is_sandboxed"]>) {
return this.rpcRound('is_sandboxed', (null)) as ReturnType<T.Effects["is_sandboxed"]>
}
listInterface(...[options]: Parameters<T.Effects["listInterface"]>) {
return this.rpcRound('listInterface', {...options, callback: this.callbackHolder.addCallback(options.callback)}) as ReturnType<T.Effects["listInterface"]>
}
mount(...[options]: Parameters<T.Effects["mount"]>) {
return this.rpcRound('mount', options) as ReturnType<T.Effects["mount"]>
}
removeAction(...[options]: Parameters<T.Effects["removeAction"]>) {
return this.rpcRound('removeAction', options) as ReturnType<T.Effects["removeAction"]>
}
removeAddress(...[options]: Parameters<T.Effects["removeAddress"]>) {
return this.rpcRound('removeAddress', options) as ReturnType<T.Effects["removeAddress"]>
}
restart(...[]: Parameters<T.Effects["restart"]>) {
this.rpcRound('restart', null)
}
reverseProxy(...[options]: Parameters<T.Effects["reverseProxy"]>) {
return this.rpcRound('reverseProxy', options) as ReturnType<T.Effects["reverseProxy"]>
}
running(...[packageId]: Parameters<T.Effects["running"]>) {
return this.rpcRound('running', {packageId}) as ReturnType<T.Effects["running"]>
}
// runRsync(...[options]: Parameters<T.Effects[""]>) {
//
// return this.rpcRound('executeAction', options) as ReturnType<T.Effects["executeAction"]>
//
// return this.rpcRound('executeAction', options) as ReturnType<T.Effects["executeAction"]>
// }
setConfigured(...[configured]: Parameters<T.Effects["setConfigured"]>) {
return this.rpcRound('setConfigured', {configured}) as ReturnType<T.Effects["setConfigured"]>
}
setDependencies(...[dependencies]: Parameters<T.Effects["setDependencies"]>) {
return this.rpcRound('setDependencies', {dependencies}) as ReturnType<T.Effects["setDependencies"]>
}
setHealth(...[options]: Parameters<T.Effects["setHealth"]>) {
return this.rpcRound('setHealth', options) as ReturnType<T.Effects["setHealth"]>
}
shutdown(...[]: Parameters<T.Effects["shutdown"]>) {
return this.rpcRound('shutdown', null)
}
stopped(...[packageId]: Parameters<T.Effects["stopped"]>) {
return this.rpcRound('stopped', {packageId}) as ReturnType<T.Effects["stopped"]>
}
store: T.Effects['store'] = {
get:(options) => this.rpcRound('getStore', {...options, callback: this.callbackHolder.addCallback(options.callback)}) as ReturnType<T.Effects["store"]['get']>,
set:(options) => this.rpcRound('setStore', options) as ReturnType<T.Effects["store"]['set']>
}
}


@@ -1,177 +0,0 @@
// @ts-check
import * as net from "net"
import {
object,
some,
string,
literal,
array,
number,
matches,
} from "ts-matches"
import { Effects } from "./Effects"
import { CallbackHolder } from "./CallbackHolder"
import * as CP from "child_process"
import * as Mod from "module"
const SOCKET_PATH = "/start9/sockets/rpc.sock"
const LOCATION_OF_SERVICE_JS = "/services/service.js"
const childProcesses = new Map<number, CP.ChildProcess[]>()
let childProcessIndex = 0
const require = Mod.prototype.require
const setupRequire = () => {
const requireChildProcessIndex = childProcessIndex++
// @ts-ignore
Mod.prototype.require = (name, ...rest) => {
if (["child_process", "node:child_process"].indexOf(name) !== -1) {
return {
exec(...args: any[]) {
const returning = CP.exec.apply(null, args as any)
const childProcessArray =
childProcesses.get(requireChildProcessIndex) ?? []
childProcessArray.push(returning)
childProcesses.set(requireChildProcessIndex, childProcessArray)
return returning
},
execFile(...args: any[]) {
const returning = CP.execFile.apply(null, args as any)
const childProcessArray =
childProcesses.get(requireChildProcessIndex) ?? []
childProcessArray.push(returning)
childProcesses.set(requireChildProcessIndex, childProcessArray)
return returning
},
execFileSync: CP.execFileSync,
execSync: CP.execSync,
fork(...args: any[]) {
const returning = CP.fork.apply(null, args as any)
const childProcessArray =
childProcesses.get(requireChildProcessIndex) ?? []
childProcessArray.push(returning)
childProcesses.set(requireChildProcessIndex, childProcessArray)
return returning
},
spawn(...args: any[]) {
const returning = CP.spawn.apply(null, args as any)
const childProcessArray =
childProcesses.get(requireChildProcessIndex) ?? []
childProcessArray.push(returning)
childProcesses.set(requireChildProcessIndex, childProcessArray)
return returning
},
spawnSync: CP.spawnSync,
} as typeof CP
}
console.log("require", name)
return require(name, ...rest)
}
return requireChildProcessIndex
}
const cleanupRequire = (requireChildProcessIndex: number) => {
const foundChildren = childProcesses.get(requireChildProcessIndex)
if (!foundChildren) return
childProcesses.delete(requireChildProcessIndex)
foundChildren.forEach((x) => x.kill())
}
const idType = some(string, number)
const runType = object({
id: idType,
method: literal("run"),
params: object({
methodName: string.map((x) => {
const splitValue = x.split("/")
if (splitValue.length === 1)
throw new Error(`X (${x}) is not a valid path`)
return splitValue.slice(1)
}),
methodArgs: object,
}),
})
const callbackType = object({
id: idType,
method: literal("callback"),
params: object({
callback: string,
args: array,
}),
})
const dealWithInput = async (callbackHolder: CallbackHolder, input: unknown) =>
matches(input)
.when(runType, async ({ id, params: { methodName, methodArgs } }) => {
const index = setupRequire()
const effects = new Effects(`/${methodName.join("/")}`, callbackHolder)
// @ts-ignore
return import(LOCATION_OF_SERVICE_JS)
.then((x) => methodName.reduce(reduceMethod(methodArgs, effects), x))
.then()
.then((result) => ({ id, result }))
.catch((error) => ({
id,
error: { message: error?.message ?? String(error) },
}))
.finally(() => cleanupRequire(index))
})
.when(callbackType, async ({ id, params: { callback, args } }) =>
Promise.resolve(callbackHolder.callCallback(callback, args))
.then((result) => ({ id, result }))
.catch((error) => ({
id,
error: { message: error?.message ?? String(error) },
})),
)
.defaultToLazy(() => {
console.warn(`Coudln't parse the following input ${input}`)
return {
error: { message: "Could not figure out shape" },
}
})
const jsonParse = (x: Buffer) => JSON.parse(x.toString())
export class Runtime {
unixSocketServer = net.createServer(async (server) => {})
private callbacks = new CallbackHolder()
constructor() {
this.unixSocketServer.listen(SOCKET_PATH)
this.unixSocketServer.on("connection", (s) => {
s.on("data", (a) =>
Promise.resolve(a)
.then(jsonParse)
.then(dealWithInput.bind(null, this.callbacks))
.then((x) => {
console.log("x", JSON.stringify(x), typeof x)
return x
})
.catch((error) => ({
error: { message: error?.message ?? String(error) },
}))
.then(JSON.stringify)
.then((x) => new Promise((resolve) => s.write("" + x, resolve)))
.finally(() => void s.end()),
)
})
}
}
function reduceMethod(
methodArgs: object,
effects: Effects,
): (previousValue: any, currentValue: string) => any {
return (x: any, method: string) =>
Promise.resolve(x)
.then((x) => x[method])
.then((x) =>
typeof x !== "function"
? x
: x({
...methodArgs,
effects,
}),
)
}
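The path walk inside `reduceMethod` can be sketched on its own: each path segment indexes into the previous value, and if the leaf is a function it is invoked with the method args. The names below (`walk`, `fakeModule`) are illustrative only, not part of the runtime:

```typescript
// Minimal sketch of the reduceMethod walk, assuming a plain object module.
type AnyObj = Record<string, any>

const walk = (root: AnyObj, path: string[], args: object) =>
  path.reduce<Promise<any>>(
    (acc, key) =>
      acc
        .then((x) => x[key]) // step into the next segment
        .then((x) => (typeof x !== "function" ? x : x({ ...args }))), // invoke leaves
    Promise.resolve(root),
  )

const fakeModule = {
  config: { get: ({ name }: { name: string }) => `cfg:${name}` },
}
walk(fakeModule, ["config", "get"], { name: "demo" }).then(console.log) // "cfg:demo"
```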

View File

@@ -0,0 +1,10 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
sed 's/file:\.\([.\/]\)/file:..\/.\1/g' ./package.json > ./dist/package.json
sed 's/"\.\([.\/]\)/"..\/.\1/g' ./package-lock.json > ./dist/package-lock.json
npm --prefix dist ci --omit=dev

View File

@@ -0,0 +1,28 @@
#!/bin/bash
set -e
IMAGE=$1
if [ -z "$IMAGE" ]; then
>&2 echo "usage: $0 <image id>"
exit 1
fi
if ! [ -d "/media/images/$IMAGE" ]; then
>&2 echo "image does not exist"
exit 1
fi
container=$(mktemp -d)
mkdir -p "$container/rootfs" "$container/upper" "$container/work"
mount -t overlay -o "lowerdir=/media/images/$IMAGE,upperdir=$container/upper,workdir=$container/work" overlay "$container/rootfs"
rootfs=$container/rootfs
for special in dev sys proc run; do
mkdir -p "$rootfs/$special"
mount --bind "/$special" "$rootfs/$special"
done
echo "$rootfs"

File diff suppressed because it is too large Load Diff

View File

@@ -2,10 +2,11 @@
"name": "start-init",
"version": "0.0.0",
"description": "We want to be the sdk intermediary for the system",
"module": "./index.js",
"scripts": {
"bundle:esbuild": "esbuild initSrc/index.ts --platform=node --bundle --outfile=startInit.js",
"bundle:service": "esbuild /service/startos/procedures/index.ts --platform=node --bundle --outfile=service.js",
"run:manifest": "esbuild /service/startos/procedures/index.ts --platform=node --bundle --outfile=service.js"
"check": "tsc --noEmit",
"build": "prettier --write '**/*.ts' && rm -rf dist && tsc",
"tsc": "rm -rf dist; tsc"
},
"author": "",
"prettier": {
@@ -16,11 +17,11 @@
},
"dependencies": {
"@iarna/toml": "^2.2.5",
"@start9labs/start-sdk": "=0.4.0-rev0.lib0.rc8.alpha3",
"esbuild": "0.18.4",
"@start9labs/start-sdk": "file:../sdk/dist",
"esbuild-plugin-resolve": "^2.0.0",
"filebrowser": "^1.0.0",
"isomorphic-fetch": "^3.0.0",
"node-fetch": "^3.1.0",
"ts-matches": "^5.4.1",
"tslib": "^2.5.3",
"typescript": "^5.1.3",
@@ -29,8 +30,8 @@
"devDependencies": {
"@swc/cli": "^0.1.62",
"@swc/core": "^1.3.65",
"@types/node": "^20.2.5",
"prettier": "^2.8.8",
"rollup": "^3.25.1"
"@types/node": "^20.11.13",
"prettier": "^3.2.5",
"typescript": ">5.2"
}
}

View File

@@ -0,0 +1,12 @@
#!/bin/bash
set -e
rootfs=$1
if [ -z "$rootfs" ]; then
>&2 echo "usage: $0 <container rootfs path>"
exit 1
fi
umount --recursive "$rootfs"
rm -rf "$rootfs/.."

View File

@@ -0,0 +1,320 @@
import { types as T } from "@start9labs/start-sdk"
import * as net from "net"
import { object, string, number, literals, some, unknown } from "ts-matches"
import { Effects } from "../Models/Effects"
import { CallbackHolder } from "../Models/CallbackHolder"
const matchRpcError = object({
error: object(
{
code: number,
message: string,
data: some(
string,
object(
{
details: string,
debug: string,
},
["debug"],
),
),
},
["data"],
),
})
const testRpcError = matchRpcError.test
const testRpcResult = object({
result: unknown,
}).test
type RpcError = typeof matchRpcError._TYPE
const SOCKET_PATH = "/media/startos/rpc/host.sock"
const MAIN = "/main" as const
export class HostSystemStartOs implements Effects {
static of(callbackHolder: CallbackHolder) {
return new HostSystemStartOs(callbackHolder)
}
constructor(readonly callbackHolder: CallbackHolder) {}
id = 0
rpcRound(method: string, params: unknown) {
const id = this.id++
const client = net.createConnection({ path: SOCKET_PATH }, () => {
client.write(
JSON.stringify({
id,
method,
params,
}) + "\n",
)
})
let bufs: Buffer[] = []
return new Promise((resolve, reject) => {
client.on("data", (data) => {
try {
bufs.push(data)
if (data.includes(10)) {
const res: unknown = JSON.parse(
Buffer.concat(bufs).toString().split("\n")[0],
)
if (testRpcError(res)) {
let message = res.error.message
console.error({ method, params, hostSystemStartOs: true })
if (string.test(res.error.data)) {
message += ": " + res.error.data
console.error(res.error.data)
} else {
if (res.error.data?.details) {
message += ": " + res.error.data.details
console.error(res.error.data.details)
}
if (res.error.data?.debug) {
message += "\n" + res.error.data.debug
console.error("Debug: " + res.error.data.debug)
}
}
reject(new Error(message))
} else if (testRpcResult(res)) {
resolve(res.result)
} else {
reject(new Error(`malformed response ${JSON.stringify(res)}`))
}
}
} catch (error) {
reject(error)
}
client.end()
})
client.on("error", (error) => {
reject(error)
})
})
}
started =
// @ts-ignore
this.method !== MAIN
? null
: () => {
return this.rpcRound("started", null)
}
bind(...[options]: Parameters<T.Effects["bind"]>) {
return this.rpcRound("bind", options) as ReturnType<T.Effects["bind"]>
}
clearBindings(...[]: Parameters<T.Effects["clearBindings"]>) {
return this.rpcRound("clearBindings", null) as ReturnType<
T.Effects["clearBindings"]
>
}
clearNetworkInterfaces(
...[]: Parameters<T.Effects["clearNetworkInterfaces"]>
) {
return this.rpcRound("clearNetworkInterfaces", null) as ReturnType<
T.Effects["clearNetworkInterfaces"]
>
}
createOverlayedImage(options: { imageId: string }): Promise<string> {
return this.rpcRound("createOverlayedImage", options) as ReturnType<
T.Effects["createOverlayedImage"]
>
}
executeAction(...[options]: Parameters<T.Effects["executeAction"]>) {
return this.rpcRound("executeAction", options) as ReturnType<
T.Effects["executeAction"]
>
}
exists(...[packageId]: Parameters<T.Effects["exists"]>) {
return this.rpcRound("exists", packageId) as ReturnType<T.Effects["exists"]>
}
exportAction(...[options]: Parameters<T.Effects["exportAction"]>) {
return this.rpcRound("exportAction", options) as ReturnType<
T.Effects["exportAction"]
>
}
exportNetworkInterface(
...[options]: Parameters<T.Effects["exportNetworkInterface"]>
) {
return this.rpcRound("exportNetworkInterface", options) as ReturnType<
T.Effects["exportNetworkInterface"]
>
}
exposeForDependents(...[options]: any) {
return this.rpcRound("exposeForDependents", null) as ReturnType<
T.Effects["exposeForDependents"]
>
}
exposeUi(...[options]: Parameters<T.Effects["exposeUi"]>) {
return this.rpcRound("exposeUi", options) as ReturnType<
T.Effects["exposeUi"]
>
}
getConfigured(...[]: Parameters<T.Effects["getConfigured"]>) {
return this.rpcRound("getConfigured", null) as ReturnType<
T.Effects["getConfigured"]
>
}
getContainerIp(...[]: Parameters<T.Effects["getContainerIp"]>) {
return this.rpcRound("getContainerIp", null) as ReturnType<
T.Effects["getContainerIp"]
>
}
getHostnames: any = (...[allOptions]: any[]) => {
const options = {
...allOptions,
callback: this.callbackHolder.addCallback(allOptions.callback),
}
return this.rpcRound("getHostnames", options) as ReturnType<
T.Effects["getHostnames"]
>
}
getInterface(...[options]: Parameters<T.Effects["getInterface"]>) {
return this.rpcRound("getInterface", {
...options,
callback: this.callbackHolder.addCallback(options.callback),
}) as ReturnType<T.Effects["getInterface"]>
}
getIPHostname(...[]: Parameters<T.Effects["getIPHostname"]>) {
return this.rpcRound("getIPHostname", null) as ReturnType<
T.Effects["getIPHostname"]
>
}
getLocalHostname(...[]: Parameters<T.Effects["getLocalHostname"]>) {
return this.rpcRound("getLocalHostname", null) as ReturnType<
T.Effects["getLocalHostname"]
>
}
getPrimaryUrl(...[options]: Parameters<T.Effects["getPrimaryUrl"]>) {
return this.rpcRound("getPrimaryUrl", {
...options,
callback: this.callbackHolder.addCallback(options.callback),
}) as ReturnType<T.Effects["getPrimaryUrl"]>
}
getServicePortForward(
...[options]: Parameters<T.Effects["getServicePortForward"]>
) {
return this.rpcRound("getServicePortForward", options) as ReturnType<
T.Effects["getServicePortForward"]
>
}
getServiceTorHostname(
...[interfaceId, packageId]: Parameters<T.Effects["getServiceTorHostname"]>
) {
return this.rpcRound("getServiceTorHostname", {
interfaceId,
packageId,
}) as ReturnType<T.Effects["getServiceTorHostname"]>
}
getSslCertificate(
...[packageId, algorithm]: Parameters<T.Effects["getSslCertificate"]>
) {
return this.rpcRound("getSslCertificate", {
packageId,
algorithm,
}) as ReturnType<T.Effects["getSslCertificate"]>
}
getSslKey(...[packageId, algorithm]: Parameters<T.Effects["getSslKey"]>) {
return this.rpcRound("getSslKey", { packageId, algorithm }) as ReturnType<
T.Effects["getSslKey"]
>
}
getSystemSmtp(...[options]: Parameters<T.Effects["getSystemSmtp"]>) {
return this.rpcRound("getSystemSmtp", {
...options,
callback: this.callbackHolder.addCallback(options.callback),
}) as ReturnType<T.Effects["getSystemSmtp"]>
}
listInterface(...[options]: Parameters<T.Effects["listInterface"]>) {
return this.rpcRound("listInterface", {
...options,
callback: this.callbackHolder.addCallback(options.callback),
}) as ReturnType<T.Effects["listInterface"]>
}
mount(...[options]: Parameters<T.Effects["mount"]>) {
return this.rpcRound("mount", options) as ReturnType<T.Effects["mount"]>
}
removeAction(...[options]: Parameters<T.Effects["removeAction"]>) {
return this.rpcRound("removeAction", options) as ReturnType<
T.Effects["removeAction"]
>
}
removeAddress(...[options]: Parameters<T.Effects["removeAddress"]>) {
return this.rpcRound("removeAddress", options) as ReturnType<
T.Effects["removeAddress"]
>
}
restart(...[]: Parameters<T.Effects["restart"]>) {
return this.rpcRound("restart", null)
}
reverseProxy(...[options]: Parameters<T.Effects["reverseProxy"]>) {
return this.rpcRound("reverseProxy", options) as ReturnType<
T.Effects["reverseProxy"]
>
}
running(...[packageId]: Parameters<T.Effects["running"]>) {
return this.rpcRound("running", { packageId }) as ReturnType<
T.Effects["running"]
>
}
// runRsync(...[options]: Parameters<T.Effects[""]>) {
//   return this.rpcRound("executeAction", options) as ReturnType<T.Effects["executeAction"]>
// }
setConfigured(...[configured]: Parameters<T.Effects["setConfigured"]>) {
return this.rpcRound("setConfigured", { configured }) as ReturnType<
T.Effects["setConfigured"]
>
}
setDependencies(
...[dependencies]: Parameters<T.Effects["setDependencies"]>
): ReturnType<T.Effects["setDependencies"]> {
return this.rpcRound("setDependencies", { dependencies }) as ReturnType<
T.Effects["setDependencies"]
>
}
setHealth(...[options]: Parameters<T.Effects["setHealth"]>) {
return this.rpcRound("setHealth", options) as ReturnType<
T.Effects["setHealth"]
>
}
setMainStatus(o: { status: "running" | "stopped" }): Promise<void> {
return this.rpcRound("setMainStatus", o) as Promise<void>
}
shutdown(...[]: Parameters<T.Effects["shutdown"]>) {
return this.rpcRound("shutdown", null)
}
stopped(...[packageId]: Parameters<T.Effects["stopped"]>) {
return this.rpcRound("stopped", { packageId }) as ReturnType<
T.Effects["stopped"]
>
}
store: T.Effects["store"] = {
get: async (options: any) =>
this.rpcRound("getStore", {
...options,
callback: this.callbackHolder.addCallback(options.callback),
}) as any,
set: async (options: any) =>
this.rpcRound("setStore", options) as ReturnType<
T.Effects["store"]["set"]
>,
}
/**
 * Legacy helper for fetching interface information (tor/lan addresses and
 * keys) in the old embassy style.
 */
embassyGetInterface(options: {
target: "tor-key" | "tor-address" | "lan-address"
packageId: string
interface: string
}) {
return this.rpcRound("embassyGetInterface", options) as Promise<string>
}
}
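`rpcRound` above frames messages as newline-delimited JSON: chunks are buffered until a byte `10` (`"\n"`) arrives, then everything before the first newline is parsed. A standalone sketch of that receive side (the helper name `extractFrame` is hypothetical):

```typescript
// Accumulate socket chunks; return the parsed frame once a newline appears,
// or undefined while the frame is still incomplete.
function extractFrame(bufs: Buffer[]): unknown | undefined {
  const joined = Buffer.concat(bufs)
  const newlineAt = joined.indexOf(10) // 10 === "\n".charCodeAt(0)
  if (newlineAt === -1) return undefined // frame not complete yet
  return JSON.parse(joined.subarray(0, newlineAt).toString())
}

// A response split across two TCP chunks still parses once the newline arrives.
const chunks = [Buffer.from('{"id":1,"res'), Buffer.from('ult":42}\n')]
console.log(extractFrame(chunks)) // { id: 1, result: 42 }
```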

View File

@@ -0,0 +1,303 @@
// @ts-check
import * as net from "net"
import {
object,
some,
string,
literal,
array,
number,
matches,
any,
shape,
} from "ts-matches"
import { types as T } from "@start9labs/start-sdk"
import * as CP from "child_process"
import * as Mod from "module"
import * as fs from "fs"
import { CallbackHolder } from "../Models/CallbackHolder"
import { AllGetDependencies } from "../Interfaces/AllGetDependencies"
import { HostSystem } from "../Interfaces/HostSystem"
import { jsonPath } from "../Models/JsonPath"
import { System } from "../Interfaces/System"
type MaybePromise<T> = T | Promise<T>
type SocketResponse = { jsonrpc: "2.0"; id: IdType } & (
| { result: unknown }
| {
error: {
code: number
message: string
data: { details: string; debug?: string }
}
}
)
const SOCKET_PARENT = "/media/startos/rpc"
const SOCKET_PATH = "/media/startos/rpc/service.sock"
const jsonrpc = "2.0" as const
const idType = some(string, number, literal(null))
type IdType = null | string | number
const runType = object({
id: idType,
method: literal("execute"),
params: object(
{
procedure: string,
input: any,
timeout: number,
},
["timeout"],
),
})
const sandboxRunType = object({
id: idType,
method: literal("sandbox"),
params: object(
{
procedure: string,
input: any,
timeout: number,
},
["timeout"],
),
})
const callbackType = object({
id: idType,
method: literal("callback"),
params: object({
callback: string,
args: array,
}),
})
const initType = object({
id: idType,
method: literal("init"),
})
const exitType = object({
id: idType,
method: literal("exit"),
})
const evalType = object({
id: idType,
method: literal("eval"),
params: object({
script: string,
}),
})
const jsonParse = (x: Buffer) => JSON.parse(x.toString())
function reduceMethod(
methodArgs: object,
effects: HostSystem,
): (previousValue: any, currentValue: string) => any {
return (x: any, method: string) =>
Promise.resolve(x)
.then((x) => x[method])
.then((x) =>
typeof x !== "function"
? x
: x({
...methodArgs,
effects,
}),
)
}
const hasId = object({ id: idType }).test
export class RpcListener {
unixSocketServer = net.createServer()
private _system: System | undefined
private _effects: HostSystem | undefined
constructor(
readonly getDependencies: AllGetDependencies,
private callbacks = new CallbackHolder(),
) {
if (!fs.existsSync(SOCKET_PARENT)) {
fs.mkdirSync(SOCKET_PARENT, { recursive: true })
}
this.unixSocketServer.listen(SOCKET_PATH)
this.unixSocketServer.on("connection", (s) => {
let id: IdType = null
const captureId = <X>(x: X) => {
if (hasId(x)) id = x.id
return x
}
const logData =
(location: string) =>
<X>(x: X) => {
console.log({
location,
stringified: JSON.stringify(x),
type: typeof x,
id,
})
return x
}
const mapError = (error: any): SocketResponse => ({
jsonrpc,
id,
error: {
message: typeof error,
data: {
details: error?.message ?? String(error),
debug: error?.stack,
},
code: 0,
},
})
const writeDataToSocket = (x: SocketResponse) =>
new Promise((resolve) => s.write(JSON.stringify(x), resolve))
s.on("data", (a) =>
Promise.resolve(a)
.then(logData("dataIn"))
.then(jsonParse)
.then(captureId)
.then((x) => this.dealWithInput(x))
.catch(mapError)
.then(logData("response"))
.then(writeDataToSocket)
.finally(() => void s.end()),
)
})
}
private get effects() {
return this.getDependencies.hostSystem()(this.callbacks)
}
private get system() {
if (!this._system) throw new Error("System not initialized")
return this._system
}
private dealWithInput(input: unknown): MaybePromise<SocketResponse> {
return matches(input)
.when(some(runType, sandboxRunType), async ({ id, params }) => {
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
return system
.execute(this.effects, {
procedure,
input: params.input,
timeout: params.timeout,
})
.then((result) =>
"ok" in result
? {
jsonrpc,
id,
result: result.ok === undefined ? null : result.ok,
}
: {
jsonrpc,
id,
error: {
code: result.err.code,
message: "Package Root Error",
data: { details: result.err.message },
},
},
)
.catch((error) => ({
jsonrpc,
id,
error: {
code: 0,
message: typeof error,
data: { details: "" + error, debug: error?.stack },
},
}))
})
.when(callbackType, async ({ id, params: { callback, args } }) =>
Promise.resolve(this.callbacks.callCallback(callback, args))
.then((result) => ({
jsonrpc,
id,
result,
}))
.catch((error) => ({
jsonrpc,
id,
error: {
code: 0,
message: typeof error,
data: {
details: error?.message ?? String(error),
debug: error?.stack,
},
},
})),
)
.when(exitType, async ({ id }) => {
if (this._system) this._system.exit(this.effects)
delete this._system
delete this._effects
return {
jsonrpc,
id,
result: null,
}
})
.when(initType, async ({ id }) => {
this._system = await this.getDependencies.system()
return {
jsonrpc,
id,
result: null,
}
})
.when(evalType, async ({ id, params }) => {
const result = await new Function(
`return (async () => { return (${params.script}) }).call(this)`,
).call({
listener: this,
require: require,
})
return {
jsonrpc,
id,
result: !["string", "number", "boolean", "null", "object"].includes(
typeof result,
)
? null
: result,
}
})
.when(shape({ id: idType, method: string }), ({ id, method }) => ({
jsonrpc,
id,
error: {
code: -32601,
message: `Method not found`,
data: {
details: method,
},
},
}))
.defaultToLazy(() => {
console.warn(
`Couldn't parse the following input: ${JSON.stringify(input)}`,
)
return {
jsonrpc,
id: (input as any)?.id,
error: {
code: -32602,
message: "invalid params",
data: {
details: JSON.stringify(input),
},
},
}
})
}
}
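Every failure path in the listener is wrapped into a JSON-RPC 2.0 error envelope so the host can always correlate responses by `id`; `-32601` (method not found) and `-32602` (invalid params) follow the JSON-RPC spec. A minimal sketch of the `mapError` shape (the function name `toRpcError` is an assumption, not the listener's API):

```typescript
// JSON-RPC 2.0 error envelope as used by the listener sketched above.
type RpcErrorResponse = {
  jsonrpc: "2.0"
  id: string | number | null
  error: { code: number; message: string; data: { details: string; debug?: string } }
}

function toRpcError(id: string | number | null, error: unknown): RpcErrorResponse {
  const e = error as { message?: string; stack?: string } | null
  return {
    jsonrpc: "2.0",
    id, // echo the request id so the caller can match the response
    error: {
      code: 0,
      message: typeof error,
      data: { details: e?.message ?? String(error), debug: e?.stack },
    },
  }
}

console.log(toRpcError(7, new Error("boom")).error.data.details) // "boom"
```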

View File

@@ -0,0 +1,76 @@
import * as fs from "fs/promises"
import * as cp from "child_process"
import { Overlay, types as T } from "@start9labs/start-sdk"
import { promisify } from "util"
import { DockerProcedure, VolumeId } from "../../../Models/DockerProcedure"
import { Volume } from "./matchVolume"
export const exec = promisify(cp.exec)
export const execFile = promisify(cp.execFile)
export class DockerProcedureContainer {
private constructor(readonly overlay: Overlay) {}
// static async readonlyOf(data: DockerProcedure) {
// return DockerProcedureContainer.of(data, ["-o", "ro"])
// }
static async of(
effects: T.Effects,
data: DockerProcedure,
volumes: { [id: VolumeId]: Volume },
) {
const overlay = await Overlay.of(effects, data.image)
if (data.mounts) {
const mounts = data.mounts
for (const mount in mounts) {
const path = mounts[mount].startsWith("/")
? `${overlay.rootfs}${mounts[mount]}`
: `${overlay.rootfs}/${mounts[mount]}`
await fs.mkdir(path, { recursive: true })
const volumeMount = volumes[mount]
if (volumeMount.type === "data") {
await overlay.mount({ type: "volume", id: mount }, mounts[mount])
} else if (volumeMount.type === "assets") {
await overlay.mount({ type: "assets", id: mount }, mounts[mount])
} else if (volumeMount.type === "certificate") {
const certChain = await effects.getSslCertificate()
const key = await effects.getSslKey()
await fs.writeFile(
`${path}/${volumeMount["interface-id"]}.cert.pem`,
certChain.join("\n"),
)
await fs.writeFile(
`${path}/${volumeMount["interface-id"]}.key.pem`,
key,
)
} else if (volumeMount.type === "pointer") {
await effects.mount({
location: path,
target: {
packageId: volumeMount["package-id"],
path: volumeMount.path,
readonly: volumeMount.readonly,
volumeId: volumeMount["volume-id"],
},
})
} else if (volumeMount.type === "backup") {
throw new Error("TODO")
}
}
}
return new DockerProcedureContainer(overlay)
}
async exec(commands: string[]) {
try {
return await this.overlay.exec(commands)
} finally {
await this.overlay.destroy()
}
}
async spawn(commands: string[]): Promise<cp.ChildProcessWithoutNullStreams> {
return await this.overlay.spawn(commands)
}
}

View File

@@ -0,0 +1,150 @@
import { PolyfillEffects } from "./polyfillEffects"
import { DockerProcedureContainer } from "./DockerProcedureContainer"
import { SystemForEmbassy } from "."
import { HostSystemStartOs } from "../../HostSystemStartOs"
import { util, Daemons, types as T } from "@start9labs/start-sdk"
const EMBASSY_HEALTH_INTERVAL = 15 * 1000
const EMBASSY_PROPERTIES_LOOP = 30 * 1000
/**
 * Represents what the main loop is doing: it runs the properties and health
 * checks alongside the docker/js main process, and can clean itself up if
 * need be.
 */
export class MainLoop {
private healthLoops:
| {
name: string
interval: NodeJS.Timeout
}[]
| undefined
private mainEvent:
| Promise<{
daemon: T.DaemonReturned
wait: Promise<unknown>
}>
| undefined
private propertiesEvent: NodeJS.Timeout | undefined
constructor(
readonly system: SystemForEmbassy,
readonly effects: HostSystemStartOs,
readonly runProperties: () => Promise<void>,
) {
this.healthLoops = this.constructHealthLoops()
this.mainEvent = this.constructMainEvent()
this.propertiesEvent = this.constructPropertiesEvent()
}
private async constructMainEvent() {
const { system, effects } = this
const utils = util.createUtils(effects)
const currentCommand: [string, ...string[]] = [
system.manifest.main.entrypoint,
...system.manifest.main.args,
]
await effects.setMainStatus({ status: "running" })
const jsMain = (this.system.moduleCode as any)?.jsMain
const dockerProcedureContainer = await DockerProcedureContainer.of(
effects,
this.system.manifest.main,
this.system.manifest.volumes,
)
if (jsMain) {
const daemons = Daemons.of({
effects,
started: async (_) => {},
healthReceipts: [],
})
throw new Error("todo")
// return {
// daemon,
// wait: daemon.wait().finally(() => {
// this.clean()
// effects.setMainStatus({ status: "stopped" })
// }),
// }
}
const daemon = await utils.runDaemon(
this.system.manifest.main.image,
currentCommand,
{
overlay: dockerProcedureContainer.overlay,
},
)
return {
daemon,
wait: daemon.wait().finally(() => {
this.clean()
effects
.setMainStatus({ status: "stopped" })
.catch((e) => console.error("Could not set the status to stopped", e))
}),
}
}
public async clean(options?: { timeout?: number }) {
const { mainEvent, healthLoops, propertiesEvent } = this
delete this.mainEvent
delete this.healthLoops
delete this.propertiesEvent
if (mainEvent) await (await mainEvent).daemon.term()
clearInterval(propertiesEvent)
if (healthLoops) healthLoops.forEach((x) => clearInterval(x.interval))
}
private constructPropertiesEvent() {
const { runProperties } = this
return setInterval(() => {
runProperties()
}, EMBASSY_PROPERTIES_LOOP)
}
private constructHealthLoops() {
const { manifest } = this.system
const effects = this.effects
const start = Date.now()
return Object.values(manifest["health-checks"]).map((value) => {
const name = value.name
const interval = setInterval(async () => {
const actionProcedure = value
const timeChanged = Date.now() - start
if (actionProcedure.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
actionProcedure,
manifest.volumes,
)
const executed = await container.exec([
actionProcedure.entrypoint,
...actionProcedure.args,
JSON.stringify(timeChanged),
])
const stderr = executed.stderr.toString()
if (stderr)
console.error(`Error running health check ${value.name}: ${stderr}`)
return executed.stdout.toString()
} else {
const moduleCode = await this.system.moduleCode
const method = moduleCode.health?.[value.name]
if (!method)
return console.error(
`Expecting that the js health check ${value.name} exists`,
)
return (await method(
new PolyfillEffects(effects, this.system.manifest),
timeChanged,
).then((x) => {
if ("result" in x) return x.result
if ("error" in x)
return console.error("Error running health check: " + x.error)
return console.error("Error running health check: " + x["error-code"][1])
})) as any
}
}, EMBASSY_HEALTH_INTERVAL)
return { name, interval }
})
}
}
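The `clean()` method above follows a take-then-null pattern: capture the handles, clear the fields first so a re-entrant call is a no-op, then tear down. A self-contained sketch of that pattern (the `Loops` class is illustrative, not part of MainLoop):

```typescript
// Minimal sketch of the clean() idiom: null out ownership before teardown.
class Loops {
  private intervals: ReturnType<typeof setInterval>[] | undefined = [
    setInterval(() => {}, 1000),
    setInterval(() => {}, 1000),
  ]
  clean(): number {
    const intervals = this.intervals
    this.intervals = undefined // a second clean() now does nothing
    intervals?.forEach((i) => clearInterval(i))
    return intervals?.length ?? 0 // how many handles were actually cleared
  }
}

const loops = new Loops()
console.log(loops.clean()) // 2
console.log(loops.clean()) // 0
```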

View File

@@ -0,0 +1,900 @@
import { types as T, util, EmVer } from "@start9labs/start-sdk"
import * as fs from "fs/promises"
import { PolyfillEffects } from "./polyfillEffects"
import { ExecuteResult, System } from "../../../Interfaces/System"
import { matchManifest, Manifest, Procedure } from "./matchManifest"
import * as childProcess from "node:child_process"
import { Volume } from "../../../Models/Volume"
import { DockerProcedure } from "../../../Models/DockerProcedure"
import { DockerProcedureContainer } from "./DockerProcedureContainer"
import { promisify } from "node:util"
import * as U from "./oldEmbassyTypes"
import { MainLoop } from "./MainLoop"
import {
matches,
boolean,
dictionary,
literal,
literals,
object,
string,
unknown,
any,
tuple,
number,
} from "ts-matches"
import { HostSystemStartOs } from "../../HostSystemStartOs"
import { JsonPath, unNestPath } from "../../../Models/JsonPath"
import { HostSystem } from "../../../Interfaces/HostSystem"
type Optional<A> = A | undefined | null
function todo(): never {
throw new Error("Not implemented")
}
const execFile = promisify(childProcess.execFile)
const MANIFEST_LOCATION = "/usr/lib/startos/package/embassyManifest.json"
const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"
const EMBASSY_POINTER_PATH_PREFIX = "/embassyConfig"
export class SystemForEmbassy implements System {
currentRunning: MainLoop | undefined
static async of(manifestLocation: string = MANIFEST_LOCATION) {
const moduleCode = await import(EMBASSY_JS_LOCATION)
.catch((_) => require(EMBASSY_JS_LOCATION))
.catch(async (_) => {
console.error("Could not load the js")
console.error({
exists: await fs.stat(EMBASSY_JS_LOCATION),
})
return {}
})
const manifestData = await fs.readFile(manifestLocation, "utf-8")
return new SystemForEmbassy(
matchManifest.unsafeCast(JSON.parse(manifestData)),
moduleCode,
)
}
constructor(
readonly manifest: Manifest,
readonly moduleCode: Partial<U.ExpectedExports>,
) {}
async execute(
effects: HostSystemStartOs,
options: {
procedure: JsonPath
input: unknown
timeout?: number | undefined
},
): Promise<ExecuteResult> {
return this._execute(effects, options)
.then((x) =>
matches(x)
.when(
object({
result: any,
}),
(x) => ({
ok: x.result,
}),
)
.when(
object({
error: string,
}),
(x) => ({
err: {
code: 0,
message: x.error,
},
}),
)
.when(
object({
"error-code": tuple(number, string),
}),
({ "error-code": [code, message] }) => ({
err: {
code,
message,
},
}),
)
.defaultTo({ ok: x }),
)
.catch((error) => ({
err: {
code: 0,
message: "" + error,
},
}))
}
async exit(effects: HostSystemStartOs): Promise<void> {
if (this.currentRunning) await this.currentRunning.clean()
delete this.currentRunning
}
async _execute(
effects: HostSystemStartOs,
options: {
procedure: JsonPath
input: unknown
timeout?: number | undefined
},
): Promise<unknown> {
const input = options.input
switch (options.procedure) {
case "/backup/create":
return this.createBackup(effects)
case "/backup/restore":
return this.restoreBackup(effects)
case "/config/get":
return this.getConfig(effects)
case "/config/set":
return this.setConfig(effects, input)
case "/actions/metadata":
return todo()
case "/init":
return this.init(effects, string.optional().unsafeCast(input))
case "/uninit":
return this.uninit(effects, string.optional().unsafeCast(input))
case "/main/start":
return this.mainStart(effects)
case "/main/stop":
return this.mainStop(effects)
default: {
const procedures = unNestPath(options.procedure)
switch (true) {
case procedures[1] === "actions" && procedures[3] === "get":
return this.action(effects, procedures[2], input)
case procedures[1] === "actions" && procedures[3] === "run":
return this.action(effects, procedures[2], input)
case procedures[1] === "dependencies" && procedures[3] === "query":
return this.dependenciesAutoconfig(effects, procedures[2], input)
case procedures[1] === "dependencies" && procedures[3] === "update":
return this.dependenciesAutoconfig(effects, procedures[2], input)
}
}
}
}
private async init(
effects: HostSystemStartOs,
previousVersion: Optional<string>,
): Promise<void> {
if (previousVersion) await this.migration(effects, previousVersion)
await this.properties(effects)
await effects.setMainStatus({ status: "stopped" })
}
private async uninit(
effects: HostSystemStartOs,
nextVersion: Optional<string>,
): Promise<void> {
// TODO Do a migration down if the version exists
await effects.setMainStatus({ status: "stopped" })
}
private async mainStart(effects: HostSystemStartOs): Promise<void> {
if (this.currentRunning) return
this.currentRunning = new MainLoop(this, effects, () =>
this.properties(effects),
)
}
private async mainStop(
effects: HostSystemStartOs,
options?: { timeout?: number },
): Promise<void> {
const { currentRunning } = this
delete this.currentRunning
if (currentRunning) {
await currentRunning.clean({
timeout: options?.timeout || this.manifest.main["sigterm-timeout"],
})
}
}
private async createBackup(effects: HostSystemStartOs): Promise<void> {
const backup = this.manifest.backup.create
if (backup.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
backup,
this.manifest.volumes,
)
await container.exec([backup.entrypoint, ...backup.args])
} else {
const moduleCode = await this.moduleCode
await moduleCode.createBackup?.(
new PolyfillEffects(effects, this.manifest),
)
}
}
private async restoreBackup(effects: HostSystemStartOs): Promise<void> {
const restoreBackup = this.manifest.backup.restore
if (restoreBackup.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
restoreBackup,
this.manifest.volumes,
)
await container.exec([restoreBackup.entrypoint, ...restoreBackup.args])
} else {
const moduleCode = await this.moduleCode
await moduleCode.restoreBackup?.(
new PolyfillEffects(effects, this.manifest),
)
}
}
private async getConfig(effects: HostSystemStartOs): Promise<T.ConfigRes> {
return this.getConfigUncleaned(effects).then(removePointers)
}
private async getConfigUncleaned(
effects: HostSystemStartOs,
): Promise<T.ConfigRes> {
const config = this.manifest.config?.get
if (!config) return { spec: {} }
if (config.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
config,
this.manifest.volumes,
)
// TODO: yaml
return JSON.parse(
(
await container.exec([config.entrypoint, ...config.args])
).stdout.toString(),
)
} else {
const moduleCode = await this.moduleCode
const method = moduleCode.getConfig
if (!method) throw new Error("Expecting that the method getConfig exists")
return (await method(new PolyfillEffects(effects, this.manifest)).then(
(x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
},
)) as any
}
}
private async setConfig(
effects: HostSystemStartOs,
newConfigWithoutPointers: unknown,
): Promise<T.SetResult> {
const newConfig = structuredClone(newConfigWithoutPointers)
await updateConfig(
effects,
await this.getConfigUncleaned(effects).then((x) => x.spec),
newConfig,
)
const setConfigValue = this.manifest.config?.set
if (!setConfigValue) return { signal: "SIGTERM", "depends-on": {} }
if (setConfigValue.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
setConfigValue,
this.manifest.volumes,
)
return JSON.parse(
(
await container.exec([
setConfigValue.entrypoint,
...setConfigValue.args,
JSON.stringify(newConfig),
])
).stdout.toString(),
)
} else if (setConfigValue.type === "script") {
const moduleCode = await this.moduleCode
const method = moduleCode.setConfig
if (!method) throw new Error("Expecting that the method setConfig exists")
return await method(
new PolyfillEffects(effects, this.manifest),
newConfig as U.Config,
).then((x): T.SetResult => {
if ("result" in x)
return {
"depends-on": x.result["depends-on"],
signal: x.result.signal === "SIGEMT" ? "SIGTERM" : x.result.signal,
}
if ("error" in x) throw new Error("Error setting config: " + x.error)
throw new Error("Error setting config: " + x["error-code"][1])
})
} else {
return {
"depends-on": {},
signal: "SIGTERM",
}
}
}
private async migration(
effects: HostSystemStartOs,
fromVersion: string,
): Promise<T.MigrationRes> {
const fromEmver = EmVer.from(fromVersion)
const currentEmver = EmVer.from(this.manifest.version)
if (!this.manifest.migrations) return { configured: true }
const fromMigration = Object.entries(this.manifest.migrations.from)
.map(([version, procedure]) => [EmVer.from(version), procedure] as const)
.find(
([versionEmver, procedure]) =>
versionEmver.greaterThan(fromEmver) &&
versionEmver.lessThanOrEqual(currentEmver),
)
const toMigration = Object.entries(this.manifest.migrations.to)
.map(([version, procedure]) => [EmVer.from(version), procedure] as const)
.find(
([versionEmver, procedure]) =>
versionEmver.greaterThan(fromEmver) &&
versionEmver.lessThanOrEqual(currentEmver),
)
// prettier-ignore
const migration = (
fromEmver.greaterThan(currentEmver) ? [toMigration, fromMigration] :
[fromMigration, toMigration]).filter(Boolean)[0]
if (migration) {
const [version, procedure] = migration
if (procedure.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
procedure,
this.manifest.volumes,
)
return JSON.parse(
(
await container.exec([
procedure.entrypoint,
...procedure.args,
JSON.stringify(fromVersion),
])
).stdout.toString(),
)
} else if (procedure.type === "script") {
const moduleCode = await this.moduleCode
const method = moduleCode.migration
if (!method)
throw new Error("Expecting that the method migration exists")
return (await method(
new PolyfillEffects(effects, this.manifest),
fromVersion as string,
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})) as any
}
}
return { configured: true }
}
private async properties(effects: HostSystemStartOs): Promise<undefined> {
// TODO BLU-J set the properties every so often
const setConfigValue = this.manifest.properties
if (!setConfigValue) return
if (setConfigValue.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
setConfigValue,
this.manifest.volumes,
)
return JSON.parse(
(
await container.exec([
setConfigValue.entrypoint,
...setConfigValue.args,
])
).stdout.toString(),
)
} else if (setConfigValue.type === "script") {
const moduleCode = await this.moduleCode
const method = moduleCode.properties
if (!method)
throw new Error("Expecting that the method properties exists")
await method(new PolyfillEffects(effects, this.manifest)).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})
}
}
private async health(
effects: HostSystemStartOs,
healthId: string,
timeSinceStarted: unknown,
): Promise<void> {
const healthProcedure = this.manifest["health-checks"][healthId]
if (!healthProcedure) return
if (healthProcedure.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
healthProcedure,
this.manifest.volumes,
)
return JSON.parse(
(
await container.exec([
healthProcedure.entrypoint,
...healthProcedure.args,
JSON.stringify(timeSinceStarted),
])
).stdout.toString(),
)
} else if (healthProcedure.type === "script") {
const moduleCode = await this.moduleCode
const method = moduleCode.health?.[healthId]
if (!method) throw new Error("Expecting that the method health exists")
await method(
new PolyfillEffects(effects, this.manifest),
Number(timeSinceStarted),
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})
}
}
private async action(
effects: HostSystemStartOs,
actionId: string,
formData: unknown,
): Promise<T.ActionResult> {
const actionProcedure = this.manifest.actions?.[actionId]?.implementation
if (!actionProcedure) return { message: "Action not found", value: null }
if (actionProcedure.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
actionProcedure,
this.manifest.volumes,
)
return JSON.parse(
(
await container.exec([
actionProcedure.entrypoint,
...actionProcedure.args,
JSON.stringify(formData),
])
).stdout.toString(),
)
} else {
const moduleCode = await this.moduleCode
const method = moduleCode.action?.[actionId]
if (!method) throw new Error("Expecting that the method action exists")
return (await method(
new PolyfillEffects(effects, this.manifest),
formData as any,
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})) as any
}
}
private async dependenciesCheck(
effects: HostSystemStartOs,
id: string,
oldConfig: unknown,
): Promise<object> {
const actionProcedure = this.manifest.dependencies?.[id]?.config?.check
if (!actionProcedure) return { message: "Dependency config check not found", value: null }
if (actionProcedure.type === "docker") {
const container = await DockerProcedureContainer.of(
effects,
actionProcedure,
this.manifest.volumes,
)
return JSON.parse(
(
await container.exec([
actionProcedure.entrypoint,
...actionProcedure.args,
JSON.stringify(oldConfig),
])
).stdout.toString(),
)
} else if (actionProcedure.type === "script") {
const moduleCode = await this.moduleCode
const method = moduleCode.dependencies?.[id]?.check
if (!method)
throw new Error(
`Expecting that the method dependency check ${id} exists`,
)
return (await method(
new PolyfillEffects(effects, this.manifest),
oldConfig as any,
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})) as any
} else {
return {}
}
}
private async dependenciesAutoconfig(
effects: HostSystemStartOs,
id: string,
oldConfig: unknown,
): Promise<void> {
const moduleCode = await this.moduleCode
const method = moduleCode.dependencies?.[id]?.autoConfigure
if (!method)
throw new Error(
`Expecting that the method dependency autoConfigure ${id} exists`,
)
return (await method(
new PolyfillEffects(effects, this.manifest),
oldConfig as any,
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})) as any
}
// private async sandbox(
// effects: HostSystemStartOs,
// options: {
// procedure:
// | "/createBackup"
// | "/restoreBackup"
// | "/getConfig"
// | "/setConfig"
// | "migration"
// | "/properties"
// | `/action/${string}`
// | `/dependencies/${string}/check`
// | `/dependencies/${string}/autoConfigure`
// input: unknown
// timeout?: number | undefined
// },
// ): Promise<unknown> {
// const input = options.input
// switch (options.procedure) {
// case "/createBackup":
// return this.roCreateBackup(effects)
// case "/restoreBackup":
// return this.roRestoreBackup(effects)
// case "/getConfig":
// return this.roGetConfig(effects)
// case "/setConfig":
// return this.roSetConfig(effects, input)
// case "migration":
// return this.roMigration(effects, input)
// case "/properties":
// return this.roProperties(effects)
// default:
// const procedure = options.procedure.split("/")
// switch (true) {
// case options.procedure.startsWith("/action/"):
// return this.roAction(effects, procedure[2], input)
// case options.procedure.startsWith("/dependencies/") &&
// procedure[3] === "check":
// return this.roDependenciesCheck(effects, procedure[2], input)
// case options.procedure.startsWith("/dependencies/") &&
// procedure[3] === "autoConfigure":
// return this.roDependenciesAutoconfig(effects, procedure[2], input)
// }
// }
// }
// private async roCreateBackup(effects: HostSystemStartOs): Promise<void> {
// const backup = this.manifest.backup.create
// if (backup.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(backup)
// await container.exec([backup.entrypoint, ...backup.args])
// } else {
// const moduleCode = await this.moduleCode
// await moduleCode.createBackup?.(new PolyfillEffects(effects))
// }
// }
// private async roRestoreBackup(effects: HostSystemStartOs): Promise<void> {
// const restoreBackup = this.manifest.backup.restore
// if (restoreBackup.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(restoreBackup)
// await container.exec([restoreBackup.entrypoint, ...restoreBackup.args])
// } else {
// const moduleCode = await this.moduleCode
// await moduleCode.restoreBackup?.(new PolyfillEffects(effects))
// }
// }
// private async roGetConfig(effects: HostSystemStartOs): Promise<T.ConfigRes> {
// const config = this.manifest.config?.get
// if (!config) return { spec: {} }
// if (config.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(config)
// return JSON.parse(
// (await container.exec([config.entrypoint, ...config.args])).stdout,
// )
// } else {
// const moduleCode = await this.moduleCode
// const method = moduleCode.getConfig
// if (!method) throw new Error("Expecting that the method getConfig exists")
// return (await method(new PolyfillEffects(effects)).then((x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// })) as any
// }
// }
// private async roSetConfig(
// effects: HostSystemStartOs,
// newConfig: unknown,
// ): Promise<T.SetResult> {
// const setConfigValue = this.manifest.config?.set
// if (!setConfigValue) return { signal: "SIGTERM", "depends-on": {} }
// if (setConfigValue.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(
// setConfigValue,
// )
// return JSON.parse(
// (
// await container.exec([
// setConfigValue.entrypoint,
// ...setConfigValue.args,
// JSON.stringify(newConfig),
// ])
// ).stdout,
// )
// } else {
// const moduleCode = await this.moduleCode
// const method = moduleCode.setConfig
// if (!method) throw new Error("Expecting that the method setConfig exists")
// return await method(
// new PolyfillEffects(effects),
// newConfig as U.Config,
// ).then((x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// })
// }
// }
// private async roMigration(
// effects: HostSystemStartOs,
// fromVersion: unknown,
// ): Promise<T.MigrationRes> {
// throw new Error("Migrations should never be ran in the sandbox mode")
// }
// private async roProperties(effects: HostSystemStartOs): Promise<unknown> {
// const setConfigValue = this.manifest.properties
// if (!setConfigValue) return {}
// if (setConfigValue.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(
// setConfigValue,
// )
// return JSON.parse(
// (
// await container.exec([
// setConfigValue.entrypoint,
// ...setConfigValue.args,
// ])
// ).stdout,
// )
// } else {
// const moduleCode = await this.moduleCode
// const method = moduleCode.properties
// if (!method)
// throw new Error("Expecting that the method properties exists")
// return await method(new PolyfillEffects(effects)).then((x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// })
// }
// }
// private async roHealth(
// effects: HostSystemStartOs,
// healthId: string,
// timeSinceStarted: unknown,
// ): Promise<void> {
// const healthProcedure = this.manifest["health-checks"][healthId]
// if (!healthProcedure) return
// if (healthProcedure.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(
// healthProcedure,
// )
// return JSON.parse(
// (
// await container.exec([
// healthProcedure.entrypoint,
// ...healthProcedure.args,
// JSON.stringify(timeSinceStarted),
// ])
// ).stdout,
// )
// } else {
// const moduleCode = await this.moduleCode
// const method = moduleCode.health?.[healthId]
// if (!method) throw new Error("Expecting that the method health exists")
// await method(new PolyfillEffects(effects), Number(timeSinceStarted)).then(
// (x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// },
// )
// }
// }
// private async roAction(
// effects: HostSystemStartOs,
// actionId: string,
// formData: unknown,
// ): Promise<T.ActionResult> {
// const actionProcedure = this.manifest.actions?.[actionId]?.implementation
// if (!actionProcedure) return { message: "Action not found", value: null }
// if (actionProcedure.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(
// actionProcedure,
// )
// return JSON.parse(
// (
// await container.exec([
// actionProcedure.entrypoint,
// ...actionProcedure.args,
// JSON.stringify(formData),
// ])
// ).stdout,
// )
// } else {
// const moduleCode = await this.moduleCode
// const method = moduleCode.action?.[actionId]
// if (!method) throw new Error("Expecting that the method action exists")
// return (await method(new PolyfillEffects(effects), formData as any).then(
// (x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// },
// )) as any
// }
// }
// private async roDependenciesCheck(
// effects: HostSystemStartOs,
// id: string,
// oldConfig: unknown,
// ): Promise<object> {
// const actionProcedure = this.manifest.dependencies?.[id]?.config?.check
// if (!actionProcedure) return { message: "Action not found", value: null }
// if (actionProcedure.type === "docker") {
// const container = await DockerProcedureContainer.readonlyOf(
// actionProcedure,
// )
// return JSON.parse(
// (
// await container.exec([
// actionProcedure.entrypoint,
// ...actionProcedure.args,
// JSON.stringify(oldConfig),
// ])
// ).stdout,
// )
// } else {
// const moduleCode = await this.moduleCode
// const method = moduleCode.dependencies?.[id]?.check
// if (!method)
// throw new Error(
// `Expecting that the method dependency check ${id} exists`,
// )
// return (await method(new PolyfillEffects(effects), oldConfig as any).then(
// (x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// },
// )) as any
// }
// }
// private async roDependenciesAutoconfig(
// effects: HostSystemStartOs,
// id: string,
// oldConfig: unknown,
// ): Promise<void> {
// const moduleCode = await this.moduleCode
// const method = moduleCode.dependencies?.[id]?.autoConfigure
// if (!method)
// throw new Error(
// `Expecting that the method dependency autoConfigure ${id} exists`,
// )
// return (await method(new PolyfillEffects(effects), oldConfig as any).then(
// (x) => {
// if ("result" in x) return x.result
// if ("error" in x) throw new Error("Error getting config: " + x.error)
// throw new Error("Error getting config: " + x["error-code"][1])
// },
// )) as any
// }
}
async function removePointers(value: T.ConfigRes): Promise<T.ConfigRes> {
const startingSpec = structuredClone(value.spec)
const spec = cleanSpecOfPointers(startingSpec)
return { ...value, spec }
}
const matchPointer = object({
type: literal("pointer"),
})
const matchPointerPackage = object({
subtype: literal("package"),
target: literals("tor-key", "tor-address", "lan-address"),
"package-id": string,
interface: string,
})
const matchPointerConfig = object({
subtype: literal("package"),
target: literals("config"),
"package-id": string,
selector: string,
multi: boolean,
})
const matchSpec = object({
spec: object,
})
const matchVariants = object({ variants: dictionary([string, unknown]) })
function cleanSpecOfPointers<T>(mutSpec: T): T {
if (!object.test(mutSpec)) return mutSpec
for (const key in mutSpec) {
const value = mutSpec[key]
if (matchSpec.test(value)) value.spec = cleanSpecOfPointers(value.spec)
if (matchVariants.test(value))
value.variants = Object.fromEntries(
Object.entries(value.variants).map(([key, value]) => [
key,
cleanSpecOfPointers(value),
]),
)
if (!matchPointer.test(value)) continue
delete mutSpec[key]
}
return mutSpec
}
async function updateConfig(
effects: HostSystemStartOs,
spec: unknown,
mutConfigValue: unknown,
) {
if (!dictionary([string, unknown]).test(spec)) return
if (!dictionary([string, unknown]).test(mutConfigValue)) return
for (const key in spec) {
const specValue = spec[key]
const newConfigValue = mutConfigValue[key]
if (matchSpec.test(specValue)) {
const updateObject = { spec: null }
await updateConfig(effects, { spec: specValue.spec }, updateObject)
mutConfigValue[key] = updateObject.spec
}
if (
matchVariants.test(specValue) &&
object({ tag: object({ id: string }) }).test(newConfigValue) &&
newConfigValue.tag.id in specValue.variants
) {
// Not going to do anything on the variants...
}
if (!matchPointer.test(specValue)) continue
if (matchPointerConfig.test(specValue)) {
const configValue = (await effects.store.get({
packageId: specValue["package-id"],
callback() {},
path: `${EMBASSY_POINTER_PATH_PREFIX}${specValue.selector}` as any,
})) as any
mutConfigValue[key] = configValue
}
if (matchPointerPackage.test(specValue)) {
mutConfigValue[key] = await effects.embassyGetInterface({
target: specValue.target,
packageId: specValue["package-id"],
interface: specValue["interface"],
})
}
}
}
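The two helpers above work as a pair: `cleanSpecOfPointers` strips `pointer` entries out of a config spec before it reaches the UI, and `updateConfig` re-resolves those pointers against other packages when the config is set. A minimal standalone sketch of the stripping pass (illustrative names, without the real ts-matches matchers):

```typescript
// Simplified sketch of the pointer-stripping walk. Any entry whose value
// carries `type: "pointer"` is removed; nested `spec` objects are cleaned
// recursively, mirroring cleanSpecOfPointers above.
type AnySpec = Record<string, any>

function stripPointers(spec: AnySpec): AnySpec {
  for (const key of Object.keys(spec)) {
    const value = spec[key]
    if (value && typeof value === "object") {
      if (value.type === "pointer") {
        // pointers are resolved at set-config time, never shown in the form
        delete spec[key]
        continue
      }
      if (value.spec && typeof value.spec === "object") {
        value.spec = stripPointers(value.spec)
      }
    }
  }
  return spec
}
```

For example, `stripPointers({ a: { type: "string" }, b: { type: "pointer" } })` keeps only `a`.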


@@ -0,0 +1,119 @@
import {
object,
literal,
string,
array,
boolean,
dictionary,
literals,
number,
unknown,
some,
every,
} from "ts-matches"
import { matchVolume } from "./matchVolume"
import { matchDockerProcedure } from "../../../Models/DockerProcedure"
const matchJsProcedure = object(
{
type: literal("script"),
args: array(unknown),
},
["args"],
{
args: [],
},
)
const matchProcedure = some(matchDockerProcedure, matchJsProcedure)
export type Procedure = typeof matchProcedure._TYPE
const matchAction = object(
{
name: string,
description: string,
warning: string,
implementation: matchProcedure,
"allowed-statuses": array(literals("running", "stopped")),
"input-spec": unknown,
},
["warning", "input-spec", "input-spec"],
)
export const matchManifest = object(
{
id: string,
version: string,
main: matchDockerProcedure,
assets: object(
{
assets: string,
scripts: string,
},
["assets", "scripts"],
),
"health-checks": dictionary([
string,
every(
matchProcedure,
object({
name: string,
}),
),
]),
config: object({
get: matchProcedure,
set: matchProcedure,
}),
properties: matchProcedure,
volumes: dictionary([string, matchVolume]),
interfaces: dictionary([
string,
object({
name: string,
"tor-config": object({}),
"lan-config": object({}),
ui: boolean,
protocols: array(string),
}),
]),
backup: object({
create: matchProcedure,
restore: matchProcedure,
}),
migrations: object({
to: dictionary([string, matchProcedure]),
from: dictionary([string, matchProcedure]),
}),
dependencies: dictionary([
string,
object(
{
version: string,
requirement: some(
object({
type: literal("opt-in"),
how: string,
}),
object({
type: literal("opt-out"),
how: string,
}),
object({
type: literal("required"),
}),
),
description: string,
config: object({
check: matchProcedure,
"auto-configure": matchProcedure,
}),
},
["description", "config"],
),
]),
actions: dictionary([string, matchAction]),
},
["config", "actions", "properties", "migrations", "dependencies"],
)
export type Manifest = typeof matchManifest._TYPE
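Every procedure in the manifest is a `Procedure` union, and the runtime above always dispatches on its `type` discriminant (docker vs script). A small self-contained sketch of that narrowing, with illustrative local stand-ins for the matched shapes:

```typescript
// Stand-in shapes for the matchDockerProcedure / matchJsProcedure union;
// the real matchers carry more fields (image, mounts, io-format, ...).
type DockerProcedure = { type: "docker"; entrypoint: string; args: string[] }
type JsProcedure = { type: "script"; args?: unknown[] }
type Procedure = DockerProcedure | JsProcedure

function describe(p: Procedure): string {
  // TypeScript narrows `p` in each branch based on the `type` discriminant,
  // the same pattern the runtime uses to pick exec-in-container vs module call
  if (p.type === "docker") return `exec ${p.entrypoint} ${p.args.join(" ")}`
  return `run script with ${(p.args ?? []).length} args`
}
```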


@@ -0,0 +1,35 @@
import { object, literal, string, boolean, some } from "ts-matches"
const matchDataVolume = object(
{
type: literal("data"),
readonly: boolean,
},
["readonly"],
)
const matchAssetVolume = object({
type: literal("assets"),
})
const matchPointerVolume = object({
type: literal("pointer"),
"package-id": string,
"volume-id": string,
path: string,
readonly: boolean,
})
const matchCertificateVolume = object({
type: literal("certificate"),
"interface-id": string,
})
const matchBackupVolume = object({
type: literal("backup"),
readonly: boolean,
})
export const matchVolume = some(
matchDataVolume,
matchAssetVolume,
matchPointerVolume,
matchCertificateVolume,
matchBackupVolume,
)
export type Volume = typeof matchVolume._TYPE


@@ -0,0 +1,482 @@
// deno-lint-ignore no-namespace
export type ExpectedExports = {
version: 2
/** Set configuration is called after we have modified and saved the configuration in the embassy ui. Use this to make a file for the docker to read from for configuration. */
setConfig: (effects: Effects, input: Config) => Promise<ResultType<SetResult>>
/** Get configuration returns a shape that describes the format that the embassy ui will generate, and later send to the set config */
getConfig: (effects: Effects) => Promise<ResultType<ConfigRes>>
/** These are how we make sure our dependency configurations are valid and, if not, how to fix them. */
dependencies: Dependencies
/** For backing up service data through the embassyOS UI */
createBackup: (effects: Effects) => Promise<ResultType<unknown>>
/** For restoring service data that was previously backed up using the embassyOS UI create backup flow. Backup restores are also triggered via the embassyOS UI, or doing a system restore flow during setup. */
restoreBackup: (effects: Effects) => Promise<ResultType<unknown>>
/** Properties are used to get values from the docker container, like a username + password, or what ports we are hosting from */
properties: (effects: Effects) => Promise<ResultType<Properties>>
health: {
/** Should be the health check id */
[id: string]: (
effects: Effects,
dateMs: number,
) => Promise<ResultType<unknown>>
}
migration: (
effects: Effects,
version: string,
...args: unknown[]
) => Promise<ResultType<MigrationRes>>
action: {
[id: string]: (
effects: Effects,
config?: Config,
) => Promise<ResultType<ActionResult>>
}
/**
* This is the entrypoint for the main container. Used to start up something like the service that the
* package represents, like running a bitcoind in a bitcoind-wrapper.
*/
main: (effects: Effects) => Promise<ResultType<unknown>>
}
/** Used to reach out from the pure js runtime */
export type Effects = {
/** Usable when not sandboxed */
writeFile(input: {
path: string
volumeId: string
toWrite: string
}): Promise<void>
readFile(input: { volumeId: string; path: string }): Promise<string>
metadata(input: { volumeId: string; path: string }): Promise<Metadata>
/** Create a directory. Usable when not sandboxed */
createDir(input: { volumeId: string; path: string }): Promise<string>
readDir(input: { volumeId: string; path: string }): Promise<string[]>
/** Remove a directory. Usable when not sandboxed */
removeDir(input: { volumeId: string; path: string }): Promise<string>
removeFile(input: { volumeId: string; path: string }): Promise<void>
/** Write a json file into an object. Usable when not sandboxed */
writeJsonFile(input: {
volumeId: string
path: string
toWrite: Record<string, unknown>
}): Promise<void>
/** Read a json file into an object */
readJsonFile(input: {
volumeId: string
path: string
}): Promise<Record<string, unknown>>
runCommand(input: {
command: string
args?: string[]
timeoutMillis?: number
}): Promise<ResultType<string>>
runDaemon(input: { command: string; args?: string[] }): {
wait(): Promise<ResultType<string>>
term(): Promise<void>
}
chown(input: { volumeId: string; path: string; uid: string }): Promise<null>
chmod(input: { volumeId: string; path: string; mode: string }): Promise<null>
sleep(timeMs: number): Promise<null>
/** Log at the trace level */
trace(whatToPrint: string): void
/** Log at the warn level */
warn(whatToPrint: string): void
/** Log at the error level */
error(whatToPrint: string): void
/** Log at the debug level */
debug(whatToPrint: string): void
/** Log at the info level */
info(whatToPrint: string): void
/** Sandbox mode lets us read but not write */
is_sandboxed(): boolean
exists(input: { volumeId: string; path: string }): Promise<boolean>
bindLocal(options: {
internalPort: number
name: string
externalPort: number
}): Promise<string>
bindTor(options: {
internalPort: number
name: string
externalPort: number
}): Promise<string>
fetch(
url: string,
options?: {
method?: "GET" | "POST" | "PUT" | "DELETE" | "HEAD" | "PATCH"
headers?: Record<string, string>
body?: string
},
): Promise<{
method: string
ok: boolean
status: number
headers: Record<string, string>
body?: string | null
/// Returns the body as a string
text(): Promise<string>
/// Returns the body as a json
json(): Promise<unknown>
}>
runRsync(options: {
srcVolume: string
dstVolume: string
srcPath: string
dstPath: string
// rsync options: https://linux.die.net/man/1/rsync
options: BackupOptions
}): {
id: () => Promise<string>
wait: () => Promise<null>
progress: () => Promise<number>
}
}
// rsync options: https://linux.die.net/man/1/rsync
export type BackupOptions = {
delete: boolean
force: boolean
ignoreExisting: boolean
exclude: string[]
}
export type Metadata = {
fileType: string
isDir: boolean
isFile: boolean
isSymlink: boolean
len: number
modified?: Date
accessed?: Date
created?: Date
readonly: boolean
uid: number
gid: number
mode: number
}
export type MigrationRes = {
configured: boolean
}
export type ActionResult = {
version: "0"
message: string
value?: string
copyable: boolean
qr: boolean
}
export type ConfigRes = {
/** This should be the previous config, so that during set config we start from the previous values */
config?: Config
/** Shape that is describing the form in the ui */
spec: ConfigSpec
}
export type Config = {
[propertyName: string]: unknown
}
export type ConfigSpec = {
/** Given a config value, define what it should render with the following spec */
[configValue: string]: ValueSpecAny
}
export type WithDefault<T, Default> = T & {
default: Default
}
export type WithNullableDefault<T, Default> = T & {
default?: Default
}
export type WithDescription<T> = T & {
description?: string
name: string
warning?: string
}
export type WithOptionalDescription<T> = T & {
/** @deprecated - optional only for backwards compatibility */
description?: string
/** @deprecated - optional only for backwards compatibility */
name?: string
warning?: string
}
export type ListSpec<T> = {
spec: T
range: string
}
export type Tag<T extends string, V> = V & {
type: T
}
export type Subtype<T extends string, V> = V & {
subtype: T
}
export type Target<T extends string, V> = V & {
target: T
}
export type UniqueBy =
| {
any: UniqueBy[]
}
| string
| null
export type WithNullable<T> = T & {
nullable: boolean
}
export type DefaultString =
| string
| {
/** The chars available for the random generation */
charset?: string
/** Length of the generated string */
len: number
}
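The recipe form of `DefaultString` describes a random default rather than a fixed one. A hedged sketch of how such a recipe could be realized (the real generator and its default charset live in the runtime, not shown here):

```typescript
// Hypothetical generator for the DefaultString recipe form: returns the
// literal when given a string, otherwise picks `len` characters uniformly
// from `charset` (falling back to alphanumerics as an assumed default).
function generateDefault(spec: string | { charset?: string; len: number }): string {
  if (typeof spec === "string") return spec
  const charset =
    spec.charset ??
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
  let out = ""
  for (let i = 0; i < spec.len; i++) {
    out += charset[Math.floor(Math.random() * charset.length)]
  }
  return out
}
```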
export type ValueSpecString = // deno-lint-ignore ban-types
(
| {}
| {
pattern: string
"pattern-description": string
}
) & {
copyable?: boolean
masked?: boolean
placeholder?: string
}
export type ValueSpecNumber = {
/** Something like [3,6] or [0, *) */
range?: string
integral?: boolean
/** Used as a description of the units */
units?: string
placeholder?: number
}
export type ValueSpecBoolean = Record<string, unknown>
export type ValueSpecAny =
| Tag<"boolean", WithDescription<WithDefault<ValueSpecBoolean, boolean>>>
| Tag<
"string",
WithDescription<
WithNullableDefault<WithNullable<ValueSpecString>, DefaultString>
>
>
| Tag<
"number",
WithDescription<
WithNullableDefault<WithNullable<ValueSpecNumber>, number>
>
>
| Tag<
"enum",
WithDescription<
WithDefault<
{
values: readonly string[] | string[]
"value-names": {
[key: string]: string
}
},
string
>
>
>
| Tag<"list", ValueSpecList>
| Tag<"object", WithDescription<WithNullableDefault<ValueSpecObject, Config>>>
| Tag<"union", WithOptionalDescription<WithDefault<ValueSpecUnion, string>>>
| Tag<
"pointer",
WithDescription<
| Subtype<
"package",
| Target<
"tor-key",
{
"package-id": string
interface: string
}
>
| Target<
"tor-address",
{
"package-id": string
interface: string
}
>
| Target<
"lan-address",
{
"package-id": string
interface: string
}
>
| Target<
"config",
{
"package-id": string
selector: string
multi: boolean
}
>
>
| Subtype<"system", Record<string, unknown>>
>
>
export type ValueSpecUnion = {
/** The tag for the specification, for tagged unions */
tag: {
id: string
name: string
description?: string
"variant-names": {
[key: string]: string
}
}
/** The possible union variants */
variants: {
[key: string]: ConfigSpec
}
"display-as"?: string
"unique-by"?: UniqueBy
}
export type ValueSpecObject = {
spec: ConfigSpec
"display-as"?: string
"unique-by"?: UniqueBy
}
export type ValueSpecList =
| Subtype<
"boolean",
WithDescription<WithDefault<ListSpec<ValueSpecBoolean>, boolean[]>>
>
| Subtype<
"string",
WithDescription<WithDefault<ListSpec<ValueSpecString>, string[]>>
>
| Subtype<
"number",
WithDescription<WithDefault<ListSpec<ValueSpecNumber>, number[]>>
>
| Subtype<
"enum",
WithDescription<WithDefault<ListSpec<ValueSpecEnum>, string[]>>
>
| Subtype<
"object",
WithDescription<
WithNullableDefault<
ListSpec<ValueSpecObject>,
Record<string, unknown>[]
>
>
>
| Subtype<
"union",
WithDescription<WithDefault<ListSpec<ValueSpecUnion>, string[]>>
>
export type ValueSpecEnum = {
values: string[]
"value-names": { [key: string]: string }
}
export type SetResult = {
/** These are the unix process signals */
signal:
| "SIGTERM"
| "SIGHUP"
| "SIGINT"
| "SIGQUIT"
| "SIGILL"
| "SIGTRAP"
| "SIGABRT"
| "SIGBUS"
| "SIGFPE"
| "SIGKILL"
| "SIGUSR1"
| "SIGSEGV"
| "SIGUSR2"
| "SIGPIPE"
| "SIGALRM"
| "SIGSTKFLT"
| "SIGCHLD"
| "SIGCONT"
| "SIGSTOP"
| "SIGTSTP"
| "SIGTTIN"
| "SIGTTOU"
| "SIGURG"
| "SIGXCPU"
| "SIGXFSZ"
| "SIGVTALRM"
| "SIGPROF"
| "SIGWINCH"
| "SIGIO"
| "SIGPWR"
| "SIGSYS"
| "SIGEMT"
| "SIGINFO"
"depends-on": DependsOn
}
export type DependsOn = {
[packageId: string]: string[]
}
export type KnownError =
| { error: string }
| {
"error-code": [number, string] | readonly [number, string]
}
export type ResultType<T> = KnownError | { result: T }
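`ResultType` is unwrapped with the same three-branch `"result" in x` / `"error" in x` / `"error-code"` pattern at every call site in the runtime above. That pattern can be factored into one helper (a sketch; the runtime inlines it instead):

```typescript
// Sketch of the unwrap used throughout the runtime: return the result,
// or throw with the carried error text / error-code message.
function unwrapResult<T>(
  x:
    | { result: T }
    | { error: string }
    | { "error-code": readonly [number, string] },
  context: string,
): T {
  if ("result" in x) return x.result
  if ("error" in x) throw new Error(`${context}: ${x.error}`)
  throw new Error(`${context}: ${x["error-code"][1]}`)
}
```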
export type PackagePropertiesV2 = {
[name: string]: PackagePropertyObject | PackagePropertyString
}
export type PackagePropertyString = {
type: "string"
description?: string
value: string
/** Lets the ui render a copy button for this field */
copyable?: boolean
/** Let the ui create a qr for this field */
qr?: boolean
/** Hide the value unless the mask is toggled off for this field */
masked?: boolean
}
export type PackagePropertyObject = {
value: PackagePropertiesV2
type: "object"
description: string
}
export type Properties = {
version: 2
data: PackagePropertiesV2
}
export type Dependencies = {
/** The id of the package; should be the same as in the manifest */
[id: string]: {
/** Checks are called to make sure that our dependency is in the correct shape. If a known error is returned we know that the dependency needs modification */
check(effects: Effects, input: Config): Promise<ResultType<void | null>>
/** This is called after we know that the dependency package needs a new configuration, this would be a transform for defaults */
autoConfigure(effects: Effects, input: Config): Promise<ResultType<Config>>
}
}
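A `Dependencies` entry pairs a `check` (is the dependency's config acceptable?) with an `autoConfigure` (produce a corrected copy). A minimal illustrative pair, with the caveats that the real hooks also take `effects` as their first argument and that `rpcEnabled` is a made-up field, not any real package's config:

```typescript
// Hypothetical dependency hooks (real signatures also take `effects`,
// omitted here for brevity). `check` flags a missing setting with a
// KnownError-style object; `autoConfigure` returns a corrected copy.
const exampleDep = {
  async check(
    input: Record<string, unknown>,
  ): Promise<{ result: null } | { error: string }> {
    if (input.rpcEnabled !== true) return { error: "RPC must be enabled" }
    return { result: null }
  },
  async autoConfigure(
    input: Record<string, unknown>,
  ): Promise<{ result: Record<string, unknown> }> {
    return { result: { ...input, rpcEnabled: true } }
  },
}
```

The intended flow: a failed `check` tells the OS the dependency needs modification, and `autoConfigure` supplies the config to apply.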


@@ -0,0 +1,215 @@
import * as fs from "fs/promises"
import * as oet from "./oldEmbassyTypes"
import { Volume } from "../../../Models/Volume"
import * as child_process from "child_process"
import { promisify } from "util"
import { util, Utils } from "@start9labs/start-sdk"
import { Manifest } from "./matchManifest"
import { HostSystemStartOs } from "../../HostSystemStartOs"
import "isomorphic-fetch"
const { createUtils } = util
const execFile = promisify(child_process.execFile)
export class PolyfillEffects implements oet.Effects {
private utils: Utils<any, any>
constructor(
readonly effects: HostSystemStartOs,
private manifest: Manifest,
) {
this.utils = createUtils(effects as any)
}
async writeFile(input: {
path: string
volumeId: string
toWrite: string
}): Promise<void> {
await fs.writeFile(
new Volume(input.volumeId, input.path).path,
input.toWrite,
)
}
async readFile(input: { volumeId: string; path: string }): Promise<string> {
return (
await fs.readFile(new Volume(input.volumeId, input.path).path)
).toString()
}
async metadata(input: {
volumeId: string
path: string
}): Promise<oet.Metadata> {
const stats = await fs.stat(new Volume(input.volumeId, input.path).path)
return {
fileType: stats.isFile() ? "file" : "directory",
gid: stats.gid,
uid: stats.uid,
mode: stats.mode,
isDir: stats.isDirectory(),
isFile: stats.isFile(),
isSymlink: stats.isSymbolicLink(),
len: stats.size,
readonly: (stats.mode & 0o200) > 0,
}
}
async createDir(input: { volumeId: string; path: string }): Promise<string> {
const path = new Volume(input.volumeId, input.path).path
await fs.mkdir(path, { recursive: true })
return path
}
async readDir(input: { volumeId: string; path: string }): Promise<string[]> {
return fs.readdir(new Volume(input.volumeId, input.path).path)
}
async removeDir(input: { volumeId: string; path: string }): Promise<string> {
const path = new Volume(input.volumeId, input.path).path
await fs.rmdir(path, {
recursive: true,
})
return path
}
removeFile(input: { volumeId: string; path: string }): Promise<void> {
return fs.rm(new Volume(input.volumeId, input.path).path)
}
async writeJsonFile(input: {
volumeId: string
path: string
toWrite: Record<string, unknown>
}): Promise<void> {
await fs.writeFile(
new Volume(input.volumeId, input.path).path,
JSON.stringify(input.toWrite),
)
}
async readJsonFile(input: {
volumeId: string
path: string
}): Promise<Record<string, unknown>> {
return JSON.parse(
(
await fs.readFile(new Volume(input.volumeId, input.path).path)
).toString(),
)
}
runCommand({
command,
args,
timeoutMillis,
}: {
command: string
args?: string[] | undefined
timeoutMillis?: number | undefined
}): Promise<oet.ResultType<string>> {
return this.utils
.runCommand(this.manifest.main.image, [command, ...(args || [])], {})
.then((x) => ({
stderr: x.stderr.toString(),
stdout: x.stdout.toString(),
}))
.then((x) => (!!x.stderr ? { error: x.stderr } : { result: x.stdout }))
}
runDaemon(input: { command: string; args?: string[] | undefined }): {
wait(): Promise<oet.ResultType<string>>
term(): Promise<void>
} {
throw new Error("Method not implemented.")
}
chown(input: { volumeId: string; path: string; uid: string }): Promise<null> {
throw new Error("Method not implemented.")
}
chmod(input: {
volumeId: string
path: string
mode: string
}): Promise<null> {
throw new Error("Method not implemented.")
}
sleep(timeMs: number): Promise<null> {
return new Promise((resolve) => setTimeout(resolve, timeMs))
}
trace(whatToPrint: string): void {
console.trace(whatToPrint)
}
warn(whatToPrint: string): void {
console.warn(whatToPrint)
}
error(whatToPrint: string): void {
console.error(whatToPrint)
}
debug(whatToPrint: string): void {
console.debug(whatToPrint)
}
info(whatToPrint: string): void {
console.log(whatToPrint)
}
is_sandboxed(): boolean {
return false
}
exists(input: { volumeId: string; path: string }): Promise<boolean> {
return this.metadata(input)
.then(() => true)
.catch(() => false)
}
bindLocal(options: {
internalPort: number
name: string
externalPort: number
}): Promise<string> {
throw new Error("Method not implemented.")
}
bindTor(options: {
internalPort: number
name: string
externalPort: number
}): Promise<string> {
throw new Error("Method not implemented.")
}
async fetch(
url: string,
options?:
| {
method?:
| "GET"
| "POST"
| "PUT"
| "DELETE"
| "HEAD"
| "PATCH"
| undefined
headers?: Record<string, string> | undefined
body?: string | undefined
}
| undefined,
): Promise<{
method: string
ok: boolean
status: number
headers: Record<string, string>
body?: string | null | undefined
text(): Promise<string>
json(): Promise<unknown>
}> {
const fetched = await fetch(url, options)
// A Response body can only be consumed once, so read it eagerly and reuse it.
const body = await fetched.text()
return {
method: options?.method ?? "GET",
ok: fetched.ok,
status: fetched.status,
headers: Object.fromEntries(fetched.headers.entries()),
body,
text: () => Promise.resolve(body),
json: () => Promise.resolve(JSON.parse(body)),
}
}
runRsync(options: {
srcVolume: string
dstVolume: string
srcPath: string
dstPath: string
options: oet.BackupOptions
}): {
id: () => Promise<string>
wait: () => Promise<null>
progress: () => Promise<number>
} {
throw new Error("Method not implemented.")
}
}


@@ -0,0 +1,150 @@
import { ExecuteResult, System } from "../../Interfaces/System"
import { unNestPath } from "../../Models/JsonPath"
import { string } from "ts-matches"
import { HostSystemStartOs } from "../HostSystemStartOs"
import { Effects } from "../../Models/Effects"
const LOCATION = "/usr/lib/startos/package/startos"
export class SystemForStartOs implements System {
private onTerm: (() => Promise<void>) | undefined
static of() {
return new SystemForStartOs()
}
constructor() {}
async execute(
effects: HostSystemStartOs,
options: {
procedure:
| "/init"
| "/uninit"
| "/main/start"
| "/main/stop"
| "/config/set"
| "/config/get"
| "/backup/create"
| "/backup/restore"
| "/actions/metadata"
| `/actions/${string}/get`
| `/actions/${string}/run`
| `/dependencies/${string}/query`
| `/dependencies/${string}/update`
input: unknown
timeout?: number | undefined
},
): Promise<ExecuteResult> {
return { ok: await this._execute(effects, options) }
}
async _execute(
effects: Effects,
options: {
procedure:
| "/init"
| "/uninit"
| "/main/start"
| "/main/stop"
| "/config/set"
| "/config/get"
| "/backup/create"
| "/backup/restore"
| "/actions/metadata"
| `/actions/${string}/get`
| `/actions/${string}/run`
| `/dependencies/${string}/query`
| `/dependencies/${string}/update`
input: unknown
timeout?: number | undefined
},
): Promise<unknown> {
switch (options.procedure) {
case "/init": {
const path = `${LOCATION}/procedures/init`
const procedure: any = await import(path).catch(() => require(path))
const previousVersion = string.optional().unsafeCast(options.input)
return procedure.init({ effects, previousVersion })
}
case "/uninit": {
const path = `${LOCATION}/procedures/init`
const procedure: any = await import(path).catch(() => require(path))
const nextVersion = string.optional().unsafeCast(options.input)
return procedure.uninit({ effects, nextVersion })
}
case "/main/start": {
const path = `${LOCATION}/procedures/main`
const procedure: any = await import(path).catch(() => require(path))
const started = async (onTerm: () => Promise<void>) => {
await effects.setMainStatus({ status: "running" })
if (this.onTerm) await this.onTerm()
this.onTerm = onTerm
}
return procedure.main({ effects, started })
}
case "/main/stop": {
await effects.setMainStatus({ status: "stopped" })
if (this.onTerm) await this.onTerm()
delete this.onTerm
return
}
case "/config/set": {
const path = `${LOCATION}/procedures/config`
const procedure: any = await import(path).catch(() => require(path))
const input = options.input
return procedure.setConfig({ effects, input })
}
case "/config/get": {
const path = `${LOCATION}/procedures/config`
const procedure: any = await import(path).catch(() => require(path))
return procedure.getConfig({ effects })
}
case "/backup/create":
case "/backup/restore":
throw new Error("this should be called with the init/unit")
case "/actions/metadata": {
const path = `${LOCATION}/procedures/actions`
const procedure: any = await import(path).catch(() => require(path))
return procedure.actionsMetadata({ effects })
}
default:
const procedures = unNestPath(options.procedure)
const id = procedures[2]
switch (true) {
case procedures[1] === "actions" && procedures[3] === "get": {
const path = `${LOCATION}/procedures/actions`
const action: any = (await import(path).catch(() => require(path)))
.actions[id]
if (!action) throw new Error(`Action ${id} not found`)
return action.get({ effects })
}
case procedures[1] === "actions" && procedures[3] === "run": {
const path = `${LOCATION}/procedures/actions`
const action: any = (await import(path).catch(() => require(path)))
.actions[id]
if (!action) throw new Error(`Action ${id} not found`)
const input = options.input
return action.run({ effects, input })
}
case procedures[1] === "dependencies" && procedures[3] === "query": {
const path = `${LOCATION}/procedures/dependencies`
const dependencyConfig: any = (
await import(path).catch(() => require(path))
).dependencyConfig[id]
if (!dependencyConfig)
throw new Error(`dependencyConfig ${id} not found`)
const localConfig = options.input
return dependencyConfig.query({ effects, localConfig })
}
case procedures[1] === "dependencies" && procedures[3] === "update": {
const path = `${LOCATION}/procedures/dependencies`
const dependencyConfig: any = (
await import(path).catch(() => require(path))
).dependencyConfig[id]
if (!dependencyConfig)
throw new Error(`dependencyConfig ${id} not found`)
return dependencyConfig.update(options.input)
}
}
}
throw new Error("Method not implemented.")
}
exit(effects: Effects): Promise<void> {
throw new Error("Method not implemented.")
}
}


@@ -0,0 +1,6 @@
import { System } from "../../Interfaces/System"
import { SystemForEmbassy } from "./SystemForEmbassy"
import { SystemForStartOs } from "./SystemForStartOs"
export async function getSystem(): Promise<System> {
return SystemForEmbassy.of()
}


@@ -0,0 +1,6 @@
import { GetDependency } from "./GetDependency"
import { System } from "./System"
import { GetHostSystem, HostSystem } from "./HostSystem"
export type AllGetDependencies = GetDependency<"system", Promise<System>> &
GetDependency<"hostSystem", GetHostSystem>


@@ -0,0 +1,3 @@
export type GetDependency<K extends string, T> = {
[OtherK in K]: () => T
}
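The mapped type above composes by intersection: each `GetDependency<K, T>` contributes one key mapped to a thunk, and intersecting several yields a typed dependency container like `AllGetDependencies`. A sketch with simplified value types (the `string`/`number` payloads are illustrative, not the real `System`/`GetHostSystem` types):

```typescript
// GetDependency<K, T> maps a single key K to a zero-argument thunk returning T.
type GetDependency<K extends string, T> = {
  [OtherK in K]: () => T
}

// Intersecting entries yields one object type with all the keys,
// mirroring how AllGetDependencies is built (simplified payload types).
type Deps = GetDependency<"system", string> & GetDependency<"hostSystem", number>

const deps: Deps = {
  system: () => "SystemForStartOs",
  hostSystem: () => 42,
}
```

Because each entry is a thunk, dependencies are constructed lazily, only when a consumer actually calls them.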


@@ -0,0 +1,7 @@
import { types as T } from "@start9labs/start-sdk"
import { CallbackHolder } from "../Models/CallbackHolder"
import { Effects } from "../Models/Effects"
export type HostSystem = Effects
export type GetHostSystem = (callbackHolder: CallbackHolder) => HostSystem


@@ -0,0 +1,31 @@
import { types as T } from "@start9labs/start-sdk"
import { JsonPath } from "../Models/JsonPath"
import { HostSystemStartOs } from "../Adapters/HostSystemStartOs"
export type ExecuteResult =
| { ok: unknown }
| { err: { code: number; message: string } }
export interface System {
// init(effects: Effects): Promise<void>
// exit(effects: Effects): Promise<void>
// start(effects: Effects): Promise<void>
// stop(effects: Effects, options: { timeout: number, signal?: number }): Promise<void>
execute(
effects: T.Effects,
options: {
procedure: JsonPath
input: unknown
timeout?: number
},
): Promise<ExecuteResult>
// sandbox(
// effects: Effects,
// options: {
// procedure: JsonPath
// input: unknown
// timeout?: number
// },
// ): Promise<unknown>
exit(effects: T.Effects): Promise<void>
}


@@ -0,0 +1,18 @@
export class CallbackHolder {
constructor() {}
private root = (Math.random() + 1).toString(36).substring(7)
private inc = 0
private callbacks = new Map<string, Function>()
private newId() {
return this.root + (this.inc++).toString(36)
}
addCallback(callback: Function) {
const id = this.newId()
this.callbacks.set(id, callback)
return id
}
callCallback(index: string, args: any[]): Promise<unknown> {
const callback = this.callbacks.get(index)
if (!callback) throw new Error(`Callback ${index} does not exist`)
this.callbacks.delete(index)
return Promise.resolve().then(() => callback(...args))
}
}
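A usage sketch of the holder pattern above, as a compact self-contained re-implementation (here `addCallback` returns the generated id so the caller can invoke it later — an assumption about the intended API, since the caller needs the id to call back):

```typescript
// Compact re-implementation of the CallbackHolder pattern for illustration.
class MiniCallbackHolder {
  private root = (Math.random() + 1).toString(36).substring(7)
  private inc = 0
  private callbacks = new Map<string, Function>()
  addCallback(callback: Function): string {
    const id = this.root + (this.inc++).toString(36)
    this.callbacks.set(id, callback)
    return id
  }
  callCallback(id: string, args: any[]): Promise<unknown> {
    const callback = this.callbacks.get(id)
    if (!callback) throw new Error(`Callback ${id} does not exist`)
    // One-shot semantics: the callback is removed before it fires.
    this.callbacks.delete(id)
    return Promise.resolve().then(() => callback(...args))
  }
}
```

Deleting the entry before invocation makes each registered callback fire at most once, even if the caller retries with the same id.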


@@ -0,0 +1,45 @@
import {
object,
literal,
string,
boolean,
array,
dictionary,
literals,
number,
Parser,
} from "ts-matches"
const VolumeId = string
const Path = string
export type VolumeId = string
export type Path = string
export const matchDockerProcedure = object(
{
type: literal("docker"),
image: string,
system: boolean,
entrypoint: string,
args: array(string),
mounts: dictionary([VolumeId, Path]),
"io-format": literals(
"json",
"json-pretty",
"yaml",
"cbor",
"toml",
"toml-pretty",
),
"sigterm-timeout": number,
inject: boolean,
},
["io-format", "sigterm-timeout", "system", "args", "inject", "mounts"],
{
"sigterm-timeout": 30,
inject: false,
args: [],
},
)
export type DockerProcedure = typeof matchDockerProcedure._TYPE


@@ -0,0 +1,5 @@
import { types as T } from "@start9labs/start-sdk"
export type Effects = T.Effects & {
setMainStatus(o: { status: "running" | "stopped" }): Promise<void>
}


@@ -0,0 +1,42 @@
import { literals, some, string } from "ts-matches"
type NestedPath<A extends string, B extends string> = `/${A}/${string}/${B}`
type NestedPaths =
| NestedPath<"actions", "run" | "get">
| NestedPath<"dependencies", "query" | "update">
// prettier-ignore
type UnNestPaths<A> =
A extends `${infer A}/${infer B}` ? [...UnNestPaths<A>, ... UnNestPaths<B>] :
[A]
export function unNestPath<A extends string>(a: A): UnNestPaths<A> {
return a.split("/") as UnNestPaths<A>
}
function isNestedPath(path: string): path is NestedPaths {
const paths = path.split("/")
if (paths.length !== 4) return false
if (paths[1] === "actions" && (paths[3] === "run" || paths[3] === "get"))
return true
if (
paths[1] === "dependencies" &&
(paths[3] === "query" || paths[3] === "update")
)
return true
return false
}
export const jsonPath = some(
literals(
"/init",
"/uninit",
"/main/start",
"/main/stop",
"/config/set",
"/config/get",
"/backup/create",
"/backup/restore",
"/actions/metadata",
),
string.refine(isNestedPath, "isNestedPath"),
)
export type JsonPath = typeof jsonPath._TYPE
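To see why the dispatch code checks `paths.length !== 4` and indexes segments 1–3: splitting a nested procedure path on `/` yields a leading empty segment, so the route name, the id, and the verb land at indices 1, 2, and 3. A runnable sketch (the `"backup"` id is a made-up example):

```typescript
// Runtime behavior of unNestPath, without the type-level machinery:
// "/actions/<id>/run" splits into ["", "actions", "<id>", "run"].
function unNestPath(a: string): string[] {
  return a.split("/")
}

const parts = unNestPath("/actions/backup/run")
// parts[1] is the route ("actions"), parts[2] the id, parts[3] the verb.
```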


@@ -0,0 +1,19 @@
import * as fs from "node:fs/promises"
export class Volume {
readonly path: string
constructor(
readonly volumeId: string,
_path = "",
) {
this.path = `/media/startos/volumes/${volumeId}${
!_path ? "" : `/${_path}`
}`
}
async exists() {
return fs.stat(this.path).then(
() => true,
() => false,
)
}
}
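The constructor's path construction can be exercised directly — an empty sub-path maps to the volume root, and a non-empty one is appended after a slash. A compact, filesystem-free copy for illustration:

```typescript
// Compact copy of the Volume path construction (no fs access needed).
class MiniVolume {
  readonly path: string
  constructor(
    readonly volumeId: string,
    subPath = "",
  ) {
    this.path = `/media/startos/volumes/${volumeId}${!subPath ? "" : `/${subPath}`}`
  }
}
```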


@@ -1,6 +1,15 @@
import { Runtime } from "./Runtime"
import { RpcListener } from "./Adapters/RpcListener"
import { SystemForEmbassy } from "./Adapters/Systems/SystemForEmbassy"
import { HostSystemStartOs } from "./Adapters/HostSystemStartOs"
import { AllGetDependencies } from "./Interfaces/AllGetDependencies"
import { getSystem } from "./Adapters/Systems"
new Runtime()
const getDependencies: AllGetDependencies = {
system: getSystem,
hostSystem: () => HostSystemStartOs.of,
}
new RpcListener(getDependencies)
/**


@@ -2,20 +2,25 @@
"include": [
"./**/*.mjs",
"./**/*.js",
"initSrc/Runtime.ts",
"initSrc/index.ts",
"src/Adapters/RpcListener.ts",
"src/index.ts",
"effects.ts"
],
"exclude": [],
"inputs": ["./lib/index.ts"],
"exclude": ["dist"],
"inputs": ["./src/index.ts"],
"compilerOptions": {
"target": "es2022",
"module": "es2022",
"moduleResolution": "node",
"allowJs": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"module": "Node16",
"strict": true,
"outDir": "dist",
"preserveConstEnums": true,
"sourceMap": true,
"target": "ES2022",
"pretty": true,
"declaration": true,
"noImplicitAny": true,
"esModuleInterop": true,
"types": ["node"],
"moduleResolution": "Node16",
"skipLibCheck": true
},
"ts-node": {


@@ -0,0 +1,41 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
if mountpoint tmp/combined; then sudo umount tmp/combined; fi
if mountpoint tmp/lower; then sudo umount tmp/lower; fi
mkdir -p tmp/lower tmp/upper tmp/work tmp/combined
sudo mount alpine.squashfs tmp/lower
sudo mount -t overlay -olowerdir=tmp/lower,upperdir=tmp/upper,workdir=tmp/work overlay tmp/combined
QEMU=
if [ "$ARCH" != "$(uname -m)" ]; then
QEMU=/usr/bin/qemu-${ARCH}-static
sudo cp $(which qemu-$ARCH-static) tmp/combined${QEMU}
fi
echo "nameserver 8.8.8.8" | sudo tee tmp/combined/etc/resolv.conf # TODO - delegate to host resolver?
sudo chroot tmp/combined $QEMU /sbin/apk add nodejs
sudo mkdir -p tmp/combined/usr/lib/startos/
sudo rsync -a --copy-unsafe-links dist/ tmp/combined/usr/lib/startos/init/
sudo cp containerRuntime.rc tmp/combined/etc/init.d/containerRuntime
sudo cp ../core/target/$ARCH-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-cli
sudo chmod +x tmp/combined/etc/init.d/containerRuntime
sudo chroot tmp/combined $QEMU /sbin/rc-update add containerRuntime default
if [ -n "$QEMU" ]; then
sudo rm tmp/combined${QEMU}
fi
sudo truncate -s 0 tmp/combined/etc/resolv.conf
sudo chown -R 0:0 tmp/combined
rm -f ../build/lib/container-runtime/rootfs.squashfs
mkdir -p ../build/lib/container-runtime
sudo mksquashfs tmp/combined ../build/lib/container-runtime/rootfs.squashfs
sudo umount tmp/combined
sudo umount tmp/lower
sudo rm -rf tmp

core/Cargo.lock (generated, 2797 changes): file diff suppressed because it is too large


@@ -1,3 +1,3 @@
[workspace]
members = ["container-init", "helpers", "models", "snapshot-creator", "startos"]
members = ["helpers", "models", "startos"]


@@ -8,9 +8,6 @@
## Structure
- `startos`: This contains the core library for StartOS that supports building `startbox`.
- `container-init` (ignore: deprecated)
- `js-engine`: This contains the library required to build `deno` to support running `.js` maintainer scripts for v0.3
- `snapshot-creator`: This contains a binary used to build `v8` runtime snapshots, required for initializing `start-deno`
- `helpers`: This contains utility functions used across both `startos` and `js-engine`
- `models`: This contains types that are shared across `startos`, `js-engine`, and `helpers`
@@ -24,8 +21,6 @@ several different names for different behaviour:
`startd` and control it similarly to the UI
- `start-sdk`: This is a CLI tool that aids in building and packaging services
you wish to deploy to StartOS
- `start-deno`: This is a CLI tool invoked by startd to run `.js` maintainer scripts for v0.3
- `avahi-alias`: This is a CLI tool invoked by startd to create aliases in `avahi` for mDNS
## Questions


@@ -18,22 +18,22 @@ cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
alias 'rust-gnu-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-arm-cross:aarch64'
alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
set +e
fail=
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
if ! rust-gnu-builder sh -c "(cd core && cargo build --release --features avahi-alias,$FEATURES --locked --bin startbox --target=$ARCH-unknown-linux-gnu)"; then
if ! rust-musl-builder sh -c "(cd core && cargo build --release $(if [ -n "$FEATURES" ]; then echo "--features $FEATURES"; fi) --locked --bin startbox --target=$ARCH-unknown-linux-musl)"; then
fail=true
fi
if ! rust-musl-builder sh -c "(cd core && cargo build --release --no-default-features --features container-runtime,$FEATURES --locked --bin containerbox --target=$ARCH-unknown-linux-musl)"; then
fail=true
fi
for ARCH in x86_64 aarch64
do
if ! rust-musl-builder sh -c "(cd core && cargo build --release --locked --bin container-init)"; then
fail=true
fi
done
set -e
cd core


@@ -1,39 +0,0 @@
#!/bin/bash
# We need to create a snapshot for the deno runtime: it pulls 3 files from build, and during
# creation they get embedded, but for some reason during the actual runtime it still looks for
# them. So this creates a docker container on arm that creates the snapshot needed for arm.
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
shopt -s expand_aliases
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-gnu-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-arm-cross:aarch64'
echo "Building "
cd ..
rust-gnu-builder sh -c "(cd core/ && cargo build -p snapshot_creator --release --target=${ARCH}-unknown-linux-gnu)"
cd -
if [ "$ARCH" = "aarch64" ]; then
DOCKER_ARCH='arm64/v8'
elif [ "$ARCH" = "x86_64" ]; then
DOCKER_ARCH='amd64'
fi
echo "Creating Arm v8 Snapshot"
docker run $USE_TTY --platform "linux/${DOCKER_ARCH}" --mount type=bind,src=$(pwd),dst=/mnt ubuntu:22.04 /bin/sh -c "cd /mnt && /mnt/target/${ARCH}-unknown-linux-gnu/release/snapshot_creator"
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
sudo chown $USER JS_SNAPSHOT.bin
sudo chmod 0644 JS_SNAPSHOT.bin
sudo mv -f JS_SNAPSHOT.bin ./js-engine/src/artifacts/JS_SNAPSHOT.${ARCH}.bin


@@ -1,39 +0,0 @@
[package]
name = "container-init"
version = "0.1.0"
edition = "2021"
rust = "1.66"
[features]
dev = []
metal = []
sound = []
unstable = []
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
async-stream = "0.3"
# cgroups-rs = "0.2"
color-eyre = "0.6"
futures = "0.3"
serde = { version = "1", features = ["derive", "rc"] }
serde_json = "1"
helpers = { path = "../helpers" }
imbl = "2"
nix = { version = "0.27", features = ["process", "signal"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = { version = "0.1", features = ["io-util", "sync", "net"] }
tracing = "0.1"
tracing-error = "0.2"
tracing-futures = "0.2"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
yajrc = { version = "*", git = "https://github.com/dr-bonez/yajrc.git", branch = "develop" }
[target.'cfg(target_os = "linux")'.dependencies]
procfs = "0.15"
[profile.test]
opt-level = 3
[profile.dev.package.backtrace]
opt-level = 3


@@ -1,214 +0,0 @@
use nix::unistd::Pid;
use serde::{Deserialize, Serialize, Serializer};
use yajrc::RpcMethod;
/// Know what the process is called
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct ProcessId(pub u32);
impl From<ProcessId> for Pid {
fn from(pid: ProcessId) -> Self {
Pid::from_raw(pid.0 as i32)
}
}
impl From<Pid> for ProcessId {
fn from(pid: Pid) -> Self {
ProcessId(pid.as_raw() as u32)
}
}
impl From<i32> for ProcessId {
fn from(pid: i32) -> Self {
ProcessId(pid as u32)
}
}
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct ProcessGroupId(pub u32);
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[serde(rename_all = "kebab-case")]
pub enum OutputStrategy {
Inherit,
Collect,
}
#[derive(Debug, Clone, Copy)]
pub struct RunCommand;
impl Serialize for RunCommand {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RunCommandParams {
pub gid: Option<ProcessGroupId>,
pub command: String,
pub args: Vec<String>,
pub output: OutputStrategy,
}
impl RpcMethod for RunCommand {
type Params = RunCommandParams;
type Response = ProcessId;
fn as_str<'a>(&'a self) -> &'a str {
"command"
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum LogLevel {
Trace(String),
Warn(String),
Error(String),
Info(String),
Debug(String),
}
impl LogLevel {
pub fn trace(&self) {
match self {
LogLevel::Trace(x) => tracing::trace!("{}", x),
LogLevel::Warn(x) => tracing::warn!("{}", x),
LogLevel::Error(x) => tracing::error!("{}", x),
LogLevel::Info(x) => tracing::info!("{}", x),
LogLevel::Debug(x) => tracing::debug!("{}", x),
}
}
}
#[derive(Debug, Clone, Copy)]
pub struct Log;
impl Serialize for Log {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LogParams {
pub gid: Option<ProcessGroupId>,
pub level: LogLevel,
}
impl RpcMethod for Log {
type Params = LogParams;
type Response = ();
fn as_str<'a>(&'a self) -> &'a str {
"log"
}
}
#[derive(Debug, Clone, Copy)]
pub struct ReadLineStdout;
impl Serialize for ReadLineStdout {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReadLineStdoutParams {
pub pid: ProcessId,
}
impl RpcMethod for ReadLineStdout {
type Params = ReadLineStdoutParams;
type Response = String;
fn as_str<'a>(&'a self) -> &'a str {
"read-line-stdout"
}
}
#[derive(Debug, Clone, Copy)]
pub struct ReadLineStderr;
impl Serialize for ReadLineStderr {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReadLineStderrParams {
pub pid: ProcessId,
}
impl RpcMethod for ReadLineStderr {
type Params = ReadLineStderrParams;
type Response = String;
fn as_str<'a>(&'a self) -> &'a str {
"read-line-stderr"
}
}
#[derive(Debug, Clone, Copy)]
pub struct Output;
impl Serialize for Output {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OutputParams {
pub pid: ProcessId,
}
impl RpcMethod for Output {
type Params = OutputParams;
type Response = String;
fn as_str<'a>(&'a self) -> &'a str {
"output"
}
}
#[derive(Debug, Clone, Copy)]
pub struct SendSignal;
impl Serialize for SendSignal {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SendSignalParams {
pub pid: ProcessId,
pub signal: u32,
}
impl RpcMethod for SendSignal {
type Params = SendSignalParams;
type Response = ();
fn as_str<'a>(&'a self) -> &'a str {
"signal"
}
}
#[derive(Debug, Clone, Copy)]
pub struct SignalGroup;
impl Serialize for SignalGroup {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SignalGroupParams {
pub gid: ProcessGroupId,
pub signal: u32,
}
impl RpcMethod for SignalGroup {
type Params = SignalGroupParams;
type Response = ();
fn as_str<'a>(&'a self) -> &'a str {
"signal-group"
}
}


@@ -1,428 +0,0 @@
use std::collections::BTreeMap;
use std::ops::DerefMut;
use std::os::unix::process::ExitStatusExt;
use std::process::Stdio;
use std::sync::Arc;
use container_init::{
LogParams, OutputParams, OutputStrategy, ProcessGroupId, ProcessId, RunCommandParams,
SendSignalParams, SignalGroupParams,
};
use futures::StreamExt;
use helpers::NonDetachingJoinHandle;
use nix::errno::Errno;
use nix::sys::signal::Signal;
use serde::{Deserialize, Serialize};
use serde_json::json;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::{Child, Command};
use tokio::select;
use tokio::sync::{watch, Mutex};
use yajrc::{Id, RpcError};
/// Outputs embedded in the JSONRpc output of the executable.
#[derive(Debug, Clone, Serialize)]
#[serde(untagged)]
enum Output {
Command(ProcessId),
ReadLineStdout(String),
ReadLineStderr(String),
Output(String),
Log,
Signal,
SignalGroup,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "method", content = "params", rename_all = "kebab-case")]
enum Input {
/// Run a new command, with the args
Command(RunCommandParams),
/// Log locally on the service rather than on the eos
Log(LogParams),
// /// Get a line of stdout from the command
// ReadLineStdout(ReadLineStdoutParams),
// /// Get a line of stderr from the command
// ReadLineStderr(ReadLineStderrParams),
/// Get output of command
Output(OutputParams),
/// Send the sigterm to the process
Signal(SendSignalParams),
/// Signal a group of processes
SignalGroup(SignalGroupParams),
}
#[derive(Deserialize)]
struct IncomingRpc {
id: Id,
#[serde(flatten)]
input: Input,
}
struct ChildInfo {
gid: Option<ProcessGroupId>,
child: Arc<Mutex<Option<Child>>>,
output: Option<InheritOutput>,
}
struct InheritOutput {
_thread: NonDetachingJoinHandle<()>,
stdout: watch::Receiver<String>,
stderr: watch::Receiver<String>,
}
struct HandlerMut {
processes: BTreeMap<ProcessId, ChildInfo>,
// groups: BTreeMap<ProcessGroupId, Cgroup>,
}
#[derive(Clone)]
struct Handler {
children: Arc<Mutex<HandlerMut>>,
}
impl Handler {
fn new() -> Self {
Handler {
children: Arc::new(Mutex::new(HandlerMut {
processes: BTreeMap::new(),
// groups: BTreeMap::new(),
})),
}
}
async fn handle(&self, req: Input) -> Result<Output, RpcError> {
Ok(match req {
Input::Command(RunCommandParams {
gid,
command,
args,
output,
}) => Output::Command(self.command(gid, command, args, output).await?),
// Input::ReadLineStdout(ReadLineStdoutParams { pid }) => {
// Output::ReadLineStdout(self.read_line_stdout(pid).await?)
// }
// Input::ReadLineStderr(ReadLineStderrParams { pid }) => {
// Output::ReadLineStderr(self.read_line_stderr(pid).await?)
// }
Input::Log(LogParams { gid: _, level }) => {
level.trace();
Output::Log
}
Input::Output(OutputParams { pid }) => Output::Output(self.output(pid).await?),
Input::Signal(SendSignalParams { pid, signal }) => {
self.signal(pid, signal).await?;
Output::Signal
}
Input::SignalGroup(SignalGroupParams { gid, signal }) => {
self.signal_group(gid, signal).await?;
Output::SignalGroup
}
})
}
async fn command(
&self,
gid: Option<ProcessGroupId>,
command: String,
args: Vec<String>,
output: OutputStrategy,
) -> Result<ProcessId, RpcError> {
let mut cmd = Command::new(command);
cmd.args(args);
cmd.kill_on_drop(true);
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
let mut child = cmd.spawn().map_err(|e| {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(e.to_string()));
err
})?;
let pid = ProcessId(child.id().ok_or_else(|| {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!("Child has no pid"));
err
})?);
let output = match output {
OutputStrategy::Inherit => {
let (stdout_send, stdout) = watch::channel(String::new());
let (stderr_send, stderr) = watch::channel(String::new());
if let (Some(child_stdout), Some(child_stderr)) =
(child.stdout.take(), child.stderr.take())
{
Some(InheritOutput {
_thread: tokio::spawn(async move {
tokio::join!(
async {
if let Err(e) = async {
let mut lines = BufReader::new(child_stdout).lines();
while let Some(line) = lines.next_line().await? {
tracing::info!("({}): {}", pid.0, line);
let _ = stdout_send.send(line);
}
Ok::<_, std::io::Error>(())
}
.await
{
tracing::error!(
"Error reading stdout of pid {}: {}",
pid.0,
e
);
}
},
async {
if let Err(e) = async {
let mut lines = BufReader::new(child_stderr).lines();
while let Some(line) = lines.next_line().await? {
tracing::warn!("({}): {}", pid.0, line);
let _ = stderr_send.send(line);
}
Ok::<_, std::io::Error>(())
}
.await
{
tracing::error!(
"Error reading stdout of pid {}: {}",
pid.0,
e
);
}
}
);
})
.into(),
stdout,
stderr,
})
} else {
None
}
}
OutputStrategy::Collect => None,
};
self.children.lock().await.processes.insert(
pid,
ChildInfo {
gid,
child: Arc::new(Mutex::new(Some(child))),
output,
},
);
Ok(pid)
}
async fn output(&self, pid: ProcessId) -> Result<String, RpcError> {
let not_found = || {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(format!("Child with pid {} not found", pid.0)));
err
};
let mut child = {
self.children
.lock()
.await
.processes
.get(&pid)
.ok_or_else(not_found)?
.child
.clone()
}
.lock_owned()
.await;
if let Some(child) = child.take() {
let output = child.wait_with_output().await?;
if output.status.success() {
Ok(String::from_utf8(output.stdout).map_err(|_| yajrc::PARSE_ERROR)?)
} else {
Err(RpcError {
code: output
.status
.code()
.or_else(|| output.status.signal().map(|s| 128 + s))
.unwrap_or(0),
message: "Command failed".into(),
data: Some(json!(String::from_utf8(if output.stderr.is_empty() {
output.stdout
} else {
output.stderr
})
.map_err(|_| yajrc::PARSE_ERROR)?)),
})
}
} else {
Err(not_found())
}
}
async fn signal(&self, pid: ProcessId, signal: u32) -> Result<(), RpcError> {
let not_found = || {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(format!("Child with pid {} not found", pid.0)));
err
};
Self::killall(pid, Signal::try_from(signal as i32)?)?;
if signal == 9 {
self.children
.lock()
.await
.processes
.remove(&pid)
.ok_or_else(not_found)?;
}
Ok(())
}
async fn signal_group(&self, gid: ProcessGroupId, signal: u32) -> Result<(), RpcError> {
let mut to_kill = Vec::new();
{
let mut children_ref = self.children.lock().await;
let children = std::mem::take(&mut children_ref.deref_mut().processes);
for (pid, child_info) in children {
if child_info.gid == Some(gid) {
to_kill.push(pid);
} else {
children_ref.processes.insert(pid, child_info);
}
}
}
for pid in to_kill {
tracing::info!("Killing pid {}", pid.0);
Self::killall(pid, Signal::try_from(signal as i32)?)?;
}
Ok(())
}
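
`signal_group` drains the whole process map while holding the lock, reinserts the entries that don't match the group, and only then signals the collected pids outside the lock. A simplified sketch of that drain-and-reinsert step, using plain `u32` pids and `Option<u32>` group ids in place of the PR's `ProcessId`/`ChildInfo` types:

```rust
use std::collections::HashMap;

/// Remove every entry whose group matches `gid`, returning the removed pids.
/// Taking the whole map with `mem::take` sidesteps removing entries while
/// iterating; non-matching entries are reinserted.
fn drain_group(processes: &mut HashMap<u32, Option<u32>>, gid: u32) -> Vec<u32> {
    let taken = std::mem::take(processes);
    let mut to_kill = Vec::new();
    for (pid, g) in taken {
        if g == Some(gid) {
            to_kill.push(pid);
        } else {
            processes.insert(pid, g);
        }
    }
    to_kill
}
```

Collecting the pids first also keeps the (potentially slow) signalling outside the critical section, as the handler above does.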
fn killall(pid: ProcessId, signal: Signal) -> Result<(), RpcError> {
for proc in procfs::process::all_processes()? {
let stat = proc?.stat()?;
if ProcessId::from(stat.ppid) == pid {
Self::killall(stat.pid.into(), signal)?;
}
}
if let Err(e) = nix::sys::signal::kill(pid.into(), Some(signal)) {
if e != Errno::ESRCH {
tracing::error!("Failed to kill pid {}: {}", pid.0, e);
}
}
Ok(())
}
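
`killall` above walks /proc and recurses into every child before signalling the parent, so a subtree dies leaf-first. The traversal order can be sketched as a pure function over a pid-to-parent-pid map (the map stands in for the procfs scan; it is a simplification for illustration):

```rust
use std::collections::BTreeMap;

/// Return `pid` and all of its descendants in the order `killall` signals
/// them: for each direct child, that child's whole subtree first, then the
/// child itself; the root pid comes last.
fn kill_order(ppids: &BTreeMap<i32, i32>, pid: i32) -> Vec<i32> {
    let mut order = Vec::new();
    for (&child, &parent) in ppids {
        if parent == pid {
            order.extend(kill_order(ppids, child));
        }
    }
    order.push(pid);
    order
}
```

With `ppids = {2: 1, 3: 1, 4: 2}`, `kill_order(&m, 1)` yields `[4, 2, 3, 1]`.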
async fn graceful_exit(self) {
let kill_all = futures::stream::iter(
std::mem::take(&mut self.children.lock().await.deref_mut().processes).into_iter(),
)
.for_each_concurrent(None, |(pid, child)| async move {
let _ = Self::killall(pid, Signal::SIGTERM);
if let Some(child) = child.child.lock().await.take() {
let _ = child.wait_with_output().await;
}
});
kill_all.await
}
}
#[tokio::main]
async fn main() {
use tokio::signal::unix::{signal, SignalKind};
let mut sigint = signal(SignalKind::interrupt()).unwrap();
let mut sigterm = signal(SignalKind::terminate()).unwrap();
let mut sigquit = signal(SignalKind::quit()).unwrap();
let mut sighangup = signal(SignalKind::hangup()).unwrap();
use tracing_error::ErrorLayer;
use tracing_subscriber::prelude::*;
use tracing_subscriber::{fmt, EnvFilter};
let filter_layer = EnvFilter::new("container_init=debug");
let fmt_layer = fmt::layer().with_target(true);
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.with(ErrorLayer::default())
.init();
color_eyre::install().unwrap();
let handler = Handler::new();
let handler_thread = async {
let listener = tokio::net::UnixListener::bind("/start9/sockets/rpc.sock")?;
loop {
let (stream, _) = listener.accept().await?;
let (r, w) = stream.into_split();
let mut lines = BufReader::new(r).lines();
let handler = handler.clone();
tokio::spawn(async move {
let w = Arc::new(Mutex::new(w));
while let Some(line) = lines.next_line().await.transpose() {
let handler = handler.clone();
let w = w.clone();
tokio::spawn(async move {
if let Err(e) = async {
let req = serde_json::from_str::<IncomingRpc>(&line?)?;
match handler.handle(req.input).await {
Ok(output) => {
if w.lock().await.write_all(
format!("{}\n", json!({ "id": req.id, "jsonrpc": "2.0", "result": output }))
.as_bytes(),
)
.await.is_err() {
tracing::error!("Error sending to {id:?}", id = req.id);
}
}
Err(e) =>
if w
.lock()
.await
.write_all(
format!("{}\n", json!({ "id": req.id, "jsonrpc": "2.0", "error": e }))
.as_bytes(),
)
.await.is_err() {
tracing::error!("Error sending error response to {id:?}", id = req.id);
},
}
Ok::<_, color_eyre::Report>(())
}
.await
{
tracing::error!("Error handling RPC request: {}", e);
tracing::debug!("{:?}", e);
}
});
}
Ok::<_, std::io::Error>(())
});
}
#[allow(unreachable_code)]
Ok::<_, std::io::Error>(())
};
select! {
res = handler_thread => {
match res {
Ok(()) => tracing::debug!("Done with inputs/outputs"),
Err(e) => {
tracing::error!("Error reading RPC input: {}", e);
tracing::debug!("{:?}", e);
}
}
},
_ = sigint.recv() => {
tracing::debug!("SIGINT");
},
_ = sigterm.recv() => {
tracing::debug!("SIGTERM");
},
_ = sigquit.recv() => {
tracing::debug!("SIGQUIT");
},
_ = sighangup.recv() => {
tracing::debug!("SIGHUP");
}
}
handler.graceful_exit().await;
::std::process::exit(0)
}
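
The listener loop above speaks newline-delimited JSON-RPC 2.0 over `/start9/sockets/rpc.sock`: one request object per line in, one `result` or `error` object per line out. A sketch of the client-side framing (the method and params shown are hypothetical; the actual `IncomingRpc` schema is defined elsewhere in the PR):

```rust
/// Frame a JSON-RPC 2.0 request as a single newline-terminated line,
/// matching the line-oriented reader (`BufReader::lines`) on the server
/// side. `params` is passed through as pre-serialized JSON.
fn frame_request(id: u64, method: &str, params: &str) -> String {
    format!(
        "{{\"id\":{},\"jsonrpc\":\"2.0\",\"method\":\"{}\",\"params\":{}}}\n",
        id, method, params
    )
}
```

Because the server splits on newlines, an embedded `\n` inside a request would be read as two malformed lines; a client must emit compact, single-line JSON.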

View File

@@ -11,9 +11,9 @@ futures = "0.3.28"
lazy_async_pool = "0.3.3"
models = { path = "../models" }
pin-project = "1.1.3"
rpc-toolkit = "0.2.3"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-stream = { version = "0.1.14", features = ["io-util", "sync"] }
tracing = "0.1.39"
yajrc = { version = "*", git = "https://github.com/dr-bonez/yajrc.git", branch = "develop" }

View File

@@ -11,11 +11,9 @@ use tokio::sync::oneshot;
use tokio::task::{JoinError, JoinHandle, LocalSet};
mod byte_replacement_reader;
mod rpc_client;
mod rsync;
mod script_dir;
pub use byte_replacement_reader::*;
pub use rpc_client::{RpcClient, UnixRpcClient};
pub use rsync::*;
pub use script_dir::*;

View File

@@ -12,7 +12,4 @@ if [ -z "$PLATFORM" ]; then
export PLATFORM=$(uname -m)
fi
cargo install --path=./startos --no-default-features --features=js_engine,sdk,cli --locked
startbox_loc=$(which startbox)
ln -sf $startbox_loc $(dirname $startbox_loc)/start-cli
ln -sf $startbox_loc $(dirname $startbox_loc)/start-sdk
cargo install --path=./startos --no-default-features --features=cli,docker --bin start-cli --locked

View File

@@ -15,6 +15,7 @@ emver = { version = "0.1", git = "https://github.com/Start9Labs/emver-rs.git", f
"serde",
] }
ipnet = "2.8.0"
num_enum = "0.7.1"
openssl = { version = "0.10.57", features = ["vendored"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
"trace",

View File

@@ -1,14 +1,19 @@
use std::fmt::Display;
use std::fmt::{Debug, Display};
use color_eyre::eyre::eyre;
use num_enum::TryFromPrimitive;
use patch_db::Revision;
use rpc_toolkit::hyper::http::uri::InvalidUri;
use rpc_toolkit::reqwest;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::yajrc::{
RpcError, INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR,
};
use serde::{Deserialize, Serialize};
use crate::InvalidId;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)]
#[repr(i32)]
pub enum ErrorKind {
Unknown = 1,
Filesystem = 2,
@@ -81,6 +86,8 @@ pub enum ErrorKind {
CpuSettings = 69,
Firmware = 70,
Timeout = 71,
Lxc = 72,
Cancelled = 73,
}
impl ErrorKind {
pub fn as_str(&self) -> &'static str {
@@ -157,6 +164,8 @@ impl ErrorKind {
CpuSettings => "CPU Settings Error",
Firmware => "Firmware Error",
Timeout => "Timeout Error",
Lxc => "LXC Error",
Cancelled => "Cancelled",
}
}
}
@@ -186,6 +195,17 @@ impl Error {
revision: None,
}
}
pub fn clone_output(&self) -> Self {
Error {
source: ErrorData {
details: format!("{}", self.source),
debug: format!("{:?}", self.source),
}
.into(),
kind: self.kind,
revision: self.revision.clone(),
}
}
}
impl From<InvalidId> for Error {
fn from(err: InvalidId) -> Self {
@@ -300,6 +320,53 @@ impl From<patch_db::value::Error> for Error {
}
}
#[derive(Clone, Deserialize, Serialize)]
pub struct ErrorData {
pub details: String,
pub debug: String,
}
impl Display for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.details, f)
}
}
impl Debug for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.debug, f)
}
}
impl std::error::Error for ErrorData {}
impl From<&RpcError> for ErrorData {
fn from(value: &RpcError) -> Self {
Self {
details: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("details")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
debug: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("debug")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
}
}
}
impl From<Error> for RpcError {
fn from(e: Error) -> Self {
let mut data_object = serde_json::Map::with_capacity(3);
@@ -318,10 +385,40 @@ impl From<Error> for RpcError {
RpcError {
code: e.kind as i32,
message: e.kind.as_str().into(),
data: Some(data_object.into()),
data: Some(
match serde_json::to_value(&ErrorData {
details: format!("{}", e.source),
debug: format!("{:?}", e.source),
}) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
),
}
}
}
impl From<RpcError> for Error {
fn from(e: RpcError) -> Self {
Error::new(
ErrorData::from(&e),
if let Ok(kind) = e.code.try_into() {
kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code {
ErrorKind::NotFound
} else if e.code == PARSE_ERROR.code
|| e.code == INVALID_PARAMS_ERROR.code
|| e.code == INVALID_REQUEST_ERROR.code
{
ErrorKind::Deserialization
} else {
ErrorKind::Unknown
},
)
}
}
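
The new `From<RpcError> for Error` impl recovers an `ErrorKind` from the wire: a code matching the `TryFromPrimitive` repr is used directly, the standard JSON-RPC codes fold into `NotFound`/`Deserialization`, and everything else becomes `Unknown`. A simplified sketch of that fallback chain (the enum here is a stand-in keeping only a few variants; `Timeout = 71` comes from the `ErrorKind` repr above):

```rust
/// Stand-in for a few ErrorKind variants; the real enum derives
/// TryFromPrimitive over its full i32 repr.
#[derive(Debug, PartialEq)]
enum Kind {
    Unknown,
    NotFound,
    Deserialization,
    Timeout, // = 71 in the real repr
}

/// Map a wire error code to a kind: known repr values first, then the
/// JSON-RPC 2.0 predefined codes, then Unknown as the catch-all.
fn kind_for_code(code: i32) -> Kind {
    match code {
        71 => Kind::Timeout,
        -32601 => Kind::NotFound, // METHOD_NOT_FOUND
        -32700 | -32602 | -32600 => Kind::Deserialization, // PARSE / INVALID_PARAMS / INVALID_REQUEST
        _ => Kind::Unknown,
    }
}
```

Ordering matters: the repr match must run before the JSON-RPC fallbacks so a kind that happens to share a fallback's meaning is still mapped exactly.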
#[derive(Debug, Default)]
pub struct ErrorCollection(Vec<Error>);
@@ -377,10 +474,7 @@ where
Self: Sized,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display + Send + Sync + 'static>(
self,
f: F,
) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error>;
}
impl<T, E> ResultExt<T, E> for Result<T, E>
where
@@ -394,10 +488,7 @@ where
})
}
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display + Send + Sync + 'static>(
self,
f: F,
) -> Result<T, Error> {
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = color_eyre::eyre::Error::from(e);
@@ -411,6 +502,29 @@ where
})
}
}
impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error {
source: e.source,
kind,
revision: e.revision,
})
}
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = e.source;
let ctx = format!("{}: {}", ctx, source);
let source = source.wrap_err(ctx);
Error {
kind,
source,
revision: e.revision,
}
})
}
}
pub trait OptionExt<T>
where

View File

@@ -1,4 +1,5 @@
use std::fmt::Debug;
use std::path::Path;
use std::str::FromStr;
use serde::{Deserialize, Deserializer, Serialize};
@@ -7,6 +8,11 @@ use crate::{Id, InvalidId, PackageId, Version};
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize)]
pub struct ImageId(Id);
impl AsRef<Path> for ImageId {
fn as_ref(&self) -> &Path {
self.0.as_ref().as_ref()
}
}
impl std::fmt::Display for ImageId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", &self.0)

View File

@@ -4,54 +4,37 @@ use crate::{ActionId, HealthCheckId, PackageId};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ProcedureName {
Main, // Usually just run container
CreateBackup,
RestoreBackup,
StartMain,
StopMain,
GetConfig,
SetConfig,
Migration,
Properties,
LongRunning,
Check(PackageId),
AutoConfig(PackageId),
Health(HealthCheckId),
Action(ActionId),
Signal,
CreateBackup,
RestoreBackup,
ActionMetadata,
RunAction(ActionId),
GetAction(ActionId),
QueryDependency(ActionId),
UpdateDependency(ActionId),
Init,
Uninit,
}
impl ProcedureName {
pub fn docker_name(&self) -> Option<String> {
pub fn js_function_name(&self) -> String {
match self {
ProcedureName::Main => None,
ProcedureName::LongRunning => None,
ProcedureName::CreateBackup => Some("CreateBackup".to_string()),
ProcedureName::RestoreBackup => Some("RestoreBackup".to_string()),
ProcedureName::GetConfig => Some("GetConfig".to_string()),
ProcedureName::SetConfig => Some("SetConfig".to_string()),
ProcedureName::Migration => Some("Migration".to_string()),
ProcedureName::Properties => Some(format!("Properties-{}", rand::random::<u64>())),
ProcedureName::Health(id) => Some(format!("{}Health", id)),
ProcedureName::Action(id) => Some(format!("{}Action", id)),
ProcedureName::Check(_) => None,
ProcedureName::AutoConfig(_) => None,
ProcedureName::Signal => None,
}
}
pub fn js_function_name(&self) -> Option<String> {
match self {
ProcedureName::Main => Some("/main".to_string()),
ProcedureName::LongRunning => None,
ProcedureName::CreateBackup => Some("/createBackup".to_string()),
ProcedureName::RestoreBackup => Some("/restoreBackup".to_string()),
ProcedureName::GetConfig => Some("/getConfig".to_string()),
ProcedureName::SetConfig => Some("/setConfig".to_string()),
ProcedureName::Migration => Some("/migration".to_string()),
ProcedureName::Properties => Some("/properties".to_string()),
ProcedureName::Health(id) => Some(format!("/health/{}", id)),
ProcedureName::Action(id) => Some(format!("/action/{}", id)),
ProcedureName::Check(id) => Some(format!("/dependencies/{}/check", id)),
ProcedureName::AutoConfig(id) => Some(format!("/dependencies/{}/autoConfigure", id)),
ProcedureName::Signal => Some("/handleSignal".to_string()),
ProcedureName::Init => "/init".to_string(),
ProcedureName::Uninit => "/uninit".to_string(),
ProcedureName::StartMain => "/main/start".to_string(),
ProcedureName::StopMain => "/main/stop".to_string(),
ProcedureName::SetConfig => "/config/set".to_string(),
ProcedureName::GetConfig => "/config/get".to_string(),
ProcedureName::CreateBackup => "/backup/create".to_string(),
ProcedureName::RestoreBackup => "/backup/restore".to_string(),
ProcedureName::ActionMetadata => "/actions/metadata".to_string(),
ProcedureName::RunAction(id) => format!("/actions/{}/run", id),
ProcedureName::GetAction(id) => format!("/actions/{}/get", id),
ProcedureName::QueryDependency(id) => format!("/dependencies/{}/query", id),
ProcedureName::UpdateDependency(id) => format!("/dependencies/{}/update", id),
}
}
}

View File

@@ -1,11 +0,0 @@
[package]
name = "snapshot_creator"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
dashmap = "5.3.4"
deno_core = "=0.222.0"
deno_ast = { version = "=0.29.5", features = ["transpiling"] }

View File

@@ -1,11 +0,0 @@
use deno_core::JsRuntimeForSnapshot;
fn main() {
let runtime = JsRuntimeForSnapshot::new(Default::default());
let snapshot = runtime.snapshot();
let snapshot_slice: &[u8] = &*snapshot;
println!("Snapshot size: {}", snapshot_slice.len());
std::fs::write("JS_SNAPSHOT.bin", snapshot_slice).unwrap();
}

View File

@@ -21,20 +21,26 @@ license = "MIT"
name = "startos"
path = "src/lib.rs"
[[bin]]
name = "containerbox"
path = "src/main.rs"
[[bin]]
name = "start-cli"
path = "src/main.rs"
[[bin]]
name = "startbox"
path = "src/main.rs"
[features]
avahi = ["avahi-sys"]
avahi-alias = ["avahi"]
cli = []
container-runtime = []
daemon = []
default = ["cli", "sdk", "daemon"]
default = ["cli", "daemon"]
dev = []
docker = []
sdk = []
unstable = ["console-subscriber", "tokio/tracing"]
docker = []
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
@@ -45,9 +51,8 @@ async-compression = { version = "0.4.4", features = [
] }
async-stream = "0.3.5"
async-trait = "0.1.74"
avahi-sys = { git = "https://github.com/Start9Labs/avahi-sys", version = "0.10.0", branch = "feature/dynamic-linking", features = [
"dynamic",
], optional = true }
axum = { version = "0.7.3", features = ["ws"] }
axum-server = "0.6.0"
base32 = "0.4.0"
base64 = "0.21.4"
base64ct = "1.6.0"
@@ -55,7 +60,7 @@ basic-cookies = "0.1.4"
blake3 = "1.5.0"
bytes = "1"
chrono = { version = "0.4.31", features = ["serde"] }
clap = "3.2.25"
clap = "4.4.12"
color-eyre = "0.6.2"
console = "0.15.7"
console-subscriber = { version = "0.2", optional = true }
@@ -72,7 +77,6 @@ ed25519-dalek = { version = "2.0.0", features = [
"digest",
] }
ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" }
container-init = { path = "../container-init" }
emver = { version = "0.1.7", git = "https://github.com/Start9Labs/emver-rs.git", features = [
"serde",
] }
@@ -82,9 +86,15 @@ gpt = "3.1.0"
helpers = { path = "../helpers" }
hex = "0.4.3"
hmac = "0.12.1"
http = "0.2.9"
hyper = { version = "0.14.27", features = ["full"] }
hyper-ws-listener = "0.3.0"
http = "1.0.0"
# http-body-util = "0.1.0"
# hyper = { version = "1.1.0", features = ["full"] }
# hyper-util = { version = "0.1.2", features = [
# "server",
# "server-auto",
# "tokio",
# ] }
# hyper-ws-listener = "0.3.0"
imbl = "2.0.2"
imbl-value = { git = "https://github.com/Start9Labs/imbl-value.git" }
include_dir = "0.7.3"
@@ -94,11 +104,13 @@ integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
ipnet = { version = "2.8.0", features = ["serde"] }
iprange = { version = "0.6.7", features = ["serde"] }
isocountry = "0.3.2"
itertools = "0.11.0"
itertools = "0.12.0"
jaq-core = "0.10.1"
jaq-std = "0.10.0"
josekit = "0.8.4"
jsonpath_lib = { git = "https://github.com/Start9Labs/jsonpath.git" }
lazy_async_pool = "0.3.3"
lazy_format = "2.0"
lazy_static = "1.4.0"
libc = "0.2.149"
log = "0.4.20"
@@ -109,6 +121,7 @@ nix = { version = "0.27.1", features = ["user", "process", "signal", "fs"] }
nom = "7.1.3"
num = "0.4.1"
num_enum = "0.7.0"
once_cell = "1.19.0"
openssh-keys = "0.6.2"
openssl = { version = "0.10.57", features = ["vendored"] }
p256 = { version = "0.13.2", features = ["pem"] }
@@ -123,12 +136,12 @@ proptest = "1.3.1"
proptest-derive = "0.4.0"
rand = { version = "0.8.5", features = ["std"] }
regex = "1.10.2"
reqwest = { version = "0.11.22", features = ["stream", "json", "socks"] }
reqwest = { version = "0.11.23", features = ["stream", "json", "socks"] }
reqwest_cookie_store = "0.6.0"
rpassword = "7.2.0"
rpc-toolkit = "0.2.2"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "refactor/traits" }
rust-argon2 = "2.0.0"
scopeguard = "1.1" # because avahi-sys fucks your shit up
rustyline-async = "0.4.1"
semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" }
@@ -137,6 +150,7 @@ serde_toml = { package = "toml", version = "0.8.2" }
serde_with = { version = "3.4.0", features = ["macros", "json"] }
serde_yaml = "0.9.25"
sha2 = "0.10.2"
shell-words = "1"
simple-logging = "2.0.2"
sqlx = { version = "0.7.2", features = [
"chrono",
@@ -149,11 +163,11 @@ stderrlog = "0.5.4"
tar = "0.4.40"
thiserror = "1.0.49"
tokio = { version = "1", features = ["full"] }
tokio-rustls = "0.24.1"
tokio-rustls = "0.25.0"
tokio-socks = "0.5.1"
tokio-stream = { version = "0.1.14", features = ["io-util", "sync", "net"] }
tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
tokio-tungstenite = { version = "0.20.1", features = ["native-tls"] }
tokio-tungstenite = { version = "0.21.0", features = ["native-tls"] }
tokio-util = { version = "0.7.9", features = ["io"] }
torut = "0.2.1"
tracing = "0.1.39"
@@ -162,7 +176,7 @@ tracing-futures = "0.2.5"
tracing-journald = "0.3.0"
tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
trust-dns-server = "0.23.1"
typed-builder = "0.17.0"
typed-builder = "0.18.0"
url = { version = "2.4.1", features = ["serde"] }
urlencoding = "2.1.3"
uuid = { version = "1.4.1", features = ["v4"] }

View File

@@ -14,9 +14,15 @@ allow = [
"BSD-3-Clause",
"LGPL-3.0",
"OpenSSL",
"Unicode-DFS-2016",
"Zlib",
]
clarify = [
{ name = "webpki", expression = "ISC", license-files = [ { path = "LICENSE", hash = 0x001c7e6c } ] },
{ name = "ring", expression = "OpenSSL", license-files = [ { path = "LICENSE", hash = 0xbd0eed23 } ] },
{ name = "webpki", expression = "ISC", license-files = [
{ path = "LICENSE", hash = 0x001c7e6c },
] },
{ name = "ring", expression = "OpenSSL", license-files = [
{ path = "LICENSE", hash = 0xbd0eed23 },
] },
]

View File

@@ -1,26 +1,14 @@
use std::collections::{BTreeMap, BTreeSet};
use clap::ArgMatches;
use color_eyre::eyre::eyre;
use indexmap::IndexSet;
use clap::Parser;
pub use models::ActionId;
use models::ImageId;
use models::PackageId;
use rpc_toolkit::command;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::config::{Config, ConfigSpec};
use crate::config::Config;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::util::Version;
use crate::volume::Volumes;
use crate::{Error, ResultExt};
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
pub struct Actions(pub BTreeMap<ActionId, Action>);
use crate::util::serde::{display_serializable, StdinDeserializable, WithIoFormat};
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "version")]
@@ -44,72 +32,11 @@ pub enum DockerStatus {
Stopped,
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct Action {
pub name: String,
pub description: String,
#[serde(default)]
pub warning: Option<String>,
pub implementation: PackageProcedure,
pub allowed_statuses: IndexSet<DockerStatus>,
#[serde(default)]
pub input_spec: ConfigSpec,
}
impl Action {
#[instrument(skip_all)]
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.implementation
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| {
(
crate::ErrorKind::ValidateS9pk,
format!("Action {}", self.name),
)
})
pub fn display_action_result(params: WithIoFormat<ActionParams>, result: ActionResult) {
if let Some(format) = params.format {
return display_serializable(format, result);
}
#[instrument(skip_all)]
pub async fn execute(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
action_id: &ActionId,
volumes: &Volumes,
input: Option<Config>,
) -> Result<ActionResult, Error> {
if let Some(ref input) = input {
self.input_spec
.matches(&input)
.with_kind(crate::ErrorKind::ConfigSpecViolation)?;
}
self.implementation
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::Action(action_id.clone()),
volumes,
input,
None,
)
.await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::Action))
}
}
fn display_action_result(action_result: ActionResult, matches: &ArgMatches) {
if matches.is_present("format") {
return display_serializable(action_result, matches);
}
match action_result {
match result {
ActionResult::V0(ar) => {
println!(
"{}: {}",
@@ -120,44 +47,39 @@ fn display_action_result(action_result: ActionResult, matches: &ArgMatches) {
}
}
#[command(about = "Executes an action", display(display_action_result))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ActionParams {
#[arg(id = "id")]
#[serde(rename = "id")]
pub package_id: PackageId,
#[arg(id = "action-id")]
#[serde(rename = "action-id")]
pub action_id: ActionId,
#[command(flatten)]
pub input: StdinDeserializable<Option<Config>>,
}
// impl C
// #[command(about = "Executes an action", display(display_action_result))]
#[instrument(skip_all)]
pub async fn action(
#[context] ctx: RpcContext,
#[arg(rename = "id")] pkg_id: PackageId,
#[arg(rename = "action-id")] action_id: ActionId,
#[arg(stdin, parse(parse_stdin_deserializable))] input: Option<Config>,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
ctx: RpcContext,
ActionParams {
package_id,
action_id,
input: StdinDeserializable(input),
}: ActionParams,
) -> Result<ActionResult, Error> {
let manifest = ctx
.db
.peek()
ctx.services
.get(&package_id)
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {}", package_id))?
.action(
action_id,
input.map(|c| to_value(&c)).transpose()?.unwrap_or_default(),
)
.await
.as_package_data()
.as_idx(&pkg_id)
.or_not_found(&pkg_id)?
.as_installed()
.or_not_found(&pkg_id)?
.as_manifest()
.de()?;
if let Some(action) = manifest.actions.0.get(&action_id) {
action
.execute(
&ctx,
&manifest.id,
&manifest.version,
&action_id,
&manifest.volumes,
input,
)
.await
} else {
Err(Error::new(
eyre!("Action not found in manifest"),
crate::ErrorKind::NotFound,
))
}
}

View File

@@ -1,24 +1,23 @@
use std::collections::BTreeMap;
use std::marker::PhantomData;
use chrono::{DateTime, Utc};
use clap::ArgMatches;
use clap::{ArgMatches, Parser};
use color_eyre::eyre::eyre;
use imbl_value::{json, InternedString};
use josekit::jwk::Jwk;
use rpc_toolkit::command;
use rpc_toolkit::command_helpers::prelude::{RequestParts, ResponseParts};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn_async, AnyContext, CallRemote, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sqlx::{Executor, Postgres};
use tracing::instrument;
use crate::context::{CliContext, RpcContext};
use crate::middleware::auth::{AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken};
use crate::middleware::encrypt::EncryptedWire;
use crate::middleware::auth::{
AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes,
};
use crate::prelude::*;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
use crate::util::crypto::EncryptedWire;
use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
use crate::{ensure_code, Error, ResultExt};
#[derive(Clone, Serialize, Deserialize)]
#[serde(untagged)]
@@ -61,14 +60,43 @@ impl std::str::FromStr for PasswordType {
})
}
}
#[command(subcommands(login, logout, session, reset_password, get_pubkey))]
pub fn auth() -> Result<(), Error> {
Ok(())
pub fn auth() -> ParentHandler {
ParentHandler::new()
.subcommand(
"login",
from_fn_async(login_impl)
.with_metadata("login", Value::Bool(true))
.no_cli(),
)
.subcommand("login", from_fn_async(cli_login).no_display())
.subcommand(
"logout",
from_fn_async(logout)
.with_metadata("get-session", Value::Bool(true))
.with_remote_cli::<CliContext>()
// TODO @dr-bonez
.no_display(),
)
.subcommand("session", session())
.subcommand(
"reset-password",
from_fn_async(reset_password_impl).no_cli(),
)
.subcommand(
"reset-password",
from_fn_async(cli_reset_password).no_display(),
)
.subcommand(
"get-pubkey",
from_fn_async(get_pubkey)
.with_metadata("authenticated", Value::Bool(false))
.no_display()
.with_remote_cli::<CliContext>(),
)
}
pub fn cli_metadata() -> Value {
serde_json::json!({
imbl_value::json!({
"platforms": ["cli"],
})
}
@@ -89,12 +117,17 @@ fn gen_pwd() {
.unwrap()
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct CliLoginParams {
password: Option<PasswordType>,
}
#[instrument(skip_all)]
async fn cli_login(
ctx: CliContext,
password: Option<PasswordType>,
metadata: Value,
CliLoginParams { password }: CliLoginParams,
) -> Result<(), RpcError> {
let password = if let Some(password) = password {
password.decrypt(&ctx)?
@@ -102,14 +135,16 @@ async fn cli_login(
rpassword::prompt_password("Password: ")?
};
rpc_toolkit::command_helpers::call_remote(
ctx,
ctx.call_remote(
"auth.login",
serde_json::json!({ "password": password, "metadata": metadata }),
PhantomData::<()>,
json!({
"password": password,
"metadata": {
"platforms": ["cli"],
},
}),
)
.await?
.result?;
.await?;
Ok(())
}
@@ -140,30 +175,27 @@ where
Ok(())
}
#[command(
custom_cli(cli_login(async, context(CliContext))),
display(display_none),
metadata(authenticated = false)
)]
#[instrument(skip_all)]
pub async fn login(
#[context] ctx: RpcContext,
#[request] req: &RequestParts,
#[response] res: &mut ResponseParts,
#[arg] password: Option<PasswordType>,
#[arg(
parse(parse_metadata),
default = "cli_metadata",
help = "RPC Only: This value cannot be overidden from the cli"
)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LoginParams {
password: Option<PasswordType>,
#[arg(skip = cli_metadata())]
#[serde(default)]
metadata: Value,
) -> Result<(), Error> {
}
#[instrument(skip_all)]
pub async fn login_impl(
ctx: RpcContext,
LoginParams { password, metadata }: LoginParams,
) -> Result<LoginRes, Error> {
let password = password.unwrap_or_default().decrypt(&ctx)?;
let mut handle = ctx.secret_store.acquire().await?;
check_password_against_db(handle.as_mut(), &password).await?;
let hash_token = HashSessionToken::new();
let user_agent = req.headers.get("user-agent").and_then(|h| h.to_str().ok());
let user_agent = "".to_string(); // todo!() as String;
let metadata = serde_json::to_string(&metadata).with_kind(crate::ErrorKind::Database)?;
let hash_token_hashed = hash_token.hashed();
sqlx::query!(
@@ -174,25 +206,24 @@ pub async fn login(
)
.execute(handle.as_mut())
.await?;
res.headers.insert(
"set-cookie",
hash_token.header_value()?, // Should be impossible, but don't want to panic
);
Ok(())
Ok(hash_token.to_login_res())
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LogoutParams {
session: InternedString,
}
#[command(display(display_none), metadata(authenticated = false))]
#[instrument(skip_all)]
pub async fn logout(
#[context] ctx: RpcContext,
#[request] req: &RequestParts,
ctx: RpcContext,
LogoutParams { session }: LogoutParams,
) -> Result<Option<HasLoggedOutSessions>, Error> {
let auth = match HashSessionToken::from_request_parts(req) {
Err(_) => return Ok(None),
Ok(a) => a,
};
Ok(Some(HasLoggedOutSessions::new(vec![auth], &ctx).await?))
Ok(Some(
HasLoggedOutSessions::new(vec![HashSessionToken::from_token(session)], &ctx).await?,
))
}
#[derive(Deserialize, Serialize)]
@@ -211,16 +242,31 @@ pub struct SessionList {
sessions: BTreeMap<String, Session>,
}
#[command(subcommands(list, kill))]
pub async fn session() -> Result<(), Error> {
Ok(())
pub fn session() -> ParentHandler {
ParentHandler::new()
.subcommand(
"list",
from_fn_async(list)
.with_metadata("get-session", Value::Bool(true))
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(display_sessions(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"kill",
from_fn_async(kill)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
fn display_sessions(arg: SessionList, matches: &ArgMatches) {
fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
use prettytable::*;
if matches.is_present("format") {
return display_serializable(arg, matches);
if let Some(format) = params.format {
return display_serializable(format, arg);
}
let mut table = Table::new();
@@ -249,17 +295,22 @@ fn display_sessions(arg: SessionList, matches: &ArgMatches) {
table.print_tty(false).unwrap();
}
#[command(display(display_sessions))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ListParams {
#[arg(skip)]
session: InternedString,
}
// #[command(display(display_sessions))]
#[instrument(skip_all)]
pub async fn list(
#[context] ctx: RpcContext,
#[request] req: &RequestParts,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
ctx: RpcContext,
ListParams { session, .. }: ListParams,
) -> Result<SessionList, Error> {
Ok(SessionList {
current: HashSessionToken::from_request_parts(req)?.as_hash(),
current: HashSessionToken::from_token(session).hashed().to_owned(),
sessions: sqlx::query!(
"SELECT * FROM session WHERE logged_out IS NULL OR logged_out > CURRENT_TIMESTAMP"
)
@@ -287,29 +338,50 @@ fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<String>, RpcEr
}
#[derive(Debug, Clone, Serialize, Deserialize)]
struct KillSessionId(String);
struct KillSessionId(InternedString);
impl KillSessionId {
fn new(id: String) -> Self {
Self(InternedString::from(id))
}
}
impl AsLogoutSessionId for KillSessionId {
fn as_logout_session_id(self) -> String {
fn as_logout_session_id(self) -> InternedString {
self.0
}
}
#[command(display(display_none))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct KillParams {
ids: Vec<String>,
}
#[instrument(skip_all)]
pub async fn kill(
#[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<String>,
) -> Result<(), Error> {
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId), &ctx).await?;
pub async fn kill(ctx: RpcContext, KillParams { ids }: KillParams) -> Result<(), Error> {
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId::new), &ctx).await?;
Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ResetPasswordParams {
#[arg(name = "old-password")]
old_password: Option<PasswordType>,
#[arg(name = "new-password")]
new_password: Option<PasswordType>,
}
#[instrument(skip_all)]
async fn cli_reset_password(
ctx: CliContext,
old_password: Option<PasswordType>,
new_password: Option<PasswordType>,
ResetPasswordParams {
old_password,
new_password,
}: ResetPasswordParams,
) -> Result<(), RpcError> {
let old_password = if let Some(old_password) = old_password {
old_password.decrypt(&ctx)?
@@ -331,28 +403,22 @@ async fn cli_reset_password(
new_password
};
rpc_toolkit::command_helpers::call_remote(
ctx,
ctx.call_remote(
"auth.reset-password",
serde_json::json!({ "old-password": old_password, "new-password": new_password }),
PhantomData::<()>,
imbl_value::json!({ "old-password": old_password, "new-password": new_password }),
)
.await?
.result?;
.await?;
Ok(())
}
#[command(
rename = "reset-password",
custom_cli(cli_reset_password(async, context(CliContext))),
display(display_none)
)]
#[instrument(skip_all)]
pub async fn reset_password(
#[context] ctx: RpcContext,
#[arg(rename = "old-password")] old_password: Option<PasswordType>,
#[arg(rename = "new-password")] new_password: Option<PasswordType>,
pub async fn reset_password_impl(
ctx: RpcContext,
ResetPasswordParams {
old_password,
new_password,
}: ResetPasswordParams,
) -> Result<(), Error> {
let old_password = old_password.unwrap_or_default().decrypt(&ctx)?;
let new_password = new_password.unwrap_or_default().decrypt(&ctx)?;
@@ -378,13 +444,8 @@ pub async fn reset_password(
.await
}
#[command(
rename = "get-pubkey",
display(display_none),
metadata(authenticated = false)
)]
#[instrument(skip_all)]
pub async fn get_pubkey(#[context] ctx: RpcContext) -> Result<Jwk, RpcError> {
pub async fn get_pubkey(ctx: RpcContext) -> Result<Jwk, RpcError> {
let secret = ctx.as_ref().clone();
let pub_key = secret.to_public_key()?;
Ok(pub_key)


@@ -4,14 +4,13 @@ use std::path::{Path, PathBuf};
use std::sync::Arc;
use chrono::Utc;
use clap::ArgMatches;
use clap::Parser;
use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use imbl::OrdSet;
use models::Version;
use rpc_toolkit::command;
use models::PackageId;
use serde::{Deserialize, Serialize};
use tokio::io::AsyncWriteExt;
use tokio::sync::Mutex;
use tracing::instrument;
use super::target::BackupTargetId;
@@ -21,42 +20,37 @@ use crate::backup::os::OsBackup;
use crate::backup::{BackupReport, ServerBackupReport};
use crate::context::RpcContext;
use crate::db::model::BackupProgress;
use crate::db::package::get_packages;
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::TmpMountGuard;
use crate::manager::BackupReturn;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::notifications::NotificationLevel;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::display_none;
use crate::util::io::dir_copy;
use crate::util::serde::IoFormat;
use crate::version::VersionT;
fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<OrdSet<PackageId>, Error> {
arg.split(',')
.map(|s| s.trim().parse::<PackageId>().map_err(Error::from))
.collect()
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct BackupParams {
target_id: BackupTargetId,
#[arg(long = "old-password")]
old_password: Option<crate::auth::PasswordType>,
#[arg(long = "package-ids")]
package_ids: Option<Vec<PackageId>>,
password: crate::auth::PasswordType,
}
#[command(rename = "create", display(display_none))]
#[instrument(skip(ctx, old_password, password))]
pub async fn backup_all(
#[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg(rename = "old-password", long = "old-password")] old_password: Option<
crate::auth::PasswordType,
>,
#[arg(
rename = "package-ids",
long = "package-ids",
parse(parse_comma_separated)
)]
package_ids: Option<OrdSet<PackageId>>,
#[arg] password: crate::auth::PasswordType,
ctx: RpcContext,
BackupParams {
target_id,
old_password,
package_ids,
password,
}: BackupParams,
) -> Result<(), Error> {
let db = ctx.db.peek().await;
let old_password_decrypted = old_password
.as_ref()
.unwrap_or(&password)
@@ -73,20 +67,9 @@ pub async fn backup_all(
)
.await?;
let package_ids = if let Some(ids) = package_ids {
ids.into_iter()
.flat_map(|package_id| {
let version = db
.as_package_data()
.as_idx(&package_id)?
.as_manifest()
.as_version()
.de()
.ok()?;
Some((package_id, version))
})
.collect()
ids.into_iter().collect()
} else {
get_packages(db.clone())?.into_iter().collect()
todo!("all installed packages");
};
if old_password.is_some() {
backup_guard.change_password(&password)?;
@@ -108,10 +91,7 @@ pub async fn backup_all(
attempted: true,
error: None,
},
packages: report
.into_iter()
.map(|((package_id, _), value)| (package_id, value))
.collect(),
packages: report,
},
None,
)
@@ -130,10 +110,7 @@ pub async fn backup_all(
attempted: true,
error: None,
},
packages: report
.into_iter()
.map(|((package_id, _), value)| (package_id, value))
.collect(),
packages: report,
},
None,
)
@@ -178,7 +155,7 @@ pub async fn backup_all(
#[instrument(skip(db, packages))]
async fn assure_backing_up(
db: &PatchDb,
packages: impl IntoIterator<Item = &(PackageId, Version)> + UnwindSafe + Send,
packages: impl IntoIterator<Item = &PackageId> + UnwindSafe + Send,
) -> Result<(), Error> {
db.mutate(|v| {
let backing_up = v
@@ -205,7 +182,7 @@ async fn assure_backing_up(
backing_up.ser(&Some(
packages
.into_iter()
.map(|(x, _)| (x.clone(), BackupProgress { complete: false }))
.map(|x| (x.clone(), BackupProgress { complete: false }))
.collect(),
))?;
Ok(())
@@ -217,62 +194,39 @@ async fn assure_backing_up(
async fn perform_backup(
ctx: &RpcContext,
backup_guard: BackupMountGuard<TmpMountGuard>,
package_ids: &OrdSet<(PackageId, Version)>,
) -> Result<BTreeMap<(PackageId, Version), PackageBackupReport>, Error> {
package_ids: &OrdSet<PackageId>,
) -> Result<BTreeMap<PackageId, PackageBackupReport>, Error> {
let mut backup_report = BTreeMap::new();
let backup_guard = Arc::new(Mutex::new(backup_guard));
let backup_guard = Arc::new(backup_guard);
for package_id in package_ids {
let (response, _report) = match ctx
.managers
.get(package_id)
.await
.ok_or_else(|| Error::new(eyre!("Manager not found"), ErrorKind::InvalidRequest))?
.backup(backup_guard.clone())
.await
{
BackupReturn::Ran { report, res } => (res, report),
BackupReturn::AlreadyRunning(report) => {
backup_report.insert(package_id.clone(), report);
continue;
}
BackupReturn::Error(error) => {
tracing::warn!("Backup thread error");
tracing::debug!("{error:?}");
backup_report.insert(
package_id.clone(),
PackageBackupReport {
error: Some("Backup thread error".to_owned()),
},
);
continue;
}
};
backup_report.insert(
package_id.clone(),
PackageBackupReport {
error: response.as_ref().err().map(|e| e.to_string()),
},
);
if let Ok(pkg_meta) = response {
backup_guard
.lock()
.await
.metadata
.package_backups
.insert(package_id.0.clone(), pkg_meta);
for id in package_ids {
if let Some(service) = &*ctx.services.get(id).await {
backup_report.insert(
id.clone(),
PackageBackupReport {
error: service
.backup(backup_guard.package_backup(id))
.await
.err()
.map(|e| e.to_string()),
},
);
}
}
let mut backup_guard = Arc::try_unwrap(backup_guard).map_err(|_| {
Error::new(
eyre!("leaked reference to BackupMountGuard"),
ErrorKind::Incoherent,
)
})?;
let ui = ctx.db.peek().await.into_ui().de()?;
let mut os_backup_file = AtomicFile::new(
backup_guard.lock().await.as_ref().join("os-backup.cbor"),
None::<PathBuf>,
)
.await
.with_kind(ErrorKind::Filesystem)?;
let mut os_backup_file =
AtomicFile::new(backup_guard.path().join("os-backup.cbor"), None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
os_backup_file
.write_all(&IoFormat::Cbor.to_vec(&OsBackup {
account: ctx.account.read().await.clone(),
@@ -284,11 +238,11 @@ async fn perform_backup(
.await
.with_kind(ErrorKind::Filesystem)?;
let luks_folder_old = backup_guard.lock().await.as_ref().join("luks.old");
let luks_folder_old = backup_guard.path().join("luks.old");
if tokio::fs::metadata(&luks_folder_old).await.is_ok() {
tokio::fs::remove_dir_all(&luks_folder_old).await?;
}
let luks_folder_bak = backup_guard.lock().await.as_ref().join("luks");
let luks_folder_bak = backup_guard.path().join("luks");
if tokio::fs::metadata(&luks_folder_bak).await.is_ok() {
tokio::fs::rename(&luks_folder_bak, &luks_folder_old).await?;
}
@@ -298,14 +252,6 @@ async fn perform_backup(
}
let timestamp = Some(Utc::now());
let mut backup_guard = Arc::try_unwrap(backup_guard)
.map_err(|_err| {
Error::new(
eyre!("Backup guard could not ensure that the others were dropped"),
ErrorKind::Unknown,
)
})?
.into_inner();
backup_guard.unencrypted_metadata.version = crate::version::Current::new().semver().into();
backup_guard.unencrypted_metadata.full = true;

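The rewritten `perform_backup` above swaps the `Arc<Mutex<BackupMountGuard>>` for a bare `Arc`, then reclaims exclusive ownership with `Arc::try_unwrap` once every per-package task has dropped its clone. A minimal std-only sketch of that pattern, with a hypothetical `Guard` standing in for `BackupMountGuard`:

```rust
use std::sync::Arc;

// Toy stand-in for BackupMountGuard: shared read-only during the backup
// loop, mutated only after exclusive ownership is reclaimed.
struct Guard {
    sealed: bool,
}

fn main() {
    let guard = Arc::new(Guard { sealed: false });
    {
        let shared = guard.clone(); // would be handed to a backup task
        assert!(!shared.sealed);
    } // clone dropped here; strong count back to 1
    // With exactly one strong reference left, try_unwrap yields the value;
    // a leaked clone would surface as Err, like the Incoherent error above.
    let mut guard = Arc::try_unwrap(guard)
        .map_err(|_| "leaked reference to Guard")
        .unwrap();
    guard.sealed = true; // exclusive &mut access again
    assert!(guard.sealed);
}
```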

@@ -1,33 +1,16 @@
use std::collections::{BTreeMap, BTreeSet};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::collections::BTreeMap;
use chrono::{DateTime, Utc};
use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use models::{ImageId, OptionExt};
use models::PackageId;
use reqwest::Url;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use self::target::PackageBackupInfo;
use crate::context::RpcContext;
use crate::install::PKG_ARCHIVE_DIR;
use crate::manager::manager_seed::ManagerSeed;
use crate::context::CliContext;
use crate::net::interface::InterfaceId;
use crate::net::keys::Key;
#[allow(unused_imports)]
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{Base32, Base64, IoFormat};
use crate::util::Version;
use crate::version::{Current, VersionT};
use crate::volume::{backup_dir, Volume, VolumeId, Volumes, BACKUP_DIR};
use crate::{Error, ErrorKind, ResultExt};
use crate::util::serde::{Base32, Base64};
pub mod backup_bulk;
pub mod os;
@@ -51,14 +34,16 @@ pub struct PackageBackupReport {
pub error: Option<String>,
}
#[command(subcommands(backup_bulk::backup_all, target::target))]
pub fn backup() -> Result<(), Error> {
Ok(())
}
#[command(rename = "backup", subcommands(restore::restore_packages_rpc))]
pub fn package_backup() -> Result<(), Error> {
Ok(())
// #[command(subcommands(backup_bulk::backup_all, target::target))]
pub fn backup() -> ParentHandler {
ParentHandler::new()
.subcommand(
"create",
from_fn_async(backup_bulk::backup_all)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand("target", target::target())
}
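The `#[command(subcommands(...))]` macro above gives way to an explicit `ParentHandler` that registers each subcommand by name. The dispatch shape can be modeled with std alone; this toy `Parent` only illustrates the routing idea and deliberately omits rpc_toolkit's contexts, CLI bridging, and display handling:

```rust
use std::collections::BTreeMap;

// Toy model of ParentHandler::subcommand: a parent maps subcommand names
// to boxed handlers. The Handler signature here is hypothetical.
type Handler = Box<dyn Fn(&str) -> Result<String, String>>;

struct Parent {
    subcommands: BTreeMap<&'static str, Handler>,
}

impl Parent {
    fn new() -> Self {
        Self { subcommands: BTreeMap::new() }
    }
    fn subcommand(mut self, name: &'static str, handler: Handler) -> Self {
        self.subcommands.insert(name, handler);
        self
    }
    fn handle(&self, name: &str, arg: &str) -> Result<String, String> {
        self.subcommands
            .get(name)
            .ok_or_else(|| format!("unknown subcommand: {name}"))?(arg)
    }
}

fn main() {
    let backup = Parent::new()
        .subcommand("create", Box::new(|arg: &str| Ok(format!("creating backup for {arg}"))))
        .subcommand("target", Box::new(|_: &str| Ok("listing targets".into())));
    assert_eq!(
        backup.handle("create", "nextcloud").unwrap(),
        "creating backup for nextcloud"
    );
    assert!(backup.handle("restore", "x").is_err());
}
```

The builder style mirrors the diff: `backup()` returns the fully wired parent, so the tree of routes is visible at the definition site instead of being hidden inside a proc macro.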
#[derive(Deserialize, Serialize)]
@@ -70,157 +55,3 @@ struct BackupMetadata {
pub tor_keys: BTreeMap<InterfaceId, Base32<[u8; 64]>>, // DEPRECATED
pub marketplace_url: Option<Url>,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct BackupActions {
pub create: PackageProcedure,
pub restore: PackageProcedure,
}
impl BackupActions {
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.create
.validate(eos_version, volumes, image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Create"))?;
self.restore
.validate(eos_version, volumes, image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Restore"))?;
Ok(())
}
#[instrument(skip_all)]
pub async fn create(&self, seed: Arc<ManagerSeed>) -> Result<PackageBackupInfo, Error> {
let manifest = &seed.manifest;
let mut volumes = seed.manifest.volumes.to_readonly();
let ctx = &seed.ctx;
let pkg_id = &manifest.id;
let pkg_version = &manifest.version;
volumes.insert(VolumeId::Backup, Volume::Backup { readonly: false });
let backup_dir = backup_dir(&manifest.id);
if tokio::fs::metadata(&backup_dir).await.is_err() {
tokio::fs::create_dir_all(&backup_dir).await?
}
self.create
.execute::<(), NoOutput>(
ctx,
pkg_id,
pkg_version,
ProcedureName::CreateBackup,
&volumes,
None,
None,
)
.await?
.map_err(|e| eyre!("{}", e.1))
.with_kind(crate::ErrorKind::Backup)?;
let (network_keys, tor_keys): (Vec<_>, Vec<_>) =
Key::for_package(&ctx.secret_store, pkg_id)
.await?
.into_iter()
.filter_map(|k| {
let interface = k.interface().map(|(_, i)| i)?;
Some((
(interface.clone(), Base64(k.as_bytes())),
(interface, Base32(k.tor_key().as_bytes())),
))
})
.unzip();
let marketplace_url = ctx
.db
.peek()
.await
.as_package_data()
.as_idx(&pkg_id)
.or_not_found(pkg_id)?
.expect_as_installed()?
.as_installed()
.as_marketplace_url()
.de()?;
let tmp_path = Path::new(BACKUP_DIR)
.join(pkg_id)
.join(format!("{}.s9pk", pkg_id));
let s9pk_path = ctx
.datadir
.join(PKG_ARCHIVE_DIR)
.join(pkg_id)
.join(pkg_version.as_str())
.join(format!("{}.s9pk", pkg_id));
let mut infile = File::open(&s9pk_path).await?;
let mut outfile = AtomicFile::new(&tmp_path, None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
tokio::io::copy(&mut infile, &mut *outfile)
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
format!("cp {} -> {}", s9pk_path.display(), tmp_path.display()),
)
})?;
outfile.save().await.with_kind(ErrorKind::Filesystem)?;
let timestamp = Utc::now();
let metadata_path = Path::new(BACKUP_DIR).join(pkg_id).join("metadata.cbor");
let mut outfile = AtomicFile::new(&metadata_path, None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
let network_keys = network_keys.into_iter().collect();
let tor_keys = tor_keys.into_iter().collect();
outfile
.write_all(&IoFormat::Cbor.to_vec(&BackupMetadata {
timestamp,
network_keys,
tor_keys,
marketplace_url,
})?)
.await?;
outfile.save().await.with_kind(ErrorKind::Filesystem)?;
Ok(PackageBackupInfo {
os_version: Current::new().semver().into(),
title: manifest.title.clone(),
version: pkg_version.clone(),
timestamp,
})
}
#[instrument(skip_all)]
pub async fn restore(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
) -> Result<Option<Url>, Error> {
let mut volumes = volumes.clone();
volumes.insert(VolumeId::Backup, Volume::Backup { readonly: true });
self.restore
.execute::<(), NoOutput>(
ctx,
pkg_id,
pkg_version,
ProcedureName::RestoreBackup,
&volumes,
None,
None,
)
.await?
.map_err(|e| eyre!("{}", e.1))
.with_kind(crate::ErrorKind::Restore)?;
let metadata_path = Path::new(BACKUP_DIR).join(pkg_id).join("metadata.cbor");
let metadata: BackupMetadata = IoFormat::Cbor.from_slice(
&tokio::fs::read(&metadata_path).await.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
metadata_path.display().to_string(),
)
})?,
)?;
Ok(metadata.marketplace_url)
}
}


@@ -1,55 +1,46 @@
use std::collections::BTreeMap;
use std::path::Path;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::time::Duration;
use clap::ArgMatches;
use futures::future::BoxFuture;
use futures::{stream, FutureExt, StreamExt};
use clap::Parser;
use futures::{stream, StreamExt};
use models::PackageId;
use openssl::x509::X509;
use rpc_toolkit::command;
use sqlx::Connection;
use tokio::fs::File;
use serde::{Deserialize, Serialize};
use torut::onion::OnionAddressV3;
use tracing::instrument;
use super::target::BackupTargetId;
use crate::backup::os::OsBackup;
use crate::backup::BackupMetadata;
use crate::context::rpc::RpcContextConfig;
use crate::context::{RpcContext, SetupContext};
use crate::db::model::{PackageDataEntry, PackageDataEntryRestoring, StaticFiles};
use crate::disk::mount::backup::{BackupMountGuard, PackageBackupMountGuard};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::hostname::Hostname;
use crate::init::init;
use crate::install::progress::InstallProgress;
use crate::install::{download_install_s9pk, PKG_PUBLIC_DIR};
use crate::notifications::NotificationLevel;
use crate::prelude::*;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::s9pk::reader::S9pkReader;
use crate::setup::SetupStatus;
use crate::util::display_none;
use crate::util::io::dir_size;
use crate::s9pk::S9pk;
use crate::service::service_map::DownloadInstallFuture;
use crate::util::serde::IoFormat;
use crate::volume::{backup_dir, BACKUP_DIR, PKG_VOLUME_DIR};
fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<PackageId>, Error> {
arg.split(',')
.map(|s| s.trim().parse().map_err(Error::from))
.collect()
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct RestorePackageParams {
pub ids: Vec<PackageId>,
pub target_id: BackupTargetId,
pub password: String,
}
#[command(rename = "restore", display(display_none))]
// TODO (dr): why doesn't anything use this?
// #[command(rename = "restore", display(display_none))]
#[instrument(skip(ctx, password))]
pub async fn restore_packages_rpc(
#[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<PackageId>,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg] password: String,
ctx: RpcContext,
RestorePackageParams {
ids,
target_id,
password,
}: RestorePackageParams,
) -> Result<(), Error> {
let fs = target_id
.load(ctx.secret_store.acquire().await?.as_mut())
@@ -57,114 +48,25 @@ pub async fn restore_packages_rpc(
let backup_guard =
BackupMountGuard::mount(TmpMountGuard::mount(&fs, ReadWrite).await?, &password).await?;
let (backup_guard, tasks, _) = restore_packages(&ctx, backup_guard, ids).await?;
let tasks = restore_packages(&ctx, backup_guard, ids).await?;
tokio::spawn(async move {
stream::iter(tasks.into_iter().map(|x| (x, ctx.clone())))
.for_each_concurrent(5, |(res, ctx)| async move {
match res.await {
(Ok(_), _) => (),
(Err(err), package_id) => {
if let Err(err) = ctx
.notification_manager
.notify(
ctx.db.clone(),
Some(package_id.clone()),
NotificationLevel::Error,
"Restoration Failure".to_string(),
format!("Error restoring package {}: {}", package_id, err),
(),
None,
)
.await
{
tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err);
};
tracing::error!("Error restoring package {}: {}", package_id, err);
stream::iter(tasks)
.for_each_concurrent(5, |(id, res)| async move {
match async { res.await?.await }.await {
Ok(_) => (),
Err(err) => {
tracing::error!("Error restoring package {}: {}", id, err);
tracing::debug!("{:?}", err);
}
}
})
.await;
if let Err(e) = backup_guard.unmount().await {
tracing::error!("Error unmounting backup drive: {}", e);
tracing::debug!("{:?}", e);
}
});
Ok(())
}
async fn approximate_progress(
rpc_ctx: &RpcContext,
progress: &mut ProgressInfo,
) -> Result<(), Error> {
for (id, size) in &mut progress.target_volume_size {
let dir = rpc_ctx.datadir.join(PKG_VOLUME_DIR).join(id).join("data");
if tokio::fs::metadata(&dir).await.is_err() {
*size = 0;
} else {
*size = dir_size(&dir, None).await?;
}
}
Ok(())
}
async fn approximate_progress_loop(
ctx: &SetupContext,
rpc_ctx: &RpcContext,
mut starting_info: ProgressInfo,
) {
loop {
if let Err(e) = approximate_progress(rpc_ctx, &mut starting_info).await {
tracing::error!("Failed to approximate restore progress: {}", e);
tracing::debug!("{:?}", e);
} else {
*ctx.setup_status.write().await = Some(Ok(starting_info.flatten()));
}
tokio::time::sleep(Duration::from_secs(1)).await;
}
}
#[derive(Debug, Default)]
struct ProgressInfo {
package_installs: BTreeMap<PackageId, Arc<InstallProgress>>,
src_volume_size: BTreeMap<PackageId, u64>,
target_volume_size: BTreeMap<PackageId, u64>,
}
impl ProgressInfo {
fn flatten(&self) -> SetupStatus {
let mut total_bytes = 0;
let mut bytes_transferred = 0;
for progress in self.package_installs.values() {
total_bytes += ((progress.size.unwrap_or(0) as f64) * 2.2) as u64;
bytes_transferred += progress.downloaded.load(Ordering::SeqCst);
bytes_transferred += ((progress.validated.load(Ordering::SeqCst) as f64) * 0.2) as u64;
bytes_transferred += progress.unpacked.load(Ordering::SeqCst);
}
for size in self.src_volume_size.values() {
total_bytes += *size;
}
for size in self.target_volume_size.values() {
bytes_transferred += *size;
}
if bytes_transferred > total_bytes {
bytes_transferred = total_bytes;
}
SetupStatus {
total_bytes: Some(total_bytes),
bytes_transferred,
complete: false,
}
}
}
#[instrument(skip(ctx))]
pub async fn recover_full_embassy(
ctx: SetupContext,
@@ -179,7 +81,7 @@ pub async fn recover_full_embassy(
)
.await?;
let os_backup_path = backup_guard.as_ref().join("os-backup.cbor");
let os_backup_path = backup_guard.path().join("os-backup.cbor");
let mut os_backup: OsBackup = IoFormat::Cbor.from_slice(
&tokio::fs::read(&os_backup_path)
.await
@@ -199,11 +101,9 @@ pub async fn recover_full_embassy(
secret_store.close().await;
let cfg = RpcContextConfig::load(ctx.config_path.clone()).await?;
init(&ctx.config).await?;
init(&cfg).await?;
let rpc_ctx = RpcContext::init(ctx.config_path.clone(), disk_guid.clone()).await?;
let rpc_ctx = RpcContext::init(&ctx.config, disk_guid.clone()).await?;
let ids: Vec<_> = backup_guard
.metadata
@@ -211,37 +111,19 @@ pub async fn recover_full_embassy(
.keys()
.cloned()
.collect();
let (backup_guard, tasks, progress_info) =
restore_packages(&rpc_ctx, backup_guard, ids).await?;
let task_consumer_rpc_ctx = rpc_ctx.clone();
tokio::select! {
_ = async move {
stream::iter(tasks.into_iter().map(|x| (x, task_consumer_rpc_ctx.clone())))
.for_each_concurrent(5, |(res, ctx)| async move {
match res.await {
(Ok(_), _) => (),
(Err(err), package_id) => {
if let Err(err) = ctx.notification_manager.notify(
ctx.db.clone(),
Some(package_id.clone()),
NotificationLevel::Error,
"Restoration Failure".to_string(), format!("Error restoring package {}: {}", package_id,err), (), None).await{
tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err);
};
tracing::error!("Error restoring package {}: {}", package_id, err);
tracing::debug!("{:?}", err);
},
}
}).await;
let tasks = restore_packages(&rpc_ctx, backup_guard, ids).await?;
stream::iter(tasks)
.for_each_concurrent(5, |(id, res)| async move {
match async { res.await?.await }.await {
Ok(_) => (),
Err(err) => {
tracing::error!("Error restoring package {}: {}", id, err);
tracing::debug!("{:?}", err);
}
}
})
.await;
} => {
},
_ = approximate_progress_loop(&ctx, &rpc_ctx, progress_info) => unreachable!(concat!(module_path!(), "::approximate_progress_loop should not terminate")),
}
backup_guard.unmount().await?;
rpc_ctx.shutdown().await?;
Ok((
@@ -257,205 +139,25 @@ async fn restore_packages(
ctx: &RpcContext,
backup_guard: BackupMountGuard<TmpMountGuard>,
ids: Vec<PackageId>,
) -> Result<
(
BackupMountGuard<TmpMountGuard>,
Vec<BoxFuture<'static, (Result<(), Error>, PackageId)>>,
ProgressInfo,
),
Error,
> {
let guards = assure_restoring(ctx, ids, &backup_guard).await?;
let mut progress_info = ProgressInfo::default();
let mut tasks = Vec::with_capacity(guards.len());
for (manifest, guard) in guards {
let id = manifest.id.clone();
let (progress, task) = restore_package(ctx.clone(), manifest, guard).await?;
progress_info
.package_installs
.insert(id.clone(), progress.clone());
progress_info
.src_volume_size
.insert(id.clone(), dir_size(backup_dir(&id), None).await?);
progress_info.target_volume_size.insert(id.clone(), 0);
let package_id = id.clone();
tasks.push(
async move {
if let Err(e) = task.await {
tracing::error!("Error restoring package {}: {}", id, e);
tracing::debug!("{:?}", e);
Err(e)
} else {
Ok(())
}
}
.map(|x| (x, package_id))
.boxed(),
);
}
Ok((backup_guard, tasks, progress_info))
}
#[instrument(skip(ctx, backup_guard))]
async fn assure_restoring(
ctx: &RpcContext,
ids: Vec<PackageId>,
backup_guard: &BackupMountGuard<TmpMountGuard>,
) -> Result<Vec<(Manifest, PackageBackupMountGuard)>, Error> {
let mut guards = Vec::with_capacity(ids.len());
let mut insert_packages = BTreeMap::new();
) -> Result<BTreeMap<PackageId, DownloadInstallFuture>, Error> {
let backup_guard = Arc::new(backup_guard);
let mut tasks = BTreeMap::new();
for id in ids {
let peek = ctx.db.peek().await;
let model = peek.as_package_data().as_idx(&id);
if !model.is_none() {
return Err(Error::new(
eyre!("Can't restore over existing package: {}", id),
crate::ErrorKind::InvalidRequest,
));
}
let guard = backup_guard.mount_package_backup(&id).await?;
let s9pk_path = Path::new(BACKUP_DIR).join(&id).join(format!("{}.s9pk", id));
let mut rdr = S9pkReader::open(&s9pk_path, false).await?;
let manifest = rdr.manifest().await?;
let version = manifest.version.clone();
let progress = Arc::new(InstallProgress::new(Some(
tokio::fs::metadata(&s9pk_path).await?.len(),
)));
let public_dir_path = ctx
.datadir
.join(PKG_PUBLIC_DIR)
.join(&id)
.join(version.as_str());
tokio::fs::create_dir_all(&public_dir_path).await?;
let license_path = public_dir_path.join("LICENSE.md");
let mut dst = File::create(&license_path).await?;
tokio::io::copy(&mut rdr.license().await?, &mut dst).await?;
dst.sync_all().await?;
let instructions_path = public_dir_path.join("INSTRUCTIONS.md");
let mut dst = File::create(&instructions_path).await?;
tokio::io::copy(&mut rdr.instructions().await?, &mut dst).await?;
dst.sync_all().await?;
let icon_path = Path::new("icon").with_extension(&manifest.assets.icon_type());
let icon_path = public_dir_path.join(&icon_path);
let mut dst = File::create(&icon_path).await?;
tokio::io::copy(&mut rdr.icon().await?, &mut dst).await?;
dst.sync_all().await?;
insert_packages.insert(
id.clone(),
PackageDataEntry::Restoring(PackageDataEntryRestoring {
install_progress: progress.clone(),
static_files: StaticFiles::local(&id, &version, manifest.assets.icon_type()),
manifest: manifest.clone(),
}),
);
guards.push((manifest, guard));
}
ctx.db
.mutate(|db| {
for (id, package) in insert_packages {
db.as_package_data_mut().insert(&id, &package)?;
}
Ok(())
})
.await?;
Ok(guards)
}
#[instrument(skip(ctx, guard))]
async fn restore_package<'a>(
ctx: RpcContext,
manifest: Manifest,
guard: PackageBackupMountGuard,
) -> Result<(Arc<InstallProgress>, BoxFuture<'static, Result<(), Error>>), Error> {
let id = manifest.id.clone();
let s9pk_path = Path::new(BACKUP_DIR)
.join(&manifest.id)
.join(format!("{}.s9pk", id));
let metadata_path = Path::new(BACKUP_DIR).join(&id).join("metadata.cbor");
let metadata: BackupMetadata = IoFormat::Cbor.from_slice(
&tokio::fs::read(&metadata_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, metadata_path.display().to_string()))?,
)?;
let mut secrets = ctx.secret_store.acquire().await?;
let mut secrets_tx = secrets.begin().await?;
for (iface, key) in metadata.network_keys {
let k = key.0.as_slice();
sqlx::query!(
"INSERT INTO network_keys (package, interface, key) VALUES ($1, $2, $3) ON CONFLICT (package, interface) DO NOTHING",
id.to_string(),
iface.to_string(),
k,
)
.execute(secrets_tx.as_mut()).await?;
}
// DEPRECATED
for (iface, key) in metadata.tor_keys {
let k = key.0.as_slice();
sqlx::query!(
"INSERT INTO tor (package, interface, key) VALUES ($1, $2, $3) ON CONFLICT (package, interface) DO NOTHING",
id.to_string(),
iface.to_string(),
k,
)
.execute(secrets_tx.as_mut()).await?;
}
secrets_tx.commit().await?;
drop(secrets);
let len = tokio::fs::metadata(&s9pk_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, s9pk_path.display().to_string()))?
.len();
let file = File::open(&s9pk_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, s9pk_path.display().to_string()))?;
let progress = InstallProgress::new(Some(len));
let marketplace_url = metadata.marketplace_url;
let progress = Arc::new(progress);
ctx.db
.mutate(|db| {
db.as_package_data_mut().insert(
&id,
&PackageDataEntry::Restoring(PackageDataEntryRestoring {
install_progress: progress.clone(),
static_files: StaticFiles::local(
&id,
&manifest.version,
manifest.assets.icon_type(),
),
manifest: manifest.clone(),
}),
let backup_dir = backup_guard.clone().package_backup(&id);
let task = ctx
.services
.install(
ctx.clone(),
S9pk::open(
backup_dir.path().join(&id).with_extension("s9pk"),
Some(&id),
)
.await?,
Some(backup_dir),
)
})
.await?;
Ok((
progress.clone(),
async move {
download_install_s9pk(ctx, manifest, marketplace_url, progress, file, None).await?;
.await?;
tasks.insert(id, task);
}
guard.unmount().await?;
Ok(())
}
.boxed(),
))
Ok(tasks)
}


@@ -1,19 +1,19 @@
use std::path::{Path, PathBuf};
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::TryStreamExt;
use rpc_toolkit::command;
use rpc_toolkit::{command, from_fn_async, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use sqlx::{Executor, Postgres};
use super::{BackupTarget, BackupTargetId};
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::filesystem::ReadOnly;
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::{recovery_info, EmbassyOsRecoveryInfo};
use crate::prelude::*;
use crate::util::display_none;
use crate::util::serde::KeyVal;
#[derive(Debug, Deserialize, Serialize)]
@@ -26,18 +26,46 @@ pub struct CifsBackupTarget {
embassy_os: Option<EmbassyOsRecoveryInfo>,
}
#[command(subcommands(add, update, remove))]
pub fn cifs() -> Result<(), Error> {
Ok(())
pub fn cifs() -> ParentHandler {
ParentHandler::new()
.subcommand(
"add",
from_fn_async(add)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"update",
from_fn_async(update)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"remove",
from_fn_async(remove)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct AddParams {
pub hostname: String,
pub path: PathBuf,
pub username: String,
pub password: Option<String>,
}
#[command(display(display_none))]
pub async fn add(
#[context] ctx: RpcContext,
#[arg] hostname: String,
#[arg] path: PathBuf,
#[arg] username: String,
#[arg] password: Option<String>,
ctx: RpcContext,
AddParams {
hostname,
path,
username,
password,
}: AddParams,
) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
let cifs = Cifs {
hostname,
@@ -46,7 +74,7 @@ pub async fn add(
password,
};
let guard = TmpMountGuard::mount(&cifs, ReadOnly).await?;
let embassy_os = recovery_info(&guard).await?;
let embassy_os = recovery_info(guard.path()).await?;
guard.unmount().await?;
let path_string = Path::new("/").join(&cifs.path).display().to_string();
let id: i32 = sqlx::query!(
@@ -70,14 +98,26 @@ pub async fn add(
})
}
#[command(display(display_none))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UpdateParams {
pub id: BackupTargetId,
pub hostname: String,
pub path: PathBuf,
pub username: String,
pub password: Option<String>,
}
pub async fn update(
#[context] ctx: RpcContext,
#[arg] id: BackupTargetId,
#[arg] hostname: String,
#[arg] path: PathBuf,
#[arg] username: String,
#[arg] password: Option<String>,
ctx: RpcContext,
UpdateParams {
id,
hostname,
path,
username,
password,
}: UpdateParams,
) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
let id = if let BackupTargetId::Cifs { id } = id {
id
@@ -94,7 +134,7 @@ pub async fn update(
password,
};
let guard = TmpMountGuard::mount(&cifs, ReadOnly).await?;
let embassy_os = recovery_info(&guard).await?;
let embassy_os = recovery_info(guard.path()).await?;
guard.unmount().await?;
let path_string = Path::new("/").join(&cifs.path).display().to_string();
if sqlx::query!(
@@ -127,8 +167,14 @@ pub async fn update(
})
}
#[command(display(display_none))]
pub async fn remove(#[context] ctx: RpcContext, #[arg] id: BackupTargetId) -> Result<(), Error> {
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct RemoveParams {
pub id: BackupTargetId,
}
pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Result<(), Error> {
let id = if let BackupTargetId::Cifs { id } = id {
id
} else {
@@ -189,7 +235,7 @@ where
};
let embassy_os = async {
let guard = TmpMountGuard::mount(&mount_info, ReadOnly).await?;
let embassy_os = recovery_info(&guard).await?;
let embassy_os = recovery_info(guard.path()).await?;
guard.unmount().await?;
Ok::<_, Error>(embassy_os)
}


@@ -1,13 +1,14 @@
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use async_trait::async_trait;
use chrono::{DateTime, Utc};
use clap::ArgMatches;
use clap::builder::ValueParserFactory;
use clap::Parser;
use color_eyre::eyre::eyre;
use digest::generic_array::GenericArray;
use digest::OutputSizeUser;
use rpc_toolkit::command;
use models::PackageId;
use rpc_toolkit::{command, from_fn_async, AnyContext, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use sqlx::{Executor, Postgres};
@@ -15,17 +16,19 @@ use tokio::sync::Mutex;
use tracing::instrument;
use self::cifs::CifsBackupTarget;
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::filesystem::{FileSystem, MountType, ReadWrite};
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::PartitionInfo;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{deserialize_from_str, display_serializable, serialize_display};
use crate::util::{display_none, Version};
use crate::util::clap::FromStrParser;
use crate::util::serde::{
deserialize_from_str, display_serializable, serialize_display, HandlerExtSerde, WithIoFormat,
};
use crate::util::Version;
pub mod cifs;
@@ -84,6 +87,12 @@ impl std::str::FromStr for BackupTargetId {
}
}
}
impl ValueParserFactory for BackupTargetId {
type Parser = FromStrParser<Self>;
fn value_parser() -> Self::Parser {
FromStrParser::new()
}
}
impl<'de> Deserialize<'de> for BackupTargetId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
@@ -108,9 +117,8 @@ pub enum BackupTargetFS {
Disk(BlockDev<PathBuf>),
Cifs(Cifs),
}
#[async_trait]
impl FileSystem for BackupTargetFS {
async fn mount<P: AsRef<Path> + Send + Sync>(
async fn mount<P: AsRef<Path> + Send>(
&self,
mountpoint: P,
mount_type: MountType,
@@ -130,15 +138,29 @@ impl FileSystem for BackupTargetFS {
}
}
#[command(subcommands(cifs::cifs, list, info, mount, umount))]
pub fn target() -> Result<(), Error> {
Ok(())
// #[command(subcommands(cifs::cifs, list, info, mount, umount))]
pub fn target() -> ParentHandler {
ParentHandler::new()
.subcommand("cifs", cifs::cifs())
.subcommand(
"list",
from_fn_async(list)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"info",
from_fn_async(info)
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|params, info| {
Ok(display_backup_info(params.params, info))
})
.with_remote_cli::<CliContext>(),
)
}
#[command(display(display_serializable))]
pub async fn list(
#[context] ctx: RpcContext,
) -> Result<BTreeMap<BackupTargetId, BackupTarget>, Error> {
// #[command(display(display_serializable))]
pub async fn list(ctx: RpcContext) -> Result<BTreeMap<BackupTargetId, BackupTarget>, Error> {
let mut sql_handle = ctx.secret_store.acquire().await?;
let (disks_res, cifs) = tokio::try_join!(
crate::disk::util::list(&ctx.os_partitions),
@@ -187,11 +209,11 @@ pub struct PackageBackupInfo {
pub timestamp: DateTime<Utc>,
}
fn display_backup_info(info: BackupInfo, matches: &ArgMatches) {
fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) {
use prettytable::*;
if matches.is_present("format") {
return display_serializable(info, matches);
if let Some(format) = params.format {
return display_serializable(format, info);
}
let mut table = Table::new();
@@ -223,12 +245,21 @@ fn display_backup_info(info: BackupInfo, matches: &ArgMatches) {
table.print_tty(false).unwrap();
}
#[command(display(display_backup_info))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct InfoParams {
target_id: BackupTargetId,
password: String,
}
#[instrument(skip(ctx, password))]
pub async fn info(
#[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg] password: String,
ctx: RpcContext,
InfoParams {
target_id,
password,
}: InfoParams,
) -> Result<BackupInfo, Error> {
let guard = BackupMountGuard::mount(
TmpMountGuard::mount(
@@ -254,17 +285,26 @@ lazy_static::lazy_static! {
Mutex::new(BTreeMap::new());
}
#[command]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct MountParams {
target_id: BackupTargetId,
password: String,
}
#[instrument(skip_all)]
pub async fn mount(
#[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg] password: String,
ctx: RpcContext,
MountParams {
target_id,
password,
}: MountParams,
) -> Result<String, Error> {
let mut mounts = USER_MOUNTS.lock().await;
if let Some(existing) = mounts.get(&target_id) {
return Ok(existing.as_ref().display().to_string());
return Ok(existing.path().display().to_string());
}
let guard = BackupMountGuard::mount(
@@ -280,19 +320,23 @@ pub async fn mount(
)
.await?;
let res = guard.as_ref().display().to_string();
let res = guard.path().display().to_string();
mounts.insert(target_id, guard);
Ok(res)
}
#[command(display(display_none))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UmountParams {
target_id: Option<BackupTargetId>,
}
#[instrument(skip_all)]
pub async fn umount(
#[context] _ctx: RpcContext,
#[arg(rename = "target-id")] target_id: Option<BackupTargetId>,
) -> Result<(), Error> {
let mut mounts = USER_MOUNTS.lock().await;
pub async fn umount(_: RpcContext, UmountParams { target_id }: UmountParams) -> Result<(), Error> {
let mut mounts = USER_MOUNTS.lock().await; // TODO: move to context
if let Some(target_id) = target_id {
if let Some(existing) = mounts.remove(&target_id) {
existing.unmount().await?;
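
The `USER_MOUNTS` map above makes `mount` idempotent: an already-mounted target returns its existing path instead of mounting twice, and `umount` drops the entry. A stdlib-only sketch of that pattern, with the mount guard reduced to its path string (a simplification; the real code holds a `BackupMountGuard`):

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;

// Global registry keyed by target id; the value stands in for a mount guard.
static USER_MOUNTS: Mutex<BTreeMap<String, String>> = Mutex::new(BTreeMap::new());

fn mount(target_id: &str, path: &str) -> String {
    let mut mounts = USER_MOUNTS.lock().unwrap();
    if let Some(existing) = mounts.get(target_id) {
        return existing.clone(); // already mounted: reuse the existing path
    }
    mounts.insert(target_id.to_string(), path.to_string());
    path.to_string()
}

fn umount(target_id: &str) -> bool {
    // Dropping the entry stands in for unmounting the guard.
    USER_MOUNTS.lock().unwrap().remove(target_id).is_some()
}

fn main() {
    assert_eq!(mount("cifs-1", "/media/backup-1"), "/media/backup-1");
    assert_eq!(mount("cifs-1", "/media/other"), "/media/backup-1"); // reused
    assert!(umount("cifs-1"));
    assert!(!umount("cifs-1")); // nothing left to unmount
}
```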


@@ -1,163 +0,0 @@
use avahi_sys::{
self, avahi_client_errno, avahi_entry_group_add_service, avahi_entry_group_commit,
avahi_strerror, AvahiClient,
};
fn log_str_error(action: &str, e: i32) {
unsafe {
let e_str = avahi_strerror(e);
eprintln!(
"Could not {}: {:?}",
action,
std::ffi::CStr::from_ptr(e_str)
);
}
}
pub fn main() {
let aliases: Vec<_> = std::env::args().skip(1).collect();
unsafe {
let simple_poll = avahi_sys::avahi_simple_poll_new();
let poll = avahi_sys::avahi_simple_poll_get(simple_poll);
let mut box_err = Box::pin(0 as i32);
let err_c: *mut i32 = box_err.as_mut().get_mut();
let avahi_client = avahi_sys::avahi_client_new(
poll,
avahi_sys::AvahiClientFlags::AVAHI_CLIENT_NO_FAIL,
Some(client_callback),
std::ptr::null_mut(),
err_c,
);
if avahi_client == std::ptr::null_mut::<AvahiClient>() {
log_str_error("create Avahi client", *box_err);
panic!("Failed to create Avahi Client");
}
let group = avahi_sys::avahi_entry_group_new(
avahi_client,
Some(entry_group_callback),
std::ptr::null_mut(),
);
if group == std::ptr::null_mut() {
log_str_error("create Avahi entry group", avahi_client_errno(avahi_client));
panic!("Failed to create Avahi Entry Group");
}
let mut hostname_buf = vec![0];
let hostname_raw = avahi_sys::avahi_client_get_host_name_fqdn(avahi_client);
hostname_buf.extend_from_slice(std::ffi::CStr::from_ptr(hostname_raw).to_bytes_with_nul());
let buflen = hostname_buf.len();
debug_assert!(hostname_buf.ends_with(b".local\0"));
debug_assert!(!hostname_buf[..(buflen - 7)].contains(&b'.'));
// assume fixed length prefix on hostname due to local address
hostname_buf[0] = (buflen - 8) as u8; // set the prefix length to len - 8 (leading byte, .local, nul) for the main address
hostname_buf[buflen - 7] = 5; // set the prefix length to 5 for "local"
let mut res;
let http_tcp_cstr =
std::ffi::CString::new("_http._tcp").expect("Could not cast _http._tcp to c string");
res = avahi_entry_group_add_service(
group,
avahi_sys::AVAHI_IF_UNSPEC,
avahi_sys::AVAHI_PROTO_UNSPEC,
avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_USE_MULTICAST,
hostname_raw,
http_tcp_cstr.as_ptr(),
std::ptr::null(),
std::ptr::null(),
443,
// This final null argument is not reflected in the bindings' type signature:
// the underlying C function is variadic, taking any number of trailing
// arguments that specify TXT records for this service entry. It stops reading
// arguments off the stack only when it dereferences a null pointer, so
// omitting this terminator causes segfaults or other undefined behavior.
std::ptr::null::<libc::c_char>(),
);
if res < avahi_sys::AVAHI_OK {
log_str_error("add service to Avahi entry group", res);
panic!("Failed to load Avahi services");
}
eprintln!("Published {:?}", std::ffi::CStr::from_ptr(hostname_raw));
for alias in aliases {
let lan_address = alias + ".local";
let lan_address_ptr = std::ffi::CString::new(lan_address)
.expect("Could not cast lan address to c string");
res = avahi_sys::avahi_entry_group_add_record(
group,
avahi_sys::AVAHI_IF_UNSPEC,
avahi_sys::AVAHI_PROTO_UNSPEC,
avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_USE_MULTICAST
| avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_ALLOW_MULTIPLE,
lan_address_ptr.as_ptr(),
avahi_sys::AVAHI_DNS_CLASS_IN as u16,
avahi_sys::AVAHI_DNS_TYPE_CNAME as u16,
avahi_sys::AVAHI_DEFAULT_TTL,
hostname_buf.as_ptr().cast(),
hostname_buf.len(),
);
if res < avahi_sys::AVAHI_OK {
log_str_error("add CNAME record to Avahi entry group", res);
panic!("Failed to load Avahi services");
}
eprintln!("Published {:?}", lan_address_ptr);
}
let commit_err = avahi_entry_group_commit(group);
if commit_err < avahi_sys::AVAHI_OK {
log_str_error("commit Avahi entry group", commit_err);
panic!("Failed to load Avahi services: commit");
}
}
std::thread::park()
}
unsafe extern "C" fn entry_group_callback(
_group: *mut avahi_sys::AvahiEntryGroup,
state: avahi_sys::AvahiEntryGroupState,
_userdata: *mut core::ffi::c_void,
) {
match state {
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_FAILURE => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_FAILURE");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_COLLISION => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_COLLISION");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_UNCOMMITED => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_UNCOMMITED");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_ESTABLISHED => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_ESTABLISHED");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_REGISTERING => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_REGISTERING");
}
other => {
eprintln!("AvahiCallback: EntryGroupState = {}", other);
}
}
}
unsafe extern "C" fn client_callback(
_group: *mut avahi_sys::AvahiClient,
state: avahi_sys::AvahiClientState,
_userdata: *mut core::ffi::c_void,
) {
match state {
avahi_sys::AvahiClientState_AVAHI_CLIENT_FAILURE => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_FAILURE");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_RUNNING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_RUNNING");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_CONNECTING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_CONNECTING");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_COLLISION => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_COLLISION");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_REGISTERING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_REGISTERING");
}
other => {
eprintln!("AvahiCallback: ClientState = {}", other);
}
}
}
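
The hostname-buffer rewriting above (reserving a leading byte, then overwriting it and the dot with label lengths) converts `myhost.local` into DNS wire format for use as CNAME record data. A hypothetical standalone version of that encoding, under the same assumptions the `debug_assert!`s enforce (a single label followed by `.local`):

```rust
// Encode "<host>.local" as a DNS wire-format name: each label is preceded
// by its length byte, and the name ends with a zero byte (the root label).
fn to_dns_wire(fqdn: &str) -> Option<Vec<u8>> {
    let host = fqdn.strip_suffix(".local")?;
    if host.is_empty() || host.contains('.') || host.len() > 63 {
        return None; // single label only; DNS labels max out at 63 bytes
    }
    let mut buf = Vec::with_capacity(fqdn.len() + 2);
    buf.push(host.len() as u8); // length prefix for the host label
    buf.extend_from_slice(host.as_bytes());
    buf.push(5); // length prefix for "local"
    buf.extend_from_slice(b"local");
    buf.push(0); // terminating root label
    Some(buf)
}

fn main() {
    let wire = to_dns_wire("myhost.local").unwrap();
    assert_eq!(wire.as_slice(), &b"\x06myhost\x05local\x00"[..]);
    assert!(to_dns_wire("a.b.local").is_none()); // multi-label prefix rejected
}
```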


@@ -0,0 +1,38 @@
use std::ffi::OsString;
use rpc_toolkit::CliApp;
use serde_json::Value;
use crate::service::cli::{ContainerCliContext, ContainerClientConfig};
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
pub fn main(args: impl IntoIterator<Item = OsString>) {
EmbassyLogger::init();
if let Err(e) = CliApp::new(
|cfg: ContainerClientConfig| Ok(ContainerCliContext::init(cfg)),
crate::service::service_effect_handler::service_effect_handler(),
)
.run(args)
{
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => {
if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
}


@@ -1,45 +1,54 @@
use std::collections::VecDeque;
use std::ffi::OsString;
use std::path::Path;
#[cfg(feature = "avahi-alias")]
pub mod avahi_alias;
#[cfg(feature = "container-runtime")]
pub mod container_cli;
pub mod deprecated;
#[cfg(feature = "cli")]
pub mod start_cli;
#[cfg(feature = "daemon")]
pub mod start_init;
#[cfg(feature = "sdk")]
pub mod start_sdk;
#[cfg(feature = "daemon")]
pub mod startd;
fn select_executable(name: &str) -> Option<fn()> {
fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
match name {
#[cfg(feature = "avahi-alias")]
"avahi-alias" => Some(avahi_alias::main),
#[cfg(feature = "cli")]
"start-cli" => Some(start_cli::main),
#[cfg(feature = "sdk")]
"start-sdk" => Some(start_sdk::main),
#[cfg(feature = "container-runtime")]
"start-cli" => Some(container_cli::main),
#[cfg(feature = "daemon")]
"startd" => Some(startd::main),
"embassy-cli" => Some(|| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|| deprecated::renamed("embassy-sdk", "start-sdk")),
"embassyd" => Some(|| deprecated::renamed("embassyd", "startd")),
"embassy-init" => Some(|| deprecated::removed("embassy-init")),
"embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|_| deprecated::renamed("embassy-sdk", "start-sdk")),
"embassyd" => Some(|_| deprecated::renamed("embassyd", "startd")),
"embassy-init" => Some(|_| deprecated::removed("embassy-init")),
_ => None,
}
}
pub fn startbox() {
let args = std::env::args().take(2).collect::<Vec<_>>();
let executable = args
.get(0)
.and_then(|s| Path::new(&*s).file_name())
.and_then(|s| s.to_str());
if let Some(x) = executable.and_then(|s| select_executable(&s)) {
x()
} else {
eprintln!("unknown executable: {}", executable.unwrap_or("N/A"));
std::process::exit(1);
let mut args = std::env::args_os().collect::<VecDeque<_>>();
for _ in 0..2 {
if let Some(s) = args.pop_front() {
if let Some(x) = Path::new(&*s)
.file_name()
.and_then(|s| s.to_str())
.and_then(|s| select_executable(&s))
{
args.push_front(s);
return x(args);
}
}
}
let args = std::env::args().collect::<VecDeque<_>>();
eprintln!(
"unknown executable: {}",
args.get(1)
.or_else(|| args.get(0))
.map(|s| s.as_str())
.unwrap_or("N/A")
);
std::process::exit(1);
}
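
The reworked `startbox` above is a multi-call binary: it tries argv[0]'s file name first (so a `start-cli` symlink works), then the first argument (so `startbox start-cli ...` works too). A minimal sketch of that two-step dispatch; the `dispatch` helper is illustrative, and the returned strings stand in for the entry-point functions:

```rust
use std::path::Path;

// Names taken from the `select_executable` match above (feature gates omitted).
fn select_executable(name: &str) -> Option<&'static str> {
    match name {
        "avahi-alias" => Some("avahi-alias"),
        "start-cli" => Some("start-cli"),
        "startd" => Some("startd"),
        _ => None,
    }
}

// Check at most the first two arguments: the invoked binary's file name,
// then the first subcommand argument.
fn dispatch(args: &[&str]) -> Option<&'static str> {
    args.iter().take(2).find_map(|arg| {
        Path::new(arg)
            .file_name()
            .and_then(|s| s.to_str())
            .and_then(select_executable)
    })
}

fn main() {
    assert_eq!(dispatch(&["/usr/bin/startd", "--config", "x"]), Some("startd"));
    assert_eq!(dispatch(&["/usr/bin/startbox", "start-cli"]), Some("start-cli"));
    assert_eq!(dispatch(&["/usr/bin/unknown"]), None);
}
```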


@@ -1,62 +1,39 @@
use clap::Arg;
use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError;
use std::ffi::OsString;
use rpc_toolkit::CliApp;
use serde_json::Value;
use crate::context::config::ClientConfig;
use crate::context::CliContext;
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> {
run_cli!({
command: crate::main_api,
app: app => app
.name("StartOS CLI")
.version(&**VERSION_STRING)
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
)
.arg(Arg::with_name("host").long("host").short('h').takes_value(true))
.arg(Arg::with_name("proxy").long("proxy").short('p').takes_value(true)),
context: matches => {
EmbassyLogger::init();
CliContext::init(matches)?
},
exit: |e: RpcError| {
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => if let Some(Value::String(s)) = o.get("details") {
pub fn main(args: impl IntoIterator<Item = OsString>) {
EmbassyLogger::init();
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::main_api(),
)
.run(args)
{
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => {
if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
});
Ok(())
}
pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
std::process::exit(e.code);
}
}


@@ -1,142 +0,0 @@
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, run_cli, Context};
use serde_json::Value;
use crate::procedure::js_scripts::ExecuteArgs;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
struct DenoContext;
impl Context for DenoContext {}
#[command(subcommands(execute, sandbox))]
fn deno_api() -> Result<(), Error> {
Ok(())
}
#[command(cli_only, display(display_serializable))]
async fn execute(
#[arg(stdin, parse(parse_stdin_deserializable))] arg: ExecuteArgs,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<Result<Value, (i32, String)>, Error> {
let ExecuteArgs {
procedure,
directory,
pkg_id,
pkg_version,
name,
volumes,
input,
} = arg;
PackageLogger::init(&pkg_id);
// procedure
// .execute_impl(&directory, &pkg_id, &pkg_version, name, &volumes, input)
// .await
todo!("@DRB Remove")
}
#[command(cli_only, display(display_serializable))]
async fn sandbox(
#[arg(stdin, parse(parse_stdin_deserializable))] arg: ExecuteArgs,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<Result<Value, (i32, String)>, Error> {
let ExecuteArgs {
procedure,
directory,
pkg_id,
pkg_version,
name,
volumes,
input,
} = arg;
PackageLogger::init(&pkg_id);
// procedure
// .sandboxed_impl(&directory, &pkg_id, &pkg_version, &volumes, input, name)
// .await
todo!("@DRB Remove")
}
use tracing::Subscriber;
use tracing_subscriber::util::SubscriberInitExt;
#[derive(Clone)]
struct PackageLogger {}
impl PackageLogger {
fn base_subscriber(id: &PackageId) -> impl Subscriber {
use tracing_error::ErrorLayer;
use tracing_subscriber::prelude::*;
use tracing_subscriber::{fmt, EnvFilter};
let filter_layer = EnvFilter::default().add_directive(
format!("{}=warn", std::module_path!().split("::").next().unwrap())
.parse()
.unwrap(),
);
let fmt_layer = fmt::layer().with_writer(std::io::stderr).with_target(true);
let journald_layer = tracing_journald::layer()
.unwrap()
.with_syslog_identifier(format!("{id}.embassy"));
let sub = tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.with(journald_layer)
.with(ErrorLayer::default());
sub
}
pub fn init(id: &PackageId) -> Self {
Self::base_subscriber(id).init();
color_eyre::install().unwrap_or_else(|_| tracing::warn!("tracing too many times"));
Self {}
}
}
fn inner_main() -> Result<(), Error> {
run_cli!({
command: deno_api,
app: app => app
.name("StartOS Deno Executor")
.version(&**VERSION_STRING),
context: _m => DenoContext,
exit: |e: RpcError| {
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
});
Ok(())
}
pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
}
}


@@ -1,5 +1,5 @@
use std::net::{Ipv6Addr, SocketAddr};
use std::path::{Path, PathBuf};
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
@@ -7,7 +7,7 @@ use helpers::NonDetachingJoinHandle;
use tokio::process::Command;
use tracing::instrument;
use crate::context::rpc::RpcContextConfig;
use crate::context::config::ServerConfig;
use crate::context::{DiagnosticContext, InstallContext, SetupContext};
use crate::disk::fsck::{RepairStrategy, RequiresReboot};
use crate::disk::main::DEFAULT_PASSWORD;
@@ -21,7 +21,7 @@ use crate::util::Invoke;
use crate::{Error, ErrorKind, ResultExt, PLATFORM};
#[instrument(skip_all)]
async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
async fn setup_or_init(config: &ServerConfig) -> Result<Option<Shutdown>, Error> {
let song = NonDetachingJoinHandle::from(tokio::spawn(async {
loop {
BEP.play().await.unwrap();
@@ -82,13 +82,12 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
.invoke(crate::ErrorKind::OpenSsh)
.await?;
let ctx = InstallContext::init(cfg_path).await?;
let ctx = InstallContext::init().await?;
let server = WebServer::install(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
drop(song);
tokio::time::sleep(Duration::from_secs(1)).await; // let the record state that I hate this
@@ -109,26 +108,24 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
.await
.is_err()
{
let ctx = SetupContext::init(cfg_path).await?;
let ctx = SetupContext::init(config)?;
let server = WebServer::setup(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
drop(song);
tokio::time::sleep(Duration::from_secs(1)).await; // let the record state that I hate this
CHIME.play().await?;
ctx.shutdown
.subscribe()
.recv()
.await
.expect("context dropped");
let mut shutdown = ctx.shutdown.subscribe();
shutdown.recv().await.expect("context dropped");
server.shutdown().await;
drop(shutdown);
tokio::task::yield_now().await;
if let Err(e) = Command::new("killall")
.arg("firefox-esr")
@@ -139,13 +136,12 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
tracing::debug!("{:?}", e);
}
} else {
let cfg = RpcContextConfig::load(cfg_path).await?;
let guid_string = tokio::fs::read_to_string("/media/embassy/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
.await?;
let guid = guid_string.trim();
let requires_reboot = crate::disk::main::import(
guid,
cfg.datadir(),
config.datadir(),
if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
RepairStrategy::Aggressive
} else {
@@ -164,13 +160,13 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
.with_ctx(|_| (crate::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
}
if requires_reboot.0 {
crate::disk::main::export(guid, cfg.datadir()).await?;
crate::disk::main::export(guid, config.datadir()).await?;
Command::new("reboot")
.invoke(crate::ErrorKind::Unknown)
.await?;
}
tracing::info!("Loaded Disk");
crate::init::init(&cfg).await?;
crate::init::init(config).await?;
drop(song);
}
@@ -196,7 +192,7 @@ async fn run_script_if_exists<P: AsRef<Path>>(path: P) {
}
#[instrument(skip_all)]
async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
async fn inner_main(config: &ServerConfig) -> Result<Option<Shutdown>, Error> {
if &*PLATFORM == "raspberrypi" && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {
tokio::fs::remove_file(STANDBY_MODE_PATH).await?;
Command::new("sync").invoke(ErrorKind::Filesystem).await?;
@@ -208,7 +204,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
run_script_if_exists("/media/embassy/config/preinit.sh").await;
let res = match setup_or_init(cfg_path.clone()).await {
let res = match setup_or_init(config).await {
Err(e) => {
async move {
tracing::error!("{}", e.source);
@@ -216,7 +212,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
crate::sound::BEETHOVEN.play().await?;
let ctx = DiagnosticContext::init(
cfg_path,
config,
if tokio::fs::metadata("/media/embassy/config/disk.guid")
.await
.is_ok()
@@ -231,14 +227,12 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
None
},
e,
)
.await?;
)?;
let server = WebServer::diagnostic(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
let shutdown = ctx.shutdown.subscribe().recv().await.unwrap();
@@ -256,23 +250,13 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
res
}
pub fn main() {
let matches = clap::App::new("start-init")
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
)
.get_matches();
let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned());
pub fn main(config: &ServerConfig) {
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("failed to initialize runtime");
rt.block_on(inner_main(cfg_path))
rt.block_on(inner_main(config))
};
match res {


@@ -1,61 +0,0 @@
use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError;
use serde_json::Value;
use crate::context::SdkContext;
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> {
run_cli!({
command: crate::portable_api,
app: app => app
.name("StartOS SDK")
.version(&**VERSION_STRING)
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
),
context: matches => {
if let Err(_) = std::env::var("RUST_LOG") {
std::env::set_var("RUST_LOG", "embassy=warn,js_engine=warn");
}
EmbassyLogger::init();
SdkContext::init(matches)?
},
exit: |e: RpcError| {
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
});
Ok(())
}
pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
}
}


@@ -1,12 +1,15 @@
use std::ffi::OsString;
use std::net::{Ipv6Addr, SocketAddr};
use std::path::{Path, PathBuf};
use std::path::Path;
use std::sync::Arc;
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::{FutureExt, TryFutureExt};
use tokio::signal::unix::signal;
use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::{DiagnosticContext, RpcContext};
use crate::net::web_server::WebServer;
use crate::shutdown::Shutdown;
@@ -15,10 +18,10 @@ use crate::util::logger::EmbassyLogger;
use crate::{Error, ErrorKind, ResultExt};
#[instrument(skip_all)]
async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
async fn inner_main(config: &ServerConfig) -> Result<Option<Shutdown>, Error> {
let (rpc_ctx, server, shutdown) = async {
let rpc_ctx = RpcContext::init(
cfg_path,
config,
Arc::new(
tokio::fs::read_to_string("/media/embassy/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
.await?
@@ -31,8 +34,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
let server = WebServer::main(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
rpc_ctx.clone(),
)
.await?;
)?;
let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
@@ -102,32 +104,23 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
Ok(shutdown)
}
pub fn main() {
pub fn main(args: impl IntoIterator<Item = OsString>) {
EmbassyLogger::init();
let config = ServerConfig::parse_from(args).load().unwrap();
if !Path::new("/run/embassy/initialized").exists() {
super::start_init::main();
super::start_init::main(&config);
std::fs::write("/run/embassy/initialized", "").unwrap();
}
let matches = clap::App::new("startd")
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
)
.get_matches();
let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned());
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("failed to initialize runtime");
rt.block_on(async {
match inner_main(cfg_path.clone()).await {
match inner_main(&config).await {
Ok(a) => Ok(a),
Err(e) => {
async {
@@ -135,7 +128,7 @@ pub fn main() {
tracing::debug!("{:?}", e.source);
crate::sound::BEETHOVEN.play().await?;
let ctx = DiagnosticContext::init(
cfg_path,
&config,
if tokio::fs::metadata("/media/embassy/config/disk.guid")
.await
.is_ok()
@@ -150,14 +143,12 @@ pub fn main() {
None
},
e,
)
.await?;
)?;
let server = WebServer::diagnostic(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
let mut shutdown = ctx.shutdown.subscribe();


@@ -1,22 +1,12 @@
use std::collections::{BTreeMap, BTreeSet};
use color_eyre::eyre::eyre;
use models::ImageId;
use patch_db::HasModel;
use models::PackageId;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use super::{Config, ConfigSpec};
use crate::context::RpcContext;
use crate::dependencies::Dependencies;
#[allow(unused_imports)]
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::status::health_check::HealthCheckId;
use crate::util::Version;
use crate::volume::Volumes;
use crate::{Error, ResultExt};
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@@ -25,90 +15,6 @@ pub struct ConfigRes {
pub spec: ConfigSpec,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct ConfigActions {
pub get: PackageProcedure,
pub set: PackageProcedure,
}
impl ConfigActions {
#[instrument(skip_all)]
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.get
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Get"))?;
self.set
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Set"))?;
Ok(())
}
#[instrument(skip_all)]
pub async fn get(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
) -> Result<ConfigRes, Error> {
self.get
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::GetConfig,
volumes,
None::<()>,
None,
)
.await
.and_then(|res| {
res.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::ConfigGen))
})
}
#[instrument(skip_all)]
pub async fn set(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
dependencies: &Dependencies,
volumes: &Volumes,
input: &Config,
) -> Result<SetResult, Error> {
let res: SetResult = self
.set
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::SetConfig,
volumes,
Some(input),
None,
)
.await
.and_then(|res| {
res.map_err(|e| {
Error::new(eyre!("{}", e.1), crate::ErrorKind::ConfigRulesViolation)
})
})?;
Ok(SetResult {
depends_on: res
.depends_on
.into_iter()
.filter(|(pkg, _)| dependencies.0.contains_key(pkg))
.collect(),
})
}
}
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct SetResult {


@@ -1,24 +1,22 @@
use std::collections::BTreeMap;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use clap::Parser;
use color_eyre::eyre::eyre;
use indexmap::IndexSet;
use itertools::Itertools;
use models::{ErrorKind, OptionExt};
use models::{ErrorKind, OptionExt, PackageId};
use patch_db::value::InternedString;
use patch_db::Value;
use regex::Regex;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::display_none;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::Error;
use crate::util::serde::{HandlerExtSerde, StdinDeserializable};
pub mod action;
pub mod spec;
@@ -132,96 +130,107 @@ pub enum MatchError {
ListUniquenessViolation,
}
#[command(rename = "config-spec", cli_only, blocking, display(display_none))]
pub fn verify_spec(#[arg] path: PathBuf) -> Result<(), Error> {
let mut file = std::fs::File::open(&path)?;
let format = match path.extension().and_then(|s| s.to_str()) {
Some("yaml") | Some("yml") => IoFormat::Yaml,
Some("json") => IoFormat::Json,
Some("toml") => IoFormat::Toml,
Some("cbor") => IoFormat::Cbor,
_ => {
return Err(Error::new(
eyre!("Unknown file format. Expected one of yaml, json, toml, cbor."),
crate::ErrorKind::Deserialization,
));
}
};
let _: ConfigSpec = format.from_reader(&mut file)?;
Ok(())
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ConfigParams {
pub id: PackageId,
}
#[command(subcommands(get, set))]
pub fn config(#[arg] id: PackageId) -> Result<PackageId, Error> {
Ok(id)
// #[command(subcommands(get, set))]
pub fn config() -> ParentHandler<ConfigParams> {
ParentHandler::new()
.subcommand(
"get",
from_fn_async(get)
.with_inherited(|ConfigParams { id }, _| id)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
.subcommand("set", set().with_inherited(|ConfigParams { id }, _| id))
}
#[command(display(display_serializable))]
#[instrument(skip_all)]
pub async fn get(
#[context] ctx: RpcContext,
#[parent_data] id: PackageId,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<ConfigRes, Error> {
let db = ctx.db.peek().await;
let manifest = db
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?
.as_manifest();
let action = manifest
.as_config()
.de()?
.ok_or_else(|| Error::new(eyre!("{} has no config", id), crate::ErrorKind::NotFound))?;
let volumes = manifest.as_volumes().de()?;
let version = manifest.as_version().de()?;
action.get(&ctx, &id, &version, &volumes).await
pub async fn get(ctx: RpcContext, _: Empty, id: PackageId) -> Result<ConfigRes, Error> {
ctx.services
.get(&id)
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {id}"))?
.get_config()
.await
}
#[command(
subcommands(self(set_impl(async, context(RpcContext))), set_dry),
display(display_none),
metadata(sync_db = true)
)]
#[instrument(skip_all)]
pub fn set(
#[parent_data] id: PackageId,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
#[arg(long = "timeout")] timeout: Option<crate::util::serde::Duration>,
#[arg(stdin, parse(parse_stdin_deserializable))] config: Option<Config>,
) -> Result<(PackageId, Option<Config>, Option<Duration>), Error> {
Ok((id, config, timeout.map(|d| *d)))
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
pub struct SetParams {
#[arg(long = "timeout")]
pub timeout: Option<crate::util::serde::Duration>,
#[command(flatten)]
pub config: StdinDeserializable<Option<Config>>,
}
#[command(rename = "dry", display(display_serializable))]
// TODO Dr Why isn't this used?
// #[command(
// subcommands(self(set_impl(async, context(RpcContext))), set_dry),
// display(display_none),
// metadata(sync_db = true)
// )]
#[instrument(skip_all)]
pub fn set() -> ParentHandler<SetParams, PackageId> {
ParentHandler::new()
.root_handler(
from_fn_async(set_impl)
.with_metadata("sync_db", Value::Bool(true))
.with_inherited(|set_params, id| (id, set_params))
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"dry",
from_fn_async(set_dry)
.with_inherited(|set_params, id| (id, set_params))
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
}
pub async fn set_dry(
#[context] ctx: RpcContext,
#[parent_data] (id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
ctx: RpcContext,
_: Empty,
(
id,
SetParams {
timeout,
config: StdinDeserializable(config),
},
): (PackageId, SetParams),
) -> Result<BTreeMap<PackageId, String>, Error> {
let breakages = BTreeMap::new();
let overrides = Default::default();
let configure_context = ConfigureContext {
breakages,
timeout,
timeout: timeout.map(|t| *t),
config,
dry_run: true,
overrides,
};
let breakages = configure(&ctx, &id, configure_context).await?;
Ok(breakages)
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {id}"),
ErrorKind::Unknown,
)
})?
.configure(configure_context)
.await
}
#[derive(Default)]
pub struct ConfigureContext {
pub breakages: BTreeMap<PackageId, String>,
pub timeout: Option<Duration>,
@@ -233,55 +242,36 @@ pub struct ConfigureContext {
#[instrument(skip_all)]
pub async fn set_impl(
ctx: RpcContext,
(id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
_: Empty,
(
id,
SetParams {
timeout,
config: StdinDeserializable(config),
},
): (PackageId, SetParams),
) -> Result<(), Error> {
let breakages = BTreeMap::new();
let overrides = Default::default();
let configure_context = ConfigureContext {
breakages,
timeout,
timeout: timeout.map(|t| *t),
config,
dry_run: false,
overrides,
};
configure(&ctx, &id, configure_context).await?;
Ok(())
}
#[instrument(skip_all)]
pub async fn configure(
ctx: &RpcContext,
id: &PackageId,
configure_context: ConfigureContext,
) -> Result<BTreeMap<PackageId, String>, Error> {
let db = ctx.db.peek().await;
let package = db
.as_package_data()
.as_idx(id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?;
let version = package.as_manifest().as_version().de()?;
ctx.managers
.get(&(id.clone(), version.clone()))
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {id:?} and {version:?}"),
eyre!("There is no manager running for {id}"),
ErrorKind::Unknown,
)
})?
.configure(configure_context)
.await
.await?;
Ok(())
}
macro_rules! not_found {
($x:expr) => {
crate::Error::new(
color_eyre::eyre::eyre!("Could not find {} at {}:{}", $x, module_path!(), line!()),
crate::ErrorKind::Incoherent,
)
};
}
pub(crate) use not_found;


@@ -14,6 +14,7 @@ use imbl_value::InternedString;
use indexmap::{IndexMap, IndexSet};
use itertools::Itertools;
use jsonpath_lib::Compiled as CompiledJsonPath;
use models::ProcedureName;
use patch_db::value::{Number, Value};
use rand::{CryptoRng, Rng};
use regex::Regex;
@@ -23,6 +24,7 @@ use sqlx::PgPool;
use super::util::{self, CharSet, NumRange, UniqueBy, STATIC_NULL};
use super::{Config, MatchError, NoMatchWithPath, TimeoutError, TypeOf};
use crate::config::action::ConfigRes;
use crate::config::ConfigurationError;
use crate::context::RpcContext;
use crate::net::interface::InterfaceId;
@@ -1773,27 +1775,27 @@ impl ConfigPointer {
Ok(self.select(&Value::Object(cfg.clone())))
} else {
let id = &self.package_id;
let db = ctx.db.peek().await;
let manifest = db.as_package_data().as_idx(id).map(|pde| pde.as_manifest());
let cfg_actions = manifest.and_then(|m| m.as_config().transpose_ref());
if let (Some(manifest), Some(cfg_actions)) = (manifest, cfg_actions) {
let cfg_res = cfg_actions
.de()
.map_err(|e| ConfigurationError::SystemError(e))?
.get(
ctx,
&self.package_id,
&manifest
.as_version()
.de()
.map_err(|e| ConfigurationError::SystemError(e))?,
&manifest
.as_volumes()
.de()
.map_err(|e| ConfigurationError::SystemError(e))?,
)
let version = ctx
.db
.peek()
.await
.as_package_data()
.as_idx(id)
.and_then(|pde| pde.as_installed())
.map(|i| i.as_manifest().as_version().de())
.transpose()
.map_err(ConfigurationError::SystemError)?;
if let Some(version) = version {
let cfg_res = ctx
.services
.get(&id)
.await
.map_err(|e| ConfigurationError::SystemError(e))?;
.as_ref()
.or_not_found(lazy_format!("Manager for {id}@{version}"))
.map_err(|e| ConfigurationError::SystemError(e))?
.get_config()
.await
.map_err(ConfigurationError::SystemError)?;
if let Some(cfg) = cfg_res.config {
Ok(self.select(&Value::Object(cfg)))
} else {


@@ -1,43 +1,37 @@
use std::fs::File;
use std::io::BufReader;
use std::net::Ipv4Addr;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use clap::ArgMatches;
use color_eyre::eyre::eyre;
use cookie_store::{CookieStore, RawCookie};
use josekit::jwk::Jwk;
use once_cell::sync::OnceCell;
use reqwest::Proxy;
use reqwest_cookie_store::CookieStoreMutex;
use rpc_toolkit::reqwest::{Client, Url};
use rpc_toolkit::url::Host;
use rpc_toolkit::Context;
use serde::Deserialize;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{call_remote_http, CallRemote, Context};
use tokio::net::TcpStream;
use tokio::runtime::Runtime;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::context::config::{local_config_path, ClientConfig};
use crate::core::rpc_continuations::RequestGuid;
use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
use crate::util::config::{load_config_from_paths, local_config_path};
use crate::ResultExt;
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct CliContextConfig {
pub host: Option<Url>,
#[serde(deserialize_with = "crate::util::serde::deserialize_from_str_opt")]
#[serde(default)]
pub proxy: Option<Url>,
pub cookie_path: Option<PathBuf>,
}
use crate::prelude::*;
#[derive(Debug)]
pub struct CliContextSeed {
pub runtime: OnceCell<Runtime>,
pub base_url: Url,
pub rpc_url: Url,
pub client: Client,
pub cookie_store: Arc<CookieStoreMutex>,
pub cookie_path: PathBuf,
pub developer_key_path: PathBuf,
pub developer_key: OnceCell<ed25519_dalek::SigningKey>,
}
impl Drop for CliContextSeed {
fn drop(&mut self) {
@@ -60,42 +54,22 @@ impl Drop for CliContextSeed {
}
}
const DEFAULT_HOST: Host<&'static str> = Host::Ipv4(Ipv4Addr::new(127, 0, 0, 1));
const DEFAULT_PORT: u16 = 5959;
#[derive(Debug, Clone)]
pub struct CliContext(Arc<CliContextSeed>);
impl CliContext {
/// BLOCKING
#[instrument(skip_all)]
pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
let local_config_path = local_config_path();
let base: CliContextConfig = load_config_from_paths(
matches
.values_of("config")
.into_iter()
.flatten()
.map(|p| Path::new(p))
.chain(local_config_path.as_deref().into_iter())
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)?;
let mut url = if let Some(host) = matches.value_of("host") {
host.parse()?
} else if let Some(host) = base.host {
pub fn init(config: ClientConfig) -> Result<Self, Error> {
let mut url = if let Some(host) = config.host {
host
} else {
"http://localhost".parse()?
};
let proxy = if let Some(proxy) = matches.value_of("proxy") {
Some(proxy.parse()?)
} else {
base.proxy
};
let cookie_path = base.cookie_path.unwrap_or_else(|| {
local_config_path
let cookie_path = config.cookie_path.unwrap_or_else(|| {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(crate::util::config::CONFIG_PATH))
.unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join(".cookies.json")
@@ -120,6 +94,7 @@ impl CliContext {
}));
Ok(CliContext(Arc::new(CliContextSeed {
runtime: OnceCell::new(),
base_url: url.clone(),
rpc_url: {
url.path_segments_mut()
@@ -131,7 +106,7 @@ impl CliContext {
},
client: {
let mut builder = Client::builder().cookie_provider(cookie_store.clone());
if let Some(proxy) = proxy {
if let Some(proxy) = config.proxy {
builder =
builder.proxy(Proxy::all(proxy).with_kind(crate::ErrorKind::ParseUrl)?)
}
@@ -139,8 +114,90 @@ impl CliContext {
},
cookie_store,
cookie_path,
developer_key_path: config.developer_key_path.unwrap_or_else(|| {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join("developer.key.pem")
}),
developer_key: OnceCell::new(),
})))
}
/// BLOCKING
#[instrument(skip_all)]
pub fn developer_key(&self) -> Result<&ed25519_dalek::SigningKey, Error> {
self.developer_key.get_or_try_init(|| {
if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-cli init` before running this command."), crate::ErrorKind::Uninitialized));
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(&self.developer_key_path)?,
)
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
)
})?;
Ok(secret.into())
})
}
pub async fn ws_continuation(
&self,
guid: RequestGuid,
) -> Result<WebSocketStream<MaybeTlsStream<TcpStream>>, Error> {
let mut url = self.base_url.clone();
let ws_scheme = match url.scheme() {
"https" => "wss",
"http" => "ws",
_ => {
return Err(Error::new(
eyre!("Cannot parse scheme from base URL"),
crate::ErrorKind::ParseUrl,
)
.into())
}
};
url.set_scheme(ws_scheme)
.map_err(|_| Error::new(eyre!("Cannot set URL scheme"), crate::ErrorKind::ParseUrl))?;
url.path_segments_mut()
.map_err(|_| eyre!("Url cannot be base"))
.with_kind(crate::ErrorKind::ParseUrl)?
.push("ws")
.push("rpc")
.push(guid.as_ref());
let (stream, _) =
// base_url is "http://127.0.0.1/", with a trailing slash, so we don't put a leading slash in this path:
tokio_tungstenite::connect_async(url).await.with_kind(ErrorKind::Network)?;
Ok(stream)
}
pub async fn rest_continuation(
&self,
guid: RequestGuid,
body: reqwest::Body,
headers: reqwest::header::HeaderMap,
) -> Result<reqwest::Response, Error> {
let mut url = self.base_url.clone();
url.path_segments_mut()
.map_err(|_| eyre!("Url cannot be base"))
.with_kind(crate::ErrorKind::ParseUrl)?
.push("rest")
.push("rpc")
.push(guid.as_ref());
self.client
.post(url)
.headers(headers)
.body(body)
.send()
.await
.with_kind(ErrorKind::Network)
}
}
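The scheme handling inside `ws_continuation` above reduces to a small mapping, sketched here as a hypothetical free function: the websocket scheme is derived from the base URL's scheme, and anything other than `http`/`https` is rejected before the continuation path is pushed.

```rust
// Sketch of the scheme mapping in ws_continuation; None corresponds to the
// ErrorKind::ParseUrl error returned by the real method.
fn ws_scheme(base_scheme: &str) -> Option<&'static str> {
    match base_scheme {
        "https" => Some("wss"),
        "http" => Some("ws"),
        _ => None,
    }
}
```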
impl AsRef<Jwk> for CliContext {
fn as_ref(&self) -> &Jwk {
@@ -154,32 +211,33 @@ impl std::ops::Deref for CliContext {
}
}
impl Context for CliContext {
fn protocol(&self) -> &str {
self.0.base_url.scheme()
}
fn host(&self) -> Host<&str> {
self.0.base_url.host().unwrap_or(DEFAULT_HOST)
}
fn port(&self) -> u16 {
self.0.base_url.port().unwrap_or(DEFAULT_PORT)
}
fn path(&self) -> &str {
self.0.rpc_url.path()
}
fn url(&self) -> Url {
self.0.rpc_url.clone()
}
fn client(&self) -> &Client {
&self.0.client
fn runtime(&self) -> tokio::runtime::Handle {
self.runtime
.get_or_init(|| {
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
})
.handle()
.clone()
}
}
/// Regression check: the system previously broke when the proxy value was empty, so an empty proxy must remain accepted
#[async_trait::async_trait]
impl CallRemote for CliContext {
async fn call_remote(&self, method: &str, params: Value) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
}
}
#[test]
fn test_cli_proxy_empty() {
serde_yaml::from_str::<CliContextConfig>(
"
bind_rpc:
",
)
.unwrap();
fn test() {
let ctx = CliContext::init(ClientConfig::default()).unwrap();
ctx.runtime().block_on(async {
reqwest::Client::new()
.get("http://example.com")
.send()
.await
.unwrap();
});
}


@@ -0,0 +1,175 @@
use std::fs::File;
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use clap::Parser;
use patch_db::json_ptr::JsonPointer;
use reqwest::Url;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use sqlx::postgres::PgConnectOptions;
use sqlx::PgPool;
use crate::account::AccountInfo;
use crate::db::model::Database;
use crate::disk::OsPartitionInfo;
use crate::init::init_postgres;
use crate::prelude::*;
use crate::util::serde::IoFormat;
pub const DEVICE_CONFIG_PATH: &str = "/media/embassy/config/config.yaml"; // "/media/startos/config/config.yaml";
pub const CONFIG_PATH: &str = "/etc/startos/config.yaml";
pub const CONFIG_PATH_LOCAL: &str = ".startos/config.yaml";
pub fn local_config_path() -> Option<PathBuf> {
if let Ok(home) = std::env::var("HOME") {
Some(Path::new(&home).join(CONFIG_PATH_LOCAL))
} else {
None
}
}
pub trait ContextConfig: DeserializeOwned + Default {
fn next(&mut self) -> Option<PathBuf>;
fn merge_with(&mut self, other: Self);
fn from_path(path: impl AsRef<Path>) -> Result<Self, Error> {
let format: IoFormat = path
.as_ref()
.extension()
.and_then(|s| s.to_str())
.map(|f| f.parse())
.transpose()?
.unwrap_or_default();
format.from_reader(File::open(path)?)
}
fn load_path_rec(&mut self, path: Option<impl AsRef<Path>>) -> Result<(), Error> {
if let Some(path) = path.filter(|p| p.as_ref().exists()) {
let mut other = Self::from_path(path)?;
let path = other.next();
self.merge_with(other);
self.load_path_rec(path)?;
}
Ok(())
}
}
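The extension-based format detection in `ContextConfig::from_path` above can be sketched in isolation. This is a simplified stand-in: static strings replace `IoFormat`, and the fallback behavior (default format for a missing extension, an error for an unrecognized one) is left to the caller as in the real code.

```rust
use std::path::Path;

// Sketch of from_path's format selection: the file extension picks the
// parser; None means the caller must fall back or error.
fn detect_format(path: &Path) -> Option<&'static str> {
    match path.extension().and_then(|s| s.to_str()) {
        Some("yaml") | Some("yml") => Some("yaml"),
        Some("json") => Some("json"),
        Some("toml") => Some("toml"),
        Some("cbor") => Some("cbor"),
        _ => None,
    }
}
```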
#[derive(Debug, Default, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ClientConfig {
#[arg(short = 'c', long = "config")]
pub config: Option<PathBuf>,
#[arg(short = 'h', long = "host")]
pub host: Option<Url>,
#[arg(short = 'p', long = "proxy")]
pub proxy: Option<Url>,
#[arg(long = "cookie-path")]
pub cookie_path: Option<PathBuf>,
#[arg(long = "developer-key-path")]
pub developer_key_path: Option<PathBuf>,
}
impl ContextConfig for ClientConfig {
fn next(&mut self) -> Option<PathBuf> {
self.config.take()
}
fn merge_with(&mut self, other: Self) {
self.host = self.host.take().or(other.host);
self.proxy = self.proxy.take().or(other.proxy);
self.cookie_path = self.cookie_path.take().or(other.cookie_path);
}
}
impl ClientConfig {
pub fn load(mut self) -> Result<Self, Error> {
let path = self.next();
self.load_path_rec(path)?;
self.load_path_rec(local_config_path())?;
self.load_path_rec(Some(CONFIG_PATH))?;
Ok(self)
}
}
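The `take().or(…)` pattern used by `merge_with` above implements layered configuration: a value already present on `self` (e.g. set via a CLI flag) wins over one loaded from a lower-priority config file. A minimal sketch with hypothetical stand-in fields:

```rust
// Sketch of the layered-config merge in ClientConfig::merge_with.
#[derive(Default)]
struct Layer {
    host: Option<String>,
    proxy: Option<String>,
}

impl Layer {
    fn merge_with(&mut self, other: Self) {
        // take().or(..): keep self's value if set, else adopt other's.
        self.host = self.host.take().or(other.host);
        self.proxy = self.proxy.take().or(other.proxy);
    }
}
```

Applied repeatedly by `load_path_rec`, this gives precedence to earlier layers (flags, then each config file in chain order).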
#[derive(Debug, Clone, Default, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ServerConfig {
#[arg(short = 'c', long = "config")]
pub config: Option<PathBuf>,
#[arg(long = "wifi-interface")]
pub wifi_interface: Option<String>,
#[arg(long = "ethernet-interface")]
pub ethernet_interface: Option<String>,
#[arg(skip)]
pub os_partitions: Option<OsPartitionInfo>,
#[arg(long = "bind-rpc")]
pub bind_rpc: Option<SocketAddr>,
#[arg(long = "tor-control")]
pub tor_control: Option<SocketAddr>,
#[arg(long = "tor-socks")]
pub tor_socks: Option<SocketAddr>,
#[arg(long = "dns-bind")]
pub dns_bind: Option<Vec<SocketAddr>>,
#[arg(long = "revision-cache-size")]
pub revision_cache_size: Option<usize>,
#[arg(short = 'd', long = "datadir")]
pub datadir: Option<PathBuf>,
#[arg(long = "disable-encryption")]
pub disable_encryption: Option<bool>,
}
impl ContextConfig for ServerConfig {
fn next(&mut self) -> Option<PathBuf> {
self.config.take()
}
fn merge_with(&mut self, other: Self) {
self.wifi_interface = self.wifi_interface.take().or(other.wifi_interface);
self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
self.os_partitions = self.os_partitions.take().or(other.os_partitions);
self.bind_rpc = self.bind_rpc.take().or(other.bind_rpc);
self.tor_control = self.tor_control.take().or(other.tor_control);
self.tor_socks = self.tor_socks.take().or(other.tor_socks);
self.dns_bind = self.dns_bind.take().or(other.dns_bind);
self.revision_cache_size = self
.revision_cache_size
.take()
.or(other.revision_cache_size);
self.datadir = self.datadir.take().or(other.datadir);
self.disable_encryption = self.disable_encryption.take().or(other.disable_encryption);
}
}
impl ServerConfig {
pub fn load(mut self) -> Result<Self, Error> {
let path = self.next();
self.load_path_rec(path)?;
self.load_path_rec(Some(DEVICE_CONFIG_PATH))?;
self.load_path_rec(Some(CONFIG_PATH))?;
Ok(self)
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
pub async fn db(&self, account: &AccountInfo) -> Result<PatchDb, Error> {
let db_path = self.datadir().join("main").join("embassy.db");
let db = PatchDb::open(&db_path)
.await
.with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?;
if !db.exists(&<JsonPointer>::default()).await {
db.put(&<JsonPointer>::default(), &Database::init(account))
.await?;
}
Ok(db)
}
#[instrument(skip_all)]
pub async fn secret_store(&self) -> Result<PgPool, Error> {
init_postgres(self.datadir()).await?;
let secret_store =
PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.await?;
sqlx::migrate!()
.run(&secret_store)
.await
.with_kind(crate::ErrorKind::Database)?;
Ok(secret_store)
}
}


@@ -1,47 +1,16 @@
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::Arc;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::Context;
use serde::Deserialize;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::shutdown::Shutdown;
use crate::util::config::load_config_from_paths;
use crate::Error;
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct DiagnosticContextConfig {
pub datadir: Option<PathBuf>,
}
impl DiagnosticContextConfig {
#[instrument(skip_all)]
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(
crate::util::config::DEVICE_CONFIG_PATH,
)))
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
}
pub struct DiagnosticContextSeed {
pub datadir: PathBuf,
pub shutdown: Sender<Option<Shutdown>>,
@@ -53,20 +22,18 @@ pub struct DiagnosticContextSeed {
pub struct DiagnosticContext(Arc<DiagnosticContextSeed>);
impl DiagnosticContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + 'static>(
path: Option<P>,
pub fn init(
config: &ServerConfig,
disk_guid: Option<Arc<String>>,
error: Error,
) -> Result<Self, Error> {
tracing::error!("Error: {}: Starting diagnostic UI", error);
tracing::debug!("{:?}", error);
let cfg = DiagnosticContextConfig::load(path).await?;
let (shutdown, _) = tokio::sync::broadcast::channel(1);
Ok(Self(Arc::new(DiagnosticContextSeed {
datadir: cfg.datadir().to_owned(),
datadir: config.datadir().to_owned(),
shutdown,
disk_guid,
error: Arc::new(error.into()),


@@ -1,35 +1,13 @@
use std::ops::Deref;
use std::path::Path;
use std::sync::Arc;
use rpc_toolkit::Context;
use serde::Deserialize;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::net::utils::find_eth_iface;
use crate::util::config::load_config_from_paths;
use crate::Error;
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct InstallContextConfig {}
impl InstallContextConfig {
#[instrument(skip_all)]
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
}
pub struct InstallContextSeed {
pub ethernet_interface: String,
pub shutdown: Sender<()>,
@@ -39,8 +17,7 @@ pub struct InstallContextSeed {
pub struct InstallContext(Arc<InstallContextSeed>);
impl InstallContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
let _cfg = InstallContextConfig::load(path.as_ref().map(|p| p.as_ref().to_owned())).await?;
pub async fn init() -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
Ok(Self(Arc::new(InstallContextSeed {
ethernet_interface: find_eth_iface().await?,


@@ -1,44 +1,12 @@
pub mod cli;
pub mod config;
pub mod diagnostic;
pub mod install;
pub mod rpc;
pub mod sdk;
pub mod setup;
pub use cli::CliContext;
pub use diagnostic::DiagnosticContext;
pub use install::InstallContext;
pub use rpc::RpcContext;
pub use sdk::SdkContext;
pub use setup::SetupContext;
impl From<CliContext> for () {
fn from(_: CliContext) -> Self {
()
}
}
impl From<DiagnosticContext> for () {
fn from(_: DiagnosticContext) -> Self {
()
}
}
impl From<RpcContext> for () {
fn from(_: RpcContext) -> Self {
()
}
}
impl From<SdkContext> for () {
fn from(_: SdkContext) -> Self {
()
}
}
impl From<SetupContext> for () {
fn from(_: SetupContext) -> Self {
()
}
}
impl From<InstallContext> for () {
fn from(_: InstallContext) -> Self {
()
}
}


@@ -1,19 +1,16 @@
use std::collections::BTreeMap;
use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use helpers::to_tmp_path;
use imbl_value::InternedString;
use josekit::jwk::Jwk;
use patch_db::json_ptr::JsonPointer;
use patch_db::PatchDb;
use reqwest::{Client, Proxy, Url};
use reqwest::{Client, Proxy};
use rpc_toolkit::Context;
use serde::Deserialize;
use sqlx::postgres::PgConnectOptions;
use sqlx::PgPool;
use tokio::sync::{broadcast, oneshot, Mutex, RwLock};
use tokio::time::Instant;
@@ -21,87 +18,26 @@ use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::account::AccountInfo;
use crate::core::rpc_continuations::{RequestGuid, RestHandler, RpcContinuation};
use crate::db::model::{CurrentDependents, Database, PackageDataEntryMatchModelRef};
use crate::context::config::ServerConfig;
use crate::core::rpc_continuations::{RequestGuid, RestHandler, RpcContinuation, WebSocketHandler};
use crate::db::model::CurrentDependents;
use crate::db::prelude::PatchDbExt;
use crate::dependencies::compute_dependency_config_errs;
use crate::disk::OsPartitionInfo;
use crate::init::{check_time_is_synchronized, init_postgres};
use crate::install::cleanup::{cleanup_failed, uninstall};
use crate::manager::ManagerMap;
use crate::init::check_time_is_synchronized;
use crate::lxc::{LxcContainer, LxcManager};
use crate::middleware::auth::HashSessionToken;
use crate::net::net_controller::NetController;
use crate::net::ssl::{root_ca_start_time, SslManager};
use crate::net::utils::find_eth_iface;
use crate::net::wifi::WpaCli;
use crate::notifications::NotificationManager;
use crate::prelude::*;
use crate::service::ServiceMap;
use crate::shutdown::Shutdown;
use crate::status::MainStatus;
use crate::system::get_mem_info;
use crate::util::config::load_config_from_paths;
use crate::util::lshw::{lshw, LshwDevice};
use crate::{Error, ErrorKind, ResultExt};
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct RpcContextConfig {
pub wifi_interface: Option<String>,
pub ethernet_interface: String,
pub os_partitions: OsPartitionInfo,
pub migration_batch_rows: Option<usize>,
pub migration_prefetch_rows: Option<usize>,
pub bind_rpc: Option<SocketAddr>,
pub tor_control: Option<SocketAddr>,
pub tor_socks: Option<SocketAddr>,
pub dns_bind: Option<Vec<SocketAddr>>,
pub revision_cache_size: Option<usize>,
pub datadir: Option<PathBuf>,
pub log_server: Option<Url>,
}
impl RpcContextConfig {
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(
crate::util::config::DEVICE_CONFIG_PATH,
)))
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
pub async fn db(&self, account: &AccountInfo) -> Result<PatchDb, Error> {
let db_path = self.datadir().join("main").join("embassy.db");
let db = PatchDb::open(&db_path)
.await
.with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?;
if !db.exists(&<JsonPointer>::default()).await {
db.put(&<JsonPointer>::default(), &Database::init(account))
.await?;
}
Ok(db)
}
#[instrument(skip_all)]
pub async fn secret_store(&self) -> Result<PgPool, Error> {
init_postgres(self.datadir()).await?;
let secret_store =
PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.await?;
sqlx::migrate!()
.run(&secret_store)
.await
.with_kind(crate::ErrorKind::Database)?;
Ok(secret_store)
}
}
pub struct RpcContextSeed {
is_closed: AtomicBool,
@@ -114,11 +50,12 @@ pub struct RpcContextSeed {
pub secret_store: PgPool,
pub account: RwLock<AccountInfo>,
pub net_controller: Arc<NetController>,
pub managers: ManagerMap,
pub services: ServiceMap,
pub metrics_cache: RwLock<Option<crate::system::Metrics>>,
pub shutdown: broadcast::Sender<Option<Shutdown>>,
pub tor_socks: SocketAddr,
pub notification_manager: NotificationManager,
pub lxc_manager: Arc<LxcManager>,
pub open_authed_websockets: Mutex<BTreeMap<HashSessionToken, Vec<oneshot::Sender<()>>>>,
pub rpc_stream_continuations: Mutex<BTreeMap<RequestGuid, RpcContinuation>>,
pub wifi_manager: Option<Arc<RwLock<WpaCli>>>,
@@ -126,6 +63,11 @@ pub struct RpcContextSeed {
pub client: Client,
pub hardware: Hardware,
pub start_time: Instant,
pub dev: Dev,
}
pub struct Dev {
pub lxc: Mutex<BTreeMap<InternedString, LxcContainer>>,
}
pub struct Hardware {
@@ -137,28 +79,26 @@ pub struct Hardware {
pub struct RpcContext(Arc<RpcContextSeed>);
impl RpcContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + Sync + 'static>(
cfg_path: Option<P>,
disk_guid: Arc<String>,
) -> Result<Self, Error> {
let base = RpcContextConfig::load(cfg_path).await?;
pub async fn init(config: &ServerConfig, disk_guid: Arc<String>) -> Result<Self, Error> {
tracing::info!("Loaded Config");
let tor_proxy = base.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
let tor_proxy = config.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
Ipv4Addr::new(127, 0, 0, 1),
9050,
)));
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let secret_store = base.secret_store().await?;
let secret_store = config.secret_store().await?;
tracing::info!("Opened Pg DB");
let account = AccountInfo::load(&secret_store).await?;
let db = base.db(&account).await?;
let db = config.db(&account).await?;
tracing::info!("Opened PatchDB");
let net_controller = Arc::new(
NetController::init(
base.tor_control
config
.tor_control
.unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
tor_proxy,
base.dns_bind
config
.dns_bind
.as_deref()
.unwrap_or(&[SocketAddr::from(([127, 0, 0, 1], 53))]),
SslManager::new(&account, root_ca_start_time().await?)?,
@@ -168,7 +108,7 @@ impl RpcContext {
.await?,
);
tracing::info!("Initialized Net Controller");
let managers = ManagerMap::default();
let services = ServiceMap::default();
let metrics_cache = RwLock::<Option<crate::system::Metrics>>::new(None);
let notification_manager = NotificationManager::new(secret_store.clone());
tracing::info!("Initialized Notification Manager");
@@ -190,24 +130,35 @@ impl RpcContext {
let seed = Arc::new(RpcContextSeed {
is_closed: AtomicBool::new(false),
datadir: base.datadir().to_path_buf(),
os_partitions: base.os_partitions,
wifi_interface: base.wifi_interface.clone(),
ethernet_interface: base.ethernet_interface,
datadir: config.datadir().to_path_buf(),
os_partitions: config.os_partitions.clone().ok_or_else(|| {
Error::new(
eyre!("OS Partition Information Missing"),
ErrorKind::Filesystem,
)
})?,
wifi_interface: config.wifi_interface.clone(),
ethernet_interface: if let Some(eth) = config.ethernet_interface.clone() {
eth
} else {
find_eth_iface().await?
},
disk_guid,
db,
secret_store,
account: RwLock::new(account),
net_controller,
managers,
services,
metrics_cache,
shutdown,
tor_socks: tor_proxy,
notification_manager,
lxc_manager: Arc::new(LxcManager::new()),
open_authed_websockets: Mutex::new(BTreeMap::new()),
rpc_stream_continuations: Mutex::new(BTreeMap::new()),
wifi_manager: base
wifi_manager: config
.wifi_interface
.clone()
.map(|i| Arc::new(RwLock::new(WpaCli::init(i)))),
current_secret: Arc::new(
Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {
@@ -231,6 +182,9 @@ impl RpcContext {
.with_kind(crate::ErrorKind::ParseUrl)?,
hardware: Hardware { devices, ram },
start_time: Instant::now(),
dev: Dev {
lxc: Mutex::new(BTreeMap::new()),
},
});
let res = Self(seed.clone());
@@ -241,7 +195,7 @@ impl RpcContext {
#[instrument(skip_all)]
pub async fn shutdown(self) -> Result<(), Error> {
self.managers.empty().await?;
self.services.shutdown_all().await?;
self.secret_store.close().await;
self.is_closed.store(true, Ordering::SeqCst);
tracing::info!("RPC Context is shutdown");
@@ -293,70 +247,11 @@ impl RpcContext {
})
.await?;
let peek = self.db.peek().await;
for (package_id, package) in peek.as_package_data().as_entries()?.into_iter() {
let action = match package.as_match() {
PackageDataEntryMatchModelRef::Installing(_)
| PackageDataEntryMatchModelRef::Restoring(_)
| PackageDataEntryMatchModelRef::Updating(_) => {
cleanup_failed(self, &package_id).await
}
PackageDataEntryMatchModelRef::Removing(_) => {
uninstall(
self,
self.secret_store.acquire().await?.as_mut(),
&package_id,
)
.await
}
PackageDataEntryMatchModelRef::Installed(m) => {
let version = m.as_manifest().as_version().clone().de()?;
let volumes = m.as_manifest().as_volumes().de()?;
for (volume_id, volume_info) in &*volumes {
let tmp_path = to_tmp_path(volume_info.path_for(
&self.datadir,
&package_id,
&version,
volume_id,
))
.with_kind(ErrorKind::Filesystem)?;
if tokio::fs::metadata(&tmp_path).await.is_ok() {
tokio::fs::remove_dir_all(&tmp_path).await?;
}
}
Ok(())
}
_ => continue,
};
if let Err(e) = action {
tracing::error!("Failed to clean up package {}: {}", package_id, e);
tracing::debug!("{:?}", e);
}
}
let peek = self
.db
.mutate(|v| {
for (_, pde) in v.as_package_data_mut().as_entries_mut()? {
let status = pde
.expect_as_installed_mut()?
.as_installed_mut()
.as_status_mut()
.as_main_mut();
let running = status.clone().de()?.running();
status.ser(&if running {
MainStatus::Starting
} else {
MainStatus::Stopped
})?;
}
Ok(v.clone())
})
.await?;
self.managers.init(self.clone(), peek.clone()).await?;
self.services.init(&self).await?;
tracing::info!("Initialized Package Managers");
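The mutate pass above resets each installed package's main status on boot: anything that was running is re-marked `Starting` so the service manager brings it back up, and everything else is normalized to `Stopped`. A minimal sketch of that rule, using a trimmed stand-in for the real `MainStatus` enum (the assumption here is that `running()` treats both active states as running):

```rust
#[derive(Clone, Debug, PartialEq)]
enum MainStatus {
    Starting,
    Running,
    Stopped,
}

impl MainStatus {
    // Assumption: both active states count as running.
    fn running(&self) -> bool {
        matches!(self, MainStatus::Starting | MainStatus::Running)
    }
}

// Mirrors the status reset performed inside `db.mutate` during init.
fn normalize_on_boot(status: &MainStatus) -> MainStatus {
    if status.running() {
        MainStatus::Starting
    } else {
        MainStatus::Stopped
    }
}

fn main() {
    assert_eq!(normalize_on_boot(&MainStatus::Running), MainStatus::Starting);
    assert_eq!(normalize_on_boot(&MainStatus::Stopped), MainStatus::Stopped);
    println!("ok");
}
```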
let mut all_dependency_config_errs = BTreeMap::new();
let peek = self.db.peek().await;
for (package_id, package) in peek.as_package_data().as_entries()?.into_iter() {
let package = package.clone();
if let Some(current_dependencies) = package
@@ -419,33 +314,30 @@ impl RpcContext {
.insert(guid, handler);
}
pub async fn get_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
pub async fn get_ws_continuation_handler(
&self,
guid: &RequestGuid,
) -> Option<WebSocketHandler> {
let mut continuations = self.rpc_stream_continuations.lock().await;
if let Some(cont) = continuations.remove(guid) {
cont.into_handler().await
} else {
None
}
}
pub async fn get_ws_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let continuations = self.rpc_stream_continuations.lock().await;
if matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) {
drop(continuations);
self.get_continuation_handler(guid).await
} else {
None
if !matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) {
return None;
}
let Some(RpcContinuation::WebSocket(x)) = continuations.remove(guid) else {
return None;
};
x.get().await
}
pub async fn get_rest_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let continuations = self.rpc_stream_continuations.lock().await;
if matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) {
drop(continuations);
self.get_continuation_handler(guid).await
} else {
None
let mut continuations: tokio::sync::MutexGuard<'_, BTreeMap<RequestGuid, RpcContinuation>> =
self.rpc_stream_continuations.lock().await;
if !matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) {
return None;
}
let Some(RpcContinuation::Rest(x)) = continuations.remove(guid) else {
return None;
};
x.get().await
}
}
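The two rewritten continuation handlers share a pattern: peek at the map under the lock, bail early if the stored continuation is the wrong variant, then remove it and unwrap the matching arm. A self-contained sketch of that variant-checked removal (the payload strings here are illustrative stand-ins for the real `RestHandler`/`WebSocketHandler` types):

```rust
use std::collections::BTreeMap;

// Stand-in for RpcContinuation with hypothetical payloads.
enum Continuation {
    Rest(&'static str),
    WebSocket(&'static str),
}

// Take a continuation out of the map only if it is the expected variant;
// other variants are left in place for their own handler to claim.
fn take_rest(map: &mut BTreeMap<String, Continuation>, guid: &str) -> Option<&'static str> {
    if !matches!(map.get(guid), Some(Continuation::Rest(_))) {
        return None;
    }
    match map.remove(guid) {
        Some(Continuation::Rest(x)) => Some(x),
        _ => None,
    }
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert("a".to_string(), Continuation::Rest("rest-handler"));
    map.insert("b".to_string(), Continuation::WebSocket("ws-handler"));
    assert_eq!(take_rest(&mut map, "a"), Some("rest-handler"));
    assert_eq!(take_rest(&mut map, "b"), None); // wrong variant: untouched
    assert!(map.contains_key("b"));
    println!("ok");
}
```

In the real context the map sits behind a `tokio::sync::Mutex`, so the check and the removal happen under one lock acquisition, avoiding a race between peeking and taking.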
impl AsRef<Jwk> for RpcContext {


@@ -8,13 +8,6 @@ use serde::Deserialize;
use tracing::instrument;
use crate::prelude::*;
use crate::util::config::{load_config_from_paths, local_config_path};
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct SdkContextConfig {
pub developer_key_path: Option<PathBuf>,
}
#[derive(Debug)]
pub struct SdkContextSeed {
@@ -26,7 +19,7 @@ pub struct SdkContext(Arc<SdkContextSeed>);
impl SdkContext {
/// BLOCKING
#[instrument(skip_all)]
pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
pub fn init(config: ) -> Result<Self, crate::Error> {
let local_config_path = local_config_path();
let base: SdkContextConfig = load_config_from_paths(
matches
@@ -48,24 +41,7 @@ impl SdkContext {
}),
})))
}
/// BLOCKING
#[instrument(skip_all)]
pub fn developer_key(&self) -> Result<ed25519_dalek::SigningKey, Error> {
if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-sdk init` before running this command."), crate::ErrorKind::Uninitialized));
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(&self.developer_key_path)?,
)
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
)
})?;
Ok(secret.into())
}
}
impl std::ops::Deref for SdkContext {
type Target = SdkContextSeed;


@@ -1,5 +1,5 @@
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::Arc;
use josekit::jwk::Jwk;
@@ -15,12 +15,12 @@ use tokio::sync::RwLock;
use tracing::instrument;
use crate::account::AccountInfo;
use crate::context::config::ServerConfig;
use crate::db::model::Database;
use crate::disk::OsPartitionInfo;
use crate::init::init_postgres;
use crate::prelude::*;
use crate::setup::SetupStatus;
use crate::util::config::load_config_from_paths;
use crate::{Error, ResultExt};
lazy_static::lazy_static! {
pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {
@@ -38,45 +38,9 @@ pub struct SetupResult {
pub root_ca: String,
}
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct SetupContextConfig {
pub os_partitions: OsPartitionInfo,
pub migration_batch_rows: Option<usize>,
pub migration_prefetch_rows: Option<usize>,
pub datadir: Option<PathBuf>,
#[serde(default)]
pub disable_encryption: bool,
}
impl SetupContextConfig {
#[instrument(skip_all)]
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(
crate::util::config::DEVICE_CONFIG_PATH,
)))
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
}
pub struct SetupContextSeed {
pub config: ServerConfig,
pub os_partitions: OsPartitionInfo,
pub config_path: Option<PathBuf>,
pub migration_batch_rows: usize,
pub migration_prefetch_rows: usize,
pub disable_encryption: bool,
pub shutdown: Sender<()>,
pub datadir: PathBuf,
@@ -96,16 +60,18 @@ impl AsRef<Jwk> for SetupContextSeed {
pub struct SetupContext(Arc<SetupContextSeed>);
impl SetupContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
let cfg = SetupContextConfig::load(path.as_ref().map(|p| p.as_ref().to_owned())).await?;
pub fn init(config: &ServerConfig) -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let datadir = cfg.datadir().to_owned();
let datadir = config.datadir().to_owned();
Ok(Self(Arc::new(SetupContextSeed {
os_partitions: cfg.os_partitions,
config_path: path.as_ref().map(|p| p.as_ref().to_owned()),
migration_batch_rows: cfg.migration_batch_rows.unwrap_or(25000),
migration_prefetch_rows: cfg.migration_prefetch_rows.unwrap_or(100_000),
disable_encryption: cfg.disable_encryption,
config: config.clone(),
os_partitions: config.os_partitions.clone().ok_or_else(|| {
Error::new(
eyre!("missing required configuration: `os-partitions`"),
ErrorKind::NotFound,
)
})?,
disable_encryption: config.disable_encryption.unwrap_or(false),
shutdown,
datadir,
selected_v2_drive: RwLock::new(None),


@@ -1,89 +1,52 @@
use clap::Parser;
use color_eyre::eyre::eyre;
use models::PackageId;
use rpc_toolkit::command;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::status::MainStatus;
use crate::util::display_none;
use crate::Error;
#[command(display(display_none), metadata(sync_db = true))]
#[instrument(skip_all)]
pub async fn start(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
let peek = ctx.db.peek().await;
let version = peek
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?
.as_manifest()
.as_version()
.de()?;
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ControlParams {
pub id: PackageId,
}
ctx.managers
.get(&(id, version))
#[instrument(skip_all)]
pub async fn start(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
.await
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.as_ref()
.or_not_found(lazy_format!("Manager for {id}"))?
.start()
.await;
Ok(())
}
#[command(display(display_none), metadata(sync_db = true))]
pub async fn stop(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<MainStatus, Error> {
let peek = ctx.db.peek().await;
let version = peek
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?
.as_manifest()
.as_version()
.de()?;
let last_statuts = ctx
.db
.mutate(|v| {
v.as_package_data_mut()
.as_idx_mut(&id)
.and_then(|x| x.as_installed_mut())
.ok_or_else(|| Error::new(eyre!("{} is not installed", id), ErrorKind::NotFound))?
.as_status_mut()
.as_main_mut()
.replace(&MainStatus::Stopping)
})
.await?;
ctx.managers
.get(&(id, version))
pub async fn stop(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
// TODO: why did this return last_status before?
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.stop()
.await;
Ok(last_statuts)
Ok(())
}
#[command(display(display_none), metadata(sync_db = true))]
pub async fn restart(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
let peek = ctx.db.peek().await;
let version = peek
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.expect_as_installed()?
.as_manifest()
.as_version()
.de()?;
ctx.managers
.get(&(id, version))
pub async fn restart(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.restart()
.await;


@@ -1,27 +1,21 @@
use std::sync::Arc;
use std::time::Duration;
use axum::extract::ws::WebSocket;
use axum::extract::Request;
use axum::response::Response;
use futures::future::BoxFuture;
use futures::FutureExt;
use helpers::TimedResource;
use hyper::upgrade::Upgraded;
use hyper::{Body, Error as HyperError, Request, Response};
use rand::RngCore;
use tokio::task::JoinError;
use tokio_tungstenite::WebSocketStream;
use imbl_value::InternedString;
use crate::{Error, ResultExt};
#[allow(unused_imports)]
use crate::prelude::*;
use crate::util::new_guid;
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, serde::Serialize, serde::Deserialize)]
pub struct RequestGuid<T: AsRef<str> = String>(Arc<T>);
pub struct RequestGuid(InternedString);
impl RequestGuid {
pub fn new() -> Self {
let mut buf = [0; 40];
rand::thread_rng().fill_bytes(&mut buf);
RequestGuid(Arc::new(base32::encode(
base32::Alphabet::RFC4648 { padding: false },
&buf,
)))
Self(new_guid())
}
pub fn from(r: &str) -> Option<RequestGuid> {
@@ -33,9 +27,15 @@ impl RequestGuid {
return None;
}
}
Some(RequestGuid(Arc::new(r.to_owned())))
Some(RequestGuid(InternedString::intern(r)))
}
}
impl AsRef<str> for RequestGuid {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
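The old constructor filled 40 random bytes and base32-encoded them without padding, which always yields a 64-character string (40 bytes × 8 bits ÷ 5 bits per character). A sketch of the corresponding validation that `from` performs, assuming the elided check rejects any character outside the unpadded RFC 4648 alphabet:

```rust
// Sketch of RequestGuid::from's validation: a GUID is a fixed-length
// string drawn from the unpadded RFC 4648 base32 alphabet. The exact
// character check lives in the elided lines of `from`; this is an
// assumption about its behavior.
fn is_valid_guid(r: &str) -> bool {
    const ALPHABET: &str = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    r.len() == 64 && r.chars().all(|c| ALPHABET.contains(c))
}

fn main() {
    assert!(is_valid_guid(&"A".repeat(64)));
    assert!(!is_valid_guid("short"));
    assert!(!is_valid_guid(&"a".repeat(64))); // lowercase is not in the alphabet
    println!("ok");
}
```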
#[test]
fn parse_guid() {
println!(
@@ -44,22 +44,16 @@ fn parse_guid() {
)
}
impl<T: AsRef<str>> std::fmt::Display for RequestGuid<T> {
impl std::fmt::Display for RequestGuid {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
(&*self.0).as_ref().fmt(f)
self.0.fmt(f)
}
}
pub type RestHandler = Box<
dyn FnOnce(Request<Body>) -> BoxFuture<'static, Result<Response<Body>, crate::Error>> + Send,
>;
pub type RestHandler =
Box<dyn FnOnce(Request) -> BoxFuture<'static, Result<Response, crate::Error>> + Send>;
pub type WebSocketHandler = Box<
dyn FnOnce(
BoxFuture<'static, Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
) -> BoxFuture<'static, Result<(), Error>>
+ Send,
>;
pub type WebSocketHandler = Box<dyn FnOnce(WebSocket) -> BoxFuture<'static, ()> + Send>;
pub enum RpcContinuation {
Rest(TimedResource<RestHandler>),
@@ -78,39 +72,4 @@ impl RpcContinuation {
RpcContinuation::WebSocket(a) => a.is_timed_out(),
}
}
pub async fn into_handler(self) -> Option<RestHandler> {
match self {
RpcContinuation::Rest(handler) => handler.get().await,
RpcContinuation::WebSocket(handler) => {
if let Some(handler) = handler.get().await {
Some(Box::new(
|req: Request<Body>| -> BoxFuture<'static, Result<Response<Body>, Error>> {
async move {
let (parts, body) = req.into_parts();
let req = Request::from_parts(parts, body);
let (res, ws_fut) = hyper_ws_listener::create_ws(req)
.with_kind(crate::ErrorKind::Network)?;
if let Some(ws_fut) = ws_fut {
tokio::task::spawn(async move {
match handler(ws_fut.boxed()).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
}
});
}
Ok(res)
}
.boxed()
},
))
} else {
None
}
}
}
}
}


@@ -1,61 +1,52 @@
pub mod model;
pub mod package;
pub mod prelude;
use std::future::Future;
use std::path::PathBuf;
use std::sync::Arc;
use futures::{FutureExt, SinkExt, StreamExt};
use axum::extract::ws::{self, WebSocket};
use axum::extract::WebSocketUpgrade;
use axum::response::Response;
use clap::Parser;
use futures::{FutureExt, StreamExt};
use http::header::COOKIE;
use http::HeaderMap;
use patch_db::json_ptr::JsonPointer;
use patch_db::{Dump, Revision};
use rpc_toolkit::command;
use rpc_toolkit::hyper::upgrade::Upgraded;
use rpc_toolkit::hyper::{Body, Error as HyperError, Request, Response};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn_async, CallRemote, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio::sync::oneshot;
use tokio::task::JoinError;
use tokio_tungstenite::tungstenite::protocol::frame::coding::CloseCode;
use tokio_tungstenite::tungstenite::protocol::CloseFrame;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::WebSocketStream;
use tracing::instrument;
use crate::context::{CliContext, RpcContext};
use crate::middleware::auth::{HasValidSession, HashSessionToken};
use crate::prelude::*;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
use crate::util::serde::{apply_expr, HandlerExtSerde};
#[instrument(skip_all)]
async fn ws_handler<
WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
>(
async fn ws_handler(
ctx: RpcContext,
session: Option<(HasValidSession, HashSessionToken)>,
ws_fut: WSFut,
mut stream: WebSocket,
) -> Result<(), Error> {
let (dump, sub) = ctx.db.dump_and_sub().await;
let mut stream = ws_fut
.await
.with_kind(ErrorKind::Network)?
.with_kind(ErrorKind::Unknown)?;
if let Some((session, token)) = session {
let kill = subscribe_to_session_kill(&ctx, token).await;
send_dump(session, &mut stream, dump).await?;
send_dump(session.clone(), &mut stream, dump).await?;
deal_with_messages(session, kill, sub, stream).await?;
} else {
stream
.close(Some(CloseFrame {
code: CloseCode::Error,
.send(ws::Message::Close(Some(ws::CloseFrame {
code: ws::close_code::ERROR,
reason: "UNAUTHORIZED".into(),
}))
})))
.await
.with_kind(ErrorKind::Network)?;
drop(stream);
}
Ok(())
@@ -80,7 +71,7 @@ async fn deal_with_messages(
_has_valid_authentication: HasValidSession,
mut kill: oneshot::Receiver<()>,
mut sub: patch_db::Subscriber,
mut stream: WebSocketStream<Upgraded>,
mut stream: WebSocket,
) -> Result<(), Error> {
let mut timer = tokio::time::interval(tokio::time::Duration::from_secs(5));
@@ -89,18 +80,18 @@ async fn deal_with_messages(
_ = (&mut kill).fuse() => {
tracing::info!("Closing WebSocket: Reason: Session Terminated");
stream
.close(Some(CloseFrame {
code: CloseCode::Error,
reason: "UNAUTHORIZED".into(),
}))
.await
.with_kind(ErrorKind::Network)?;
.send(ws::Message::Close(Some(ws::CloseFrame {
code: ws::close_code::ERROR,
reason: "UNAUTHORIZED".into(),
}))).await
.with_kind(ErrorKind::Network)?;
drop(stream);
return Ok(())
}
new_rev = sub.recv().fuse() => {
let rev = new_rev.expect("UNREACHABLE: patch-db is dropped");
stream
.send(Message::Text(serde_json::to_string(&rev).with_kind(ErrorKind::Serialization)?))
.send(ws::Message::Text(serde_json::to_string(&rev).with_kind(ErrorKind::Serialization)?))
.await
.with_kind(ErrorKind::Network)?;
}
@@ -117,7 +108,7 @@ async fn deal_with_messages(
// Send a periodic ping to keep the UI's connection alive.
_ = timer.tick().fuse() => {
stream
.send(Message::Ping(vec![]))
.send(ws::Message::Ping(vec![]))
.await
.with_kind(crate::ErrorKind::Network)?;
}
@@ -127,11 +118,11 @@ async fn deal_with_messages(
async fn send_dump(
_has_valid_authentication: HasValidSession,
stream: &mut WebSocketStream<Upgraded>,
stream: &mut WebSocket,
dump: Dump,
) -> Result<(), Error> {
stream
.send(Message::Text(
.send(ws::Message::Text(
serde_json::to_string(&dump).with_kind(ErrorKind::Serialization)?,
))
.await
@@ -139,11 +130,14 @@ async fn send_dump(
Ok(())
}
pub async fn subscribe(ctx: RpcContext, req: Request<Body>) -> Result<Response<Body>, Error> {
let (parts, body) = req.into_parts();
pub async fn subscribe(
ctx: RpcContext,
headers: HeaderMap,
ws: WebSocketUpgrade,
) -> Result<Response, Error> {
let session = match async {
let token = HashSessionToken::from_request_parts(&parts)?;
let session = HasValidSession::from_request_parts(&parts, &ctx).await?;
let token = HashSessionToken::from_header(headers.get(COOKIE))?;
let session = HasValidSession::from_header(headers.get(COOKIE), &ctx).await?;
Ok::<_, Error>((session, token))
}
.await
@@ -157,26 +151,24 @@ pub async fn subscribe(ctx: RpcContext, req: Request<Body>) -> Result<Response<B
None
}
};
let req = Request::from_parts(parts, body);
let (res, ws_fut) = hyper_ws_listener::create_ws(req).with_kind(ErrorKind::Network)?;
if let Some(ws_fut) = ws_fut {
tokio::task::spawn(async move {
match ws_handler(ctx, session, ws_fut).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
Ok(ws.on_upgrade(|ws| async move {
match ws_handler(ctx, session, ws).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
});
}
Ok(res)
}
}))
}
#[command(subcommands(dump, put, apply))]
pub fn db() -> Result<(), RpcError> {
Ok(())
pub fn db() -> ParentHandler {
ParentHandler::new()
.subcommand("dump", from_fn_async(cli_dump).with_display_serializable())
.subcommand("dump", from_fn_async(dump).no_cli())
.subcommand("put", put())
.subcommand("apply", from_fn_async(cli_apply).no_display())
.subcommand("apply", from_fn_async(apply).no_cli())
}
#[derive(Deserialize, Serialize)]
@@ -187,96 +179,36 @@ pub enum RevisionsRes {
}
#[instrument(skip_all)]
async fn cli_dump(
ctx: CliContext,
_format: Option<IoFormat>,
path: Option<PathBuf>,
) -> Result<Dump, RpcError> {
async fn cli_dump(ctx: CliContext, DumpParams { path }: DumpParams) -> Result<Dump, RpcError> {
let dump = if let Some(path) = path {
PatchDb::open(path).await?.dump().await
} else {
rpc_toolkit::command_helpers::call_remote(
ctx,
"db.dump",
serde_json::json!({}),
std::marker::PhantomData::<Dump>,
)
.await?
.result?
from_value::<Dump>(ctx.call_remote("db.dump", imbl_value::json!({})).await?)?
};
Ok(dump)
}
#[command(
custom_cli(cli_dump(async, context(CliContext))),
display(display_serializable)
)]
pub async fn dump(
#[context] ctx: RpcContext,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
#[allow(unused_variables)]
#[arg]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct DumpParams {
path: Option<PathBuf>,
) -> Result<Dump, Error> {
}
// #[command(
// custom_cli(cli_dump(async, context(CliContext))),
// display(display_serializable)
// )]
pub async fn dump(ctx: RpcContext, _: DumpParams) -> Result<Dump, Error> {
Ok(ctx.db.dump().await)
}
fn apply_expr(input: jaq_core::Val, expr: &str) -> Result<jaq_core::Val, Error> {
let (expr, errs) = jaq_core::parse::parse(expr, jaq_core::parse::main());
let Some(expr) = expr else {
return Err(Error::new(
eyre!("Failed to parse expression: {:?}", errs),
crate::ErrorKind::InvalidRequest,
));
};
let mut errs = Vec::new();
let mut defs = jaq_core::Definitions::core();
for def in jaq_std::std() {
defs.insert(def, &mut errs);
}
let filter = defs.finish(expr, Vec::new(), &mut errs);
if !errs.is_empty() {
return Err(Error::new(
eyre!("Failed to compile expression: {:?}", errs),
crate::ErrorKind::InvalidRequest,
));
};
let inputs = jaq_core::RcIter::new(std::iter::empty());
let mut res_iter = filter.run(jaq_core::Ctx::new([], &inputs), input);
let Some(res) = res_iter
.next()
.transpose()
.map_err(|e| eyre!("{e}"))
.with_kind(crate::ErrorKind::Deserialization)?
else {
return Err(Error::new(
eyre!("expr returned no results"),
crate::ErrorKind::InvalidRequest,
));
};
if res_iter.next().is_some() {
return Err(Error::new(
eyre!("expr returned too many results"),
crate::ErrorKind::InvalidRequest,
));
}
Ok(res)
}
#[instrument(skip_all)]
async fn cli_apply(ctx: CliContext, expr: String, path: Option<PathBuf>) -> Result<(), RpcError> {
async fn cli_apply(
ctx: CliContext,
ApplyParams { expr, path }: ApplyParams,
) -> Result<(), RpcError> {
if let Some(path) = path {
PatchDb::open(path)
.await?
@@ -301,30 +233,22 @@ async fn cli_apply(ctx: CliContext, expr: String, path: Option<PathBuf>) -> Resu
})
.await?;
} else {
rpc_toolkit::command_helpers::call_remote(
ctx,
"db.apply",
serde_json::json!({ "expr": expr }),
std::marker::PhantomData::<()>,
)
.await?
.result?;
ctx.call_remote("db.apply", imbl_value::json!({ "expr": expr }))
.await?;
}
Ok(())
}
#[command(
custom_cli(cli_apply(async, context(CliContext))),
display(display_none)
)]
pub async fn apply(
#[context] ctx: RpcContext,
#[arg] expr: String,
#[allow(unused_variables)]
#[arg]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ApplyParams {
expr: String,
path: Option<PathBuf>,
) -> Result<(), Error> {
}
pub async fn apply(ctx: RpcContext, ApplyParams { expr, .. }: ApplyParams) -> Result<(), Error> {
ctx.db
.mutate(|db| {
let res = apply_expr(
@@ -346,21 +270,25 @@ pub async fn apply(
.await
}
#[command(subcommands(ui))]
pub fn put() -> Result<(), RpcError> {
Ok(())
pub fn put() -> ParentHandler {
ParentHandler::new().subcommand(
"ui",
from_fn_async(ui)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UiParams {
pointer: JsonPointer,
value: Value,
}
#[command(display(display_serializable))]
// #[command(display(display_serializable))]
#[instrument(skip_all)]
pub async fn ui(
#[context] ctx: RpcContext,
#[arg] pointer: JsonPointer,
#[arg] value: Value,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<(), Error> {
pub async fn ui(ctx: RpcContext, UiParams { pointer, value, .. }: UiParams) -> Result<(), Error> {
let ptr = "/ui"
.parse::<JsonPointer>()
.with_kind(ErrorKind::Database)?


@@ -1,6 +1,5 @@
use std::collections::{BTreeMap, BTreeSet};
use std::net::{Ipv4Addr, Ipv6Addr};
use std::sync::Arc;
use chrono::{DateTime, Utc};
use emver::VersionRange;
@@ -8,8 +7,9 @@ use imbl_value::InternedString;
use ipnet::{Ipv4Net, Ipv6Net};
use isocountry::CountryCode;
use itertools::Itertools;
use models::{DataUrl, HealthCheckId, InterfaceId};
use models::{DataUrl, HealthCheckId, InterfaceId, PackageId};
use openssl::hash::MessageDigest;
use patch_db::json_ptr::JsonPointer;
use patch_db::{HasModel, Value};
use reqwest::Url;
use serde::{Deserialize, Serialize};
@@ -17,12 +17,12 @@ use ssh_key::public::Ed25519PublicKey;
use crate::account::AccountInfo;
use crate::config::spec::PackagePointerSpec;
use crate::install::progress::InstallProgress;
use crate::net::utils::{get_iface_ipv4_addr, get_iface_ipv6_addr};
use crate::prelude::*;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::progress::FullProgress;
use crate::s9pk::manifest::Manifest;
use crate::status::Status;
use crate::util::cpupower::{Governor};
use crate::util::cpupower::Governor;
use crate::util::Version;
use crate::version::{Current, VersionT};
use crate::{ARCH, PLATFORM};
@@ -225,14 +225,14 @@ impl Map for AllPackageData {
pub struct StaticFiles {
license: String,
instructions: String,
icon: String,
icon: DataUrl<'static>,
}
impl StaticFiles {
pub fn local(id: &PackageId, version: &Version, icon_type: &str) -> Self {
pub fn local(id: &PackageId, version: &Version, icon: DataUrl<'static>) -> Self {
StaticFiles {
license: format!("/public/package-data/{}/{}/LICENSE.md", id, version),
instructions: format!("/public/package-data/{}/{}/INSTRUCTIONS.md", id, version),
icon: format!("/public/package-data/{}/{}/icon.{}", id, version, icon_type),
icon,
}
}
}
@@ -243,7 +243,7 @@ impl StaticFiles {
pub struct PackageDataEntryInstalling {
pub static_files: StaticFiles,
pub manifest: Manifest,
pub install_progress: Arc<InstallProgress>,
pub install_progress: FullProgress,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
@@ -253,7 +253,7 @@ pub struct PackageDataEntryUpdating {
pub static_files: StaticFiles,
pub manifest: Manifest,
pub installed: InstalledPackageInfo,
pub install_progress: Arc<InstallProgress>,
pub install_progress: FullProgress,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
@@ -262,7 +262,7 @@ pub struct PackageDataEntryUpdating {
pub struct PackageDataEntryRestoring {
pub static_files: StaticFiles,
pub manifest: Manifest,
pub install_progress: Arc<InstallProgress>,
pub install_progress: FullProgress,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
@@ -422,7 +422,7 @@ impl Model<PackageDataEntry> {
PackageDataEntryMatchModelMut::Error(_) => None,
}
}
pub fn as_install_progress(&self) -> Option<&Model<Arc<InstallProgress>>> {
pub fn as_install_progress(&self) -> Option<&Model<FullProgress>> {
match self.as_match() {
PackageDataEntryMatchModelRef::Installing(a) => Some(a.as_install_progress()),
PackageDataEntryMatchModelRef::Updating(a) => Some(a.as_install_progress()),
@@ -432,7 +432,7 @@ impl Model<PackageDataEntry> {
PackageDataEntryMatchModelRef::Error(_) => None,
}
}
pub fn as_install_progress_mut(&mut self) -> Option<&mut Model<Arc<InstallProgress>>> {
pub fn as_install_progress_mut(&mut self) -> Option<&mut Model<FullProgress>> {
match self.as_match_mut() {
PackageDataEntryMatchModelMut::Installing(a) => Some(a.as_install_progress_mut()),
PackageDataEntryMatchModelMut::Updating(a) => Some(a.as_install_progress_mut()),
@@ -459,6 +459,29 @@ pub struct InstalledPackageInfo {
pub current_dependents: CurrentDependents,
pub current_dependencies: CurrentDependencies,
pub interface_addresses: InterfaceAddressMap,
pub store: Value,
pub store_exposed_ui: Vec<ExposedUI>,
pub store_exposed_dependents: Vec<JsonPointer>,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct ExposedDependent {
path: String,
title: String,
description: Option<String>,
masked: Option<bool>,
copyable: Option<bool>,
qr: Option<bool>,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct ExposedUI {
path: Vec<JsonPointer>,
title: String,
description: Option<String>,
masked: Option<bool>,
copyable: Option<bool>,
qr: Option<bool>,
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
@@ -478,7 +501,6 @@ impl Map for CurrentDependents {
type Key = PackageId;
type Value = CurrentDependencyInfo;
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
pub struct CurrentDependencies(pub BTreeMap<PackageId, CurrentDependencyInfo>);
impl CurrentDependencies {
@@ -514,7 +536,7 @@ pub struct CurrentDependencyInfo {
pub health_checks: BTreeSet<HealthCheckId>,
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Default, Deserialize, Serialize)]
pub struct InterfaceAddressMap(pub BTreeMap<InterfaceId, InterfaceAddresses>);
impl Map for InterfaceAddressMap {
type Key = InterfaceId;


@@ -1,22 +0,0 @@
use models::Version;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
pub fn get_packages(db: Peeked) -> Result<Vec<(PackageId, Version)>, Error> {
Ok(db
.as_package_data()
.keys()?
.into_iter()
.flat_map(|package_id| {
let version = db
.as_package_data()
.as_idx(&package_id)?
.as_manifest()
.as_version()
.de()
.ok()?;
Some((package_id, version))
})
.collect())
}


@@ -2,8 +2,9 @@ use std::collections::BTreeMap;
use std::marker::PhantomData;
use std::panic::UnwindSafe;
pub use imbl_value::Value;
use patch_db::value::InternedString;
pub use patch_db::{HasModel, PatchDb, Value};
pub use patch_db::{HasModel, PatchDb};
use serde::de::DeserializeOwned;
use serde::Serialize;


@@ -1,31 +1,26 @@
use std::collections::BTreeMap;
use std::time::Duration;
use color_eyre::eyre::eyre;
use clap::Parser;
use emver::VersionRange;
use models::OptionExt;
use rand::SeedableRng;
use rpc_toolkit::command;
use models::{OptionExt, PackageId};
use rpc_toolkit::{command, from_fn_async, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::config::action::ConfigRes;
use crate::config::spec::PackagePointerSpec;
use crate::config::{not_found, Config, ConfigSpec, ConfigureContext};
use crate::context::RpcContext;
use crate::config::{Config, ConfigSpec, ConfigureContext};
use crate::context::{CliContext, RpcContext};
use crate::db::model::{CurrentDependencies, Database};
use crate::prelude::*;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::s9pk::manifest::Manifest;
use crate::status::DependencyConfigErrors;
use crate::util::serde::display_serializable;
use crate::util::{display_none, Version};
use crate::volume::Volumes;
use crate::util::serde::HandlerExtSerde;
use crate::util::Version;
use crate::Error;
#[command(subcommands(configure))]
pub fn dependency() -> Result<(), Error> {
Ok(())
pub fn dependency() -> ParentHandler {
ParentHandler::new().subcommand("configure", configure())
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel)]
@@ -58,77 +53,41 @@ pub struct DepInfo {
pub requirement: DependencyRequirement,
pub description: Option<String>,
#[serde(default)]
pub config: Option<DependencyConfig>,
pub config: Option<Value>, // TODO: remove
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[model = "Model<Self>"]
pub struct DependencyConfig {
check: PackageProcedure,
auto_configure: PackageProcedure,
#[command(rename_all = "kebab-case")]
pub struct ConfigureParams {
#[arg(name = "dependent-id")]
dependent_id: PackageId,
#[arg(name = "dependency-id")]
dependency_id: PackageId,
}
impl DependencyConfig {
pub async fn check(
&self,
ctx: &RpcContext,
dependent_id: &PackageId,
dependent_version: &Version,
dependent_volumes: &Volumes,
dependency_id: &PackageId,
dependency_config: &Config,
) -> Result<Result<NoOutput, String>, Error> {
Ok(self
.check
.sandboxed(
ctx,
dependent_id,
dependent_version,
dependent_volumes,
Some(dependency_config),
None,
ProcedureName::Check(dependency_id.clone()),
)
.await?
.map_err(|(_, e)| e))
}
pub async fn auto_configure(
&self,
ctx: &RpcContext,
dependent_id: &PackageId,
dependent_version: &Version,
dependent_volumes: &Volumes,
old: &Config,
) -> Result<Config, Error> {
self.auto_configure
.sandboxed(
ctx,
dependent_id,
dependent_version,
dependent_volumes,
Some(old),
None,
ProcedureName::AutoConfig(dependent_id.clone()),
)
.await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))
}
}
#[command(
subcommands(self(configure_impl(async)), configure_dry),
display(display_none)
)]
pub async fn configure(
#[arg(rename = "dependent-id")] dependent_id: PackageId,
#[arg(rename = "dependency-id")] dependency_id: PackageId,
) -> Result<(PackageId, PackageId), Error> {
Ok((dependent_id, dependency_id))
pub fn configure() -> ParentHandler<ConfigureParams> {
ParentHandler::new()
.root_handler(
from_fn_async(configure_impl)
.with_inherited(|params, _| params)
.no_cli(),
)
.subcommand(
"dry",
from_fn_async(configure_dry)
.with_inherited(|params, _| params)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
}
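The new `configure()` above replaces the `#[command(subcommands(...))]` attribute macro with a builder-style `ParentHandler`: a root handler for the bare call plus a named `"dry"` subcommand. A self-contained sketch of that dispatch shape (`MiniParent` and its string-based handlers are hypothetical stand-ins, not the `rpc_toolkit` API):

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a builder-style parent handler: an optional
// root handler plus named subcommands, dispatched by method path.
struct MiniParent {
    root: Option<Box<dyn Fn(&str) -> String>>,
    subcommands: BTreeMap<String, Box<dyn Fn(&str) -> String>>,
}

impl MiniParent {
    fn new() -> Self {
        MiniParent { root: None, subcommands: BTreeMap::new() }
    }
    fn root_handler(mut self, f: impl Fn(&str) -> String + 'static) -> Self {
        self.root = Some(Box::new(f));
        self
    }
    fn subcommand(mut self, name: &str, f: impl Fn(&str) -> String + 'static) -> Self {
        self.subcommands.insert(name.to_string(), Box::new(f));
        self
    }
    // An empty path hits the root handler; otherwise look up the subcommand.
    fn handle(&self, path: &str, params: &str) -> Option<String> {
        if path.is_empty() {
            self.root.as_ref().map(|f| f(params))
        } else {
            self.subcommands.get(path).map(|f| f(params))
        }
    }
}

fn main() {
    // Mirrors configure(): a root handler plus a "dry" subcommand.
    let configure = MiniParent::new()
        .root_handler(|p| format!("configured {p}"))
        .subcommand("dry", |p| format!("dry-run for {p}"));

    assert_eq!(configure.handle("", "lnd").unwrap(), "configured lnd");
    assert_eq!(configure.handle("dry", "lnd").unwrap(), "dry-run for lnd");
    assert!(configure.handle("missing", "lnd").is_none());
    println!("ok");
}
```

The builder form lets each subcommand carry its own display and CLI adapters (`with_display_serializable`, `with_remote_cli`) instead of encoding them in attribute arguments.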
pub async fn configure_impl(
ctx: RpcContext,
(pkg_id, dep_id): (PackageId, PackageId),
_: Empty,
ConfigureParams {
dependent_id,
dependency_id,
}: ConfigureParams,
) -> Result<(), Error> {
let breakages = BTreeMap::new();
let overrides = Default::default();
@@ -136,7 +95,7 @@ pub async fn configure_impl(
old_config: _,
new_config,
spec: _,
} = configure_logic(ctx.clone(), (pkg_id, dep_id.clone())).await?;
} = configure_logic(ctx.clone(), (dependent_id, dependency_id.clone())).await?;
let configure_context = ConfigureContext {
breakages,
@@ -145,7 +104,18 @@ pub async fn configure_impl(
dry_run: false,
overrides,
};
crate::config::configure(&ctx, &dep_id, configure_context).await?;
ctx.services
.get(&dependency_id)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {dependency_id}"),
ErrorKind::Unknown,
)
})?
.configure(configure_context)
.await?;
Ok(())
}
@@ -157,90 +127,95 @@ pub struct ConfigDryRes {
pub spec: ConfigSpec,
}
#[command(rename = "dry", display(display_serializable))]
// #[command(rename = "dry", display(display_serializable))]
#[instrument(skip_all)]
pub async fn configure_dry(
#[context] ctx: RpcContext,
#[parent_data] (pkg_id, dependency_id): (PackageId, PackageId),
ctx: RpcContext,
_: Empty,
ConfigureParams {
dependent_id,
dependency_id,
}: ConfigureParams,
) -> Result<ConfigDryRes, Error> {
configure_logic(ctx, (pkg_id, dependency_id)).await
configure_logic(ctx, (dependent_id, dependency_id)).await
}
pub async fn configure_logic(
ctx: RpcContext,
(pkg_id, dependency_id): (PackageId, PackageId),
(dependent_id, dependency_id): (PackageId, PackageId),
) -> Result<ConfigDryRes, Error> {
let db = ctx.db.peek().await;
let pkg = db
.as_package_data()
.as_idx(&pkg_id)
.or_not_found(&pkg_id)?
.as_installed()
.or_not_found(&pkg_id)?;
let pkg_version = pkg.as_manifest().as_version().de()?;
let pkg_volumes = pkg.as_manifest().as_volumes().de()?;
let dependency = db
.as_package_data()
.as_idx(&dependency_id)
.or_not_found(&dependency_id)?
.as_installed()
.or_not_found(&dependency_id)?;
let dependency_config_action = dependency
.as_manifest()
.as_config()
.de()?
.ok_or_else(|| not_found!("Manifest Config"))?;
let dependency_version = dependency.as_manifest().as_version().de()?;
let dependency_volumes = dependency.as_manifest().as_volumes().de()?;
let dependency = pkg
.as_manifest()
.as_dependencies()
.as_idx(&dependency_id)
.or_not_found(&dependency_id)?;
// let db = ctx.db.peek().await;
// let pkg = db
// .as_package_data()
// .as_idx(&pkg_id)
// .or_not_found(&pkg_id)?
// .as_installed()
// .or_not_found(&pkg_id)?;
// let pkg_version = pkg.as_manifest().as_version().de()?;
// let pkg_volumes = pkg.as_manifest().as_volumes().de()?;
// let dependency = db
// .as_package_data()
// .as_idx(&dependency_id)
// .or_not_found(&dependency_id)?
// .as_installed()
// .or_not_found(&dependency_id)?;
// let dependency_config_action = dependency
// .as_manifest()
// .as_config()
// .de()?
// .ok_or_else(|| not_found!("Manifest Config"))?;
// let dependency_version = dependency.as_manifest().as_version().de()?;
// let dependency_volumes = dependency.as_manifest().as_volumes().de()?;
// let dependency = pkg
// .as_manifest()
// .as_dependencies()
// .as_idx(&dependency_id)
// .or_not_found(&dependency_id)?;
let ConfigRes {
config: maybe_config,
spec,
} = dependency_config_action
.get(
&ctx,
&dependency_id,
&dependency_version,
&dependency_volumes,
)
.await?;
// let ConfigRes {
// config: maybe_config,
// spec,
// } = dependency_config_action
// .get(
// &ctx,
// &dependency_id,
// &dependency_version,
// &dependency_volumes,
// )
// .await?;
let old_config = if let Some(config) = maybe_config {
config
} else {
spec.gen(
&mut rand::rngs::StdRng::from_entropy(),
&Some(Duration::new(10, 0)),
)?
};
// let old_config = if let Some(config) = maybe_config {
// config
// } else {
// spec.gen(
// &mut rand::rngs::StdRng::from_entropy(),
// &Some(Duration::new(10, 0)),
// )?
// };
let new_config = dependency
.as_config()
.de()?
.ok_or_else(|| not_found!("Config"))?
.auto_configure
.sandboxed(
&ctx,
&pkg_id,
&pkg_version,
&pkg_volumes,
Some(&old_config),
None,
ProcedureName::AutoConfig(dependency_id.clone()),
)
.await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))?;
// let new_config = dependency
// .as_config()
// .de()?
// .ok_or_else(|| not_found!("Config"))?
// .auto_configure
// .sandboxed(
// &ctx,
// &pkg_id,
// &pkg_version,
// &pkg_volumes,
// Some(&old_config),
// None,
// ProcedureName::AutoConfig(dependency_id.clone()),
// )
// .await?
// .map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))?;
Ok(ConfigDryRes {
old_config,
new_config,
spec,
})
// Ok(ConfigDryRes {
// old_config,
// new_config,
// spec,
// })
todo!()
}
#[instrument(skip_all)]
@@ -324,36 +299,7 @@ pub async fn compute_dependency_config_errs(
.or_not_found(dependency)?
.config
{
if let Err(error) = cfg
.check(
ctx,
&manifest.id,
&manifest.version,
&manifest.volumes,
dependency,
&if let Some(config) = dependency_config.get(dependency) {
config.clone()
} else if let Some(manifest) = db
.as_package_data()
.as_idx(dependency)
.and_then(|pde| pde.as_installed())
.map(|i| i.as_manifest().de())
.transpose()?
{
if let Some(config) = &manifest.config {
config
.get(ctx, &manifest.id, &manifest.version, &manifest.volumes)
.await?
.config
.unwrap_or_default()
} else {
Config::default()
}
} else {
Config::default()
},
)
.await?
let error = todo!();
{
dependency_config_errs.insert(dependency.clone(), error);
}


@@ -5,16 +5,13 @@ use std::path::Path;
use ed25519::pkcs8::EncodePrivateKey;
use ed25519::PublicKeyBytes;
use ed25519_dalek::{SigningKey, VerifyingKey};
use rpc_toolkit::command;
use tracing::instrument;
use crate::context::SdkContext;
use crate::util::display_none;
use crate::context::CliContext;
use crate::{Error, ResultExt};
#[command(cli_only, blocking, display(display_none))]
#[instrument(skip_all)]
pub fn init(#[context] ctx: SdkContext) -> Result<(), Error> {
pub fn init(ctx: CliContext) -> Result<(), Error> {
if !ctx.developer_key_path.exists() {
let parent = ctx.developer_key_path.parent().unwrap_or(Path::new("/"));
if !parent.exists() {
@@ -48,8 +45,3 @@ pub fn init(#[context] ctx: SdkContext) -> Result<(), Error> {
}
Ok(())
}
#[command(subcommands(crate::s9pk::verify, crate::config::verify_spec))]
pub fn verify() -> Result<(), Error> {
Ok(())
}


@@ -1,44 +1,70 @@
use std::path::Path;
use std::sync::Arc;
use rpc_toolkit::command;
use clap::Parser;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn, from_fn_async, AnyContext, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use crate::context::DiagnosticContext;
use crate::disk::repair;
use crate::context::{CliContext, DiagnosticContext};
use crate::init::SYSTEM_REBUILD_PATH;
use crate::logs::{fetch_logs, LogResponse, LogSource};
use crate::shutdown::Shutdown;
use crate::util::display_none;
use crate::Error;
#[command(subcommands(error, logs, exit, restart, forget_disk, disk, rebuild))]
pub fn diagnostic() -> Result<(), Error> {
Ok(())
pub fn diagnostic() -> ParentHandler {
ParentHandler::new()
.subcommand("error", from_fn(error).with_remote_cli::<CliContext>())
.subcommand("logs", from_fn_async(logs).no_cli())
.subcommand(
"exit",
from_fn(exit).no_display().with_remote_cli::<CliContext>(),
)
.subcommand(
"restart",
from_fn(restart)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand("disk", disk())
.subcommand(
"rebuild",
from_fn_async(rebuild)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[command]
pub fn error(#[context] ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error> {
// #[command]
pub fn error(ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error> {
Ok(ctx.error.clone())
}
#[command(rpc_only)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LogsParams {
limit: Option<usize>,
cursor: Option<String>,
before: bool,
}
pub async fn logs(
#[arg] limit: Option<usize>,
#[arg] cursor: Option<String>,
#[arg] before: bool,
_: AnyContext,
LogsParams {
limit,
cursor,
before,
}: LogsParams,
) -> Result<LogResponse, Error> {
Ok(fetch_logs(LogSource::System, limit, cursor, before).await?)
}
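The `logs` handler above threads a `(limit, cursor, before)` triple from `LogsParams` into `fetch_logs`. A minimal in-memory sketch of that cursor-paging shape (the `page_logs` function, the default limit, and index-based cursors are hypothetical; the real `fetch_logs` pages journald entries by opaque cursor):

```rust
// Hypothetical in-memory sketch of (limit, cursor, before) paging, the
// parameter shape LogsParams feeds into fetch_logs; slice indices stand
// in for journald cursors.
fn page_logs(logs: &[&str], limit: Option<usize>, cursor: Option<usize>, before: bool) -> Vec<String> {
    let limit = limit.unwrap_or(50);
    let end = cursor.unwrap_or(logs.len()).min(logs.len());
    if before {
        // Entries strictly before the cursor, newest-first, up to `limit`.
        logs[..end].iter().rev().take(limit).map(|s| s.to_string()).collect()
    } else {
        // Entries at and after the cursor, oldest-first, up to `limit`.
        logs[end..].iter().take(limit).map(|s| s.to_string()).collect()
    }
}

fn main() {
    let logs = ["boot", "net up", "svc start", "svc ready"];
    // Page backwards from entry 3: the two entries just before it.
    assert_eq!(page_logs(&logs, Some(2), Some(3), true), vec!["svc start", "net up"]);
    // Page forwards from entry 1.
    assert_eq!(page_logs(&logs, Some(2), Some(1), false), vec!["net up", "svc start"]);
    println!("ok");
}
```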
#[command(display(display_none))]
pub fn exit(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
pub fn exit(ctx: DiagnosticContext) -> Result<(), Error> {
ctx.shutdown.send(None).expect("receiver dropped");
Ok(())
}
#[command(display(display_none))]
pub fn restart(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
pub fn restart(ctx: DiagnosticContext) -> Result<(), Error> {
ctx.shutdown
.send(Some(Shutdown {
export_args: ctx
@@ -50,20 +76,21 @@ pub fn restart(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
.expect("receiver dropped");
Ok(())
}
#[command(display(display_none))]
pub async fn rebuild(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
pub async fn rebuild(ctx: DiagnosticContext) -> Result<(), Error> {
tokio::fs::write(SYSTEM_REBUILD_PATH, b"").await?;
restart(ctx)
}
#[command(subcommands(forget_disk, repair))]
pub fn disk() -> Result<(), Error> {
Ok(())
pub fn disk() -> ParentHandler {
ParentHandler::new().subcommand(
"forget",
from_fn_async(forget_disk)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[command(rename = "forget", display(display_none))]
pub async fn forget_disk() -> Result<(), Error> {
pub async fn forget_disk(_: AnyContext) -> Result<(), Error> {
let disk_guid = Path::new("/media/embassy/config/disk.guid");
if tokio::fs::metadata(disk_guid).await.is_ok() {
tokio::fs::remove_file(disk_guid).await?;


@@ -7,8 +7,8 @@ use tracing::instrument;
use super::fsck::{RepairStrategy, RequiresReboot};
use super::util::pvscan;
use crate::disk::mount::filesystem::block_dev::mount;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::{FileSystem, ReadWrite};
use crate::disk::mount::util::unmount;
use crate::util::Invoke;
use crate::{Error, ErrorKind, ResultExt};
@@ -142,7 +142,9 @@ pub async fn create_fs<P: AsRef<Path>>(
.arg(&blockdev_path)
.invoke(crate::ErrorKind::DiskManagement)
.await?;
mount(&blockdev_path, datadir.as_ref().join(name), ReadWrite).await?;
BlockDev::new(&blockdev_path)
.mount(datadir.as_ref().join(name), ReadWrite)
.await?;
Ok(())
}
@@ -318,7 +320,9 @@ pub async fn mount_fs<P: AsRef<Path>>(
tokio::fs::rename(&tmp_luks_bak, &luks_bak).await?;
}
mount(&blockdev_path, datadir.as_ref().join(name), ReadWrite).await?;
BlockDev::new(&blockdev_path)
.mount(datadir.as_ref().join(name), ReadWrite)
.await?;
Ok(reboot)
}
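The two hunks above replace the free `mount(&blockdev_path, ...)` helper with `BlockDev::new(&blockdev_path).mount(...)`, moving the operation onto the filesystem source type. A self-contained sketch of that refactor's shape (the `FileSystem` trait and string-returning `mount` here are hypothetical; the real trait performs the mount):

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch of the refactor above: mounting becomes a method
// available on any FileSystem source, so call sites read
// BlockDev::new(dev).mount(target).
trait FileSystem {
    fn source(&self) -> &Path;
    fn mount(&self, target: &Path) -> String {
        // Real code would invoke mount(8); here we only describe the call.
        format!("mount {} at {}", self.source().display(), target.display())
    }
}

struct BlockDev {
    path: PathBuf,
}

impl BlockDev {
    fn new(path: impl Into<PathBuf>) -> Self {
        BlockDev { path: path.into() }
    }
}

impl FileSystem for BlockDev {
    fn source(&self) -> &Path {
        &self.path
    }
}

fn main() {
    let dev = BlockDev::new("/dev/sda1");
    assert_eq!(dev.mount(Path::new("/media/data")), "mount /dev/sda1 at /media/data");
    println!("ok");
}
```

Centralizing mount logic on the source type means new filesystem kinds (LUKS, bind mounts) can implement the same trait instead of growing new free-function variants.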


@@ -1,13 +1,11 @@
use std::path::{Path, PathBuf};
use clap::ArgMatches;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, AnyContext, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::disk::util::DiskInfo;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
use crate::Error;
pub mod fsck;
@@ -42,16 +40,30 @@ impl OsPartitionInfo {
}
}
#[command(subcommands(list, repair))]
pub fn disk() -> Result<(), Error> {
Ok(())
pub fn disk() -> ParentHandler {
ParentHandler::new()
.subcommand(
"list",
from_fn_async(list)
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(display_disk_info(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"repair",
from_fn_async(repair)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
fn display_disk_info(params: WithIoFormat<Empty>, args: Vec<DiskInfo>) {
use prettytable::*;
if matches.is_present("format") {
return display_serializable(info, matches);
if let Some(format) = params.format {
return display_serializable(format, args);
}
let mut table = Table::new();
@@ -60,9 +72,9 @@ fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
"LABEL",
"CAPACITY",
"USED",
"EMBASSY OS VERSION"
"STARTOS VERSION"
]);
for disk in info {
for disk in args {
let row = row![
disk.logicalname.display(),
"N/A",
@@ -101,17 +113,11 @@ fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
table.print_tty(false).unwrap();
}
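The rewritten `display_disk_info` above takes a `WithIoFormat` wrapper and branches: serialized output when a format was requested, a human-readable table otherwise. A dependency-free sketch of that branching (the `display` function, the `"json"` literal, and the hand-rolled serialization are hypothetical stand-ins for `display_serializable` and `prettytable`):

```rust
// Hypothetical sketch of the display pattern above: emit structured
// output when a format flag was passed, a human table otherwise.
fn display(format: Option<&str>, disks: &[(&str, &str)]) -> String {
    match format {
        // Hand-rolled JSON to stay dependency-free; real code serializes.
        Some("json") => {
            let rows: Vec<String> = disks
                .iter()
                .map(|(name, cap)| format!("{{\"logicalname\":\"{name}\",\"capacity\":\"{cap}\"}}"))
                .collect();
            format!("[{}]", rows.join(","))
        }
        // Default: fixed-width columns standing in for the prettytable output.
        _ => {
            let mut out = format!("{:<12} {:<8}", "LOGICALNAME", "CAPACITY");
            for (name, cap) in disks {
                out.push('\n');
                out.push_str(&format!("{name:<12} {cap:<8}"));
            }
            out
        }
    }
}

fn main() {
    let disks = [("/dev/sda", "512G")];
    assert!(display(Some("json"), &disks).starts_with("[{"));
    assert!(display(None, &disks).contains("LOGICALNAME"));
    println!("ok");
}
```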
#[command(display(display_disk_info))]
pub async fn list(
#[context] ctx: RpcContext,
#[allow(unused_variables)]
#[arg]
format: Option<IoFormat>,
) -> Result<Vec<DiskInfo>, Error> {
// #[command(display(display_disk_info))]
pub async fn list(ctx: RpcContext, _: Empty) -> Result<Vec<DiskInfo>, Error> {
crate::disk::util::list(&ctx.os_partitions).await
}
#[command(display(display_none))]
pub async fn repair() -> Result<(), Error> {
tokio::fs::write(REPAIR_DISK_PATH, b"").await?;
Ok(())

Some files were not shown because too many files have changed in this diff.