Compare commits


29 Commits

Author SHA1 Message Date
Aiden McClelland
2fbaaebf44 Bugfixes for alpha.12 (#3049)
* squashfs-wip

* sdk fixes

* misc fixes

* bump sdk

* Include StartTunnel installation command

Added installation instructions for StartTunnel.

* CA instead of leaf for StartTunnel (#3046)

* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* prevent gateways from getting stuck empty

* fix set-password

* misc networking fixes

* build and efi fixes

* efi fixes

* alpha.13

* remove cross

* fix tests

* provide path to upgrade

* fix networkmanager issues

* remove squashfs before creating

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
2025-11-15 22:33:03 -07:00
StuPleb
edb916338c minor typos and grammar (#3047)
* minor typos and grammar

* added missing word - compute
2025-11-14 12:48:41 -07:00
Aiden McClelland
f7e947d37d Fix installation command for StartTunnel (#3048) 2025-11-14 12:42:05 -07:00
Aiden McClelland
a9e3d1ed75 Revise StartTunnel installation and update commands
Updated installation and update instructions for StartTunnel.
2025-11-14 12:39:53 -07:00
Matt Hill
ce97827c42 CA instead of leaf for StartTunnel (#3046)
* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-10 14:15:58 -07:00
Aiden McClelland
3efec07338 Include StartTunnel installation command
Added installation instructions for StartTunnel.
2025-11-07 20:00:31 +00:00
Aiden McClelland
68f401bfa3 Feature/start tunnel (#3037)
* fix live-build resolv.conf

* improved debuggability

* wip: start-tunnel

* fixes for trixie and tor

* non-free-firmware on trixie

* wip

* web server WIP

* wip: tls refactor

* FE patchdb, mocks, and most endpoints

* fix editing records and patch mocks

* refactor complete

* finish api

* build and formatter update

* minor change to viewing addresses and fix build

* fixes

* more providers

* endpoint for getting config

* fix tests

* api fixes

* wip: separate port forward controller into parts

* simplify iptables rules

* bump sdk

* misc fixes

* predict next subnet and ip, use wan ips, and form validation

* refactor: break big components apart and address todos (#3043)

* refactor: break big components apart and address todos

* starttunnel readme, fix pf mocks, fix adding tor domain in startos

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* better tui

* tui tweaks

* fix: address comments

* better regex for subnet

* fixes

* better validation

* handle rpc errors

* build fixes

* fix: address comments (#3044)

* fix: address comments

* fix unread notification mocks

* fix row click for notification

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix raspi build

* fix build

* fix build

* fix build

* fix build

* try to fix build

* fix tests

* fix tests

* fix rsync tests

* delete useless effectful test

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
2025-11-07 10:12:05 +00:00
Matt Hill
1ea525feaa make textarea rows configurable (#3042)
* make textarea rows configurable

* add comments

* better defaults
2025-10-31 11:46:49 -06:00
Alex Inkin
57c4a7527e fix: make CPU meter not go to 11 (#3038) 2025-10-30 16:13:39 -06:00
Aiden McClelland
5aa9c045e1 fix live-build resolv.conf (#3035)
* fix live-build resolv.conf

* improved debuggability
2025-09-24 22:44:25 -06:00
Matt Hill
6f1900f3bb limit adding gateway to StartTunnel, better copy around Tor SSL (#3033)
* limit adding gateway to StartTunnel, better copy around Tor SSL

* properly differentiate ssl

* exclude disconnected gateways

* better error handling

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-09-24 13:22:26 -06:00
Aiden McClelland
bc62de795e bugfixes for alpha.10 (#3032)
* bugfixes for alpha.10

* bump raspi kernel

* rpi kernel bump

* alpha.11
2025-09-23 22:42:17 +00:00
Alex Inkin
c62ca4b183 fix: make long dropdown options wrap (#3031) 2025-09-21 06:06:33 -06:00
Alex Inkin
876e5bc683 fix: fix overflowing interface table (#3027) 2025-09-20 07:10:58 -06:00
Alex Inkin
b99f3b73cd fix: make logs page take up all space (#3030) 2025-09-20 07:10:28 -06:00
Matt Hill
7eecf29449 fix dep error display, show starting if any health check starting, show disabled health check message, remove loader from service list, animated dots, better color (#3025)
* refactor addresses to not need gateways array

* fix dep error display, show starting if any health check starting, show disabled health check message, remove loader from service list, animated dots, better color

* fix: fix action results textfields

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-09-17 10:32:20 -06:00
Mariusz Kogen
1d331d7810 Fix file permissions for developer key and auth cookie (#3024)
* fix permissions

* include read for group
2025-09-16 09:09:33 -06:00
Aiden McClelland
68414678d8 sdk updates; beta.39 (#3022)
* sdk updates; beta.39

* beta.40
2025-09-11 15:47:48 -06:00
Aiden McClelland
2f6b9dac26 Bugfix/dns recursion (#3023)
* fix dns recursion and localhost

* additional fix
2025-09-11 15:47:38 -06:00
Aiden McClelland
d1812d875b fix dns recursion and localhost (#3021) 2025-09-11 12:35:12 -06:00
Aiden McClelland
723dea100f add more gateway info to hostnameInfo (#3019) 2025-09-10 12:16:35 -06:00
Matt Hill
c4419ed31f show correct gateway name when adding public domain 2025-09-10 09:57:39 -06:00
Matt Hill
754ab86e51 only show http for tor if protocol is http 2025-09-10 09:36:03 -06:00
Mariusz Kogen
04dab532cd Motd Redesign - Visual and Structural Upgrade (#3018)
New 040 motd
2025-09-10 06:36:27 +00:00
Matt Hill
add01ebc68 Gateways, domains, and new service interface (#3001)
* add support for inbound proxies

* backend changes

* fix file type

* proxy -> tunnel, implement backend apis

* wip start-tunneld

* add domains and gateways, remove routers, fix docs links

* dont show hidden actions

* show and test dns

* edit instead of change acme and change gateway

* refactor: domains page

* refactor: gateways page

* domains and acme refactor

* certificate authorities

* refactor public/private gateways

* fix fe types

* domains mostly finished

* refactor: add file control to form service

* add ip util to sdk

* domains api + migration

* start service interface page, WIP

* different options for clearnet domains

* refactor: styles for interfaces page

* minor

* better placeholder for no addresses

* start sorting addresses

* best address logic

* comments

* fix unnecessary export

* MVP of service interface page

* domains preferred

* fix: address comments

* only translations left

* wip: start-tunnel & fix build

* forms for adding domain, rework things based on new ideas

* fix: dns testing

* public domain, max width, descriptions for dns

* nix StartOS domains, implement public and private domains at interface scope

* restart tor instead of reset

* better icon for restart tor

* dns

* fix sort functions for public and private domains

* with todos

* update types

* clean up tech debt, bump dependencies

* revert to ts-rs v9

* fix all types

* fix dns form

* add missing translations

* it builds

* fix: comments (#3009)

* fix: comments

* undo default

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix: refactor legacy components (#3010)

* fix: comments

* fix: refactor legacy components

* remove default again

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* more translations

* wip

* fix deadlock

* could work

* simple renaming

* placeholder for empty service interfaces table

* honor hidden form values

* remove logs

* reason instead of description

* fix dns

* misc fixes

* implement toggling gateways for service interface

* fix showing dns records

* move status column in service list

* remove unnecessary truthy check

* refactor: refactor forms components and remove legacy Taiga UI package (#3012)

* handle wh file uploads

* wip: debugging tor

* socks5 proxy working

* refactor: fix multiple comments (#3013)

* refactor: fix multiple comments

* styling changes, add documentation to sidebar

* translations for dns page

* refactor: subtle colors

* rearrange service page

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix file_stream and remove non-terminating test

* clean up logs

* support for sccache

* fix gha sccache

* more marketplace translations

* install wizard clarity

* stub hostnameInfo in migration

* fix address info after setup, fix styling on SI page, new 040 release notes

* remove tor logs from os

* misc fixes

* reset tor still not functioning...

* update ts

* minor styling and wording

* chore: some fixes (#3015)

* fix gateway renames

* different handling for public domains

* styling fixes

* whole navbar should not be clickable on service show page

* timeout getState request

* remove links from changelog

* misc fixes from pairing

* use custom name for gateway in more places

* fix dns parsing

* closes #3003

* closes #2999

* chore: some fixes (#3017)

* small copy change

* revert hardcoded error for testing

* dont require port forward if gateway is public

* use old wan ip when not available

* fix .const hanging on undefined

* fix test

* fix doc test

* fix renames

* update deps

* allow specifying dependency metadata directly

* temporarily make dependencies not clickable in marketplace listings

* fix socks bind

* fix test

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: waterplea <alexander@inkin.ru>
2025-09-10 03:43:51 +00:00
Mariusz Kogen
1cc9a1a30b build(cli): harden build-cli.sh (zig check, env defaults, GIT_HASH) (#3016) 2025-09-09 14:18:52 -06:00
Dominion5254
92a1de7500 remove entire service package directory on hard uninstall (#3007)
* remove entire service package directory on hard uninstall

* fix package path
2025-08-12 15:46:01 -06:00
Alex Inkin
a6fedcff80 fix: extract correct manifest in updating state (#3004) 2025-08-01 22:51:38 -06:00
Alex Inkin
55eb999305 fix: update notifications design (#3000) 2025-07-29 22:17:59 -06:00
650 changed files with 33930 additions and 24847 deletions


@@ -28,6 +28,7 @@ on:
 - aarch64
 - aarch64-nonfree
 - raspberrypi
+- riscv64
 deploy:
 type: choice
 description: Deploy
@@ -45,7 +46,7 @@ on:
 - next/*
 env:
-NODEJS_VERSION: "22.17.1"
+NODEJS_VERSION: "24.11.0"
 ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
 jobs:
@@ -62,6 +63,7 @@ jobs:
 "aarch64": ["aarch64"],
 "aarch64-nonfree": ["aarch64"],
 "raspberrypi": ["aarch64"],
+"riscv64": ["riscv64"],
 "ALL": ["x86_64", "aarch64"]
 }')[github.event.inputs.platform || 'ALL']
 }}
@@ -93,8 +95,18 @@ jobs:
 - name: Set up Docker Buildx
 uses: docker/setup-buildx-action@v3
+- name: Configure sccache
+uses: actions/github-script@v7
+with:
+script: |
+core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
+core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
 - name: Make
 run: make ARCH=${{ matrix.arch }} compiled-${{ matrix.arch }}.tar
+env:
+SCCACHE_GHA_ENABLED: on
+SCCACHE_GHA_VERSION: 0
 - uses: actions/upload-artifact@v4
 with:
@@ -129,6 +141,7 @@ jobs:
 "aarch64": "buildjet-8vcpu-ubuntu-2204-arm",
 "aarch64-nonfree": "buildjet-8vcpu-ubuntu-2204-arm",
 "raspberrypi": "buildjet-8vcpu-ubuntu-2204-arm",
+"riscv64": "buildjet-8vcpu-ubuntu-2204",
 }')[matrix.platform]
 )
 )[github.event.inputs.runner == 'fast']
@@ -142,6 +155,7 @@ jobs:
 "aarch64": "aarch64",
 "aarch64-nonfree": "aarch64",
 "raspberrypi": "aarch64",
+"riscv64": "riscv64",
 }')[matrix.platform]
 }}
 steps:
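The matrix in the hunk above maps each selectable platform to the architectures it compiles for, encoded as a `fromJson()` expression, with `riscv64` newly added. A plain-shell sketch of the same mapping (the `platform_archs` helper name is illustrative, not anything defined in the repo):

```shell
# Illustrative mirror of the workflow's platform -> build-arch mapping,
# including the new riscv64 entry. ALL fans out to both primary arches.
platform_archs() {
  case "$1" in
    x86_64)          echo "x86_64" ;;
    aarch64)         echo "aarch64" ;;
    aarch64-nonfree) echo "aarch64" ;;
    raspberrypi)     echo "aarch64" ;;
    riscv64)         echo "riscv64" ;;
    ALL|*)           echo "x86_64 aarch64" ;;
  esac
}

platform_archs raspberrypi   # aarch64
platform_archs riscv64       # riscv64
```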


@@ -11,7 +11,7 @@ on:
 - next/*
 env:
-NODEJS_VERSION: "22.17.1"
+NODEJS_VERSION: "24.11.0"
 ENVIRONMENT: dev-unstable
 jobs:

.gitignore (3 changes)

@@ -1,8 +1,5 @@
 .DS_Store
 .idea
-system-images/binfmt/binfmt.tar
-system-images/compat/compat.tar
-system-images/util/util.tar
 /*.img
 /*.img.gz
 /*.img.xz


@@ -25,9 +25,9 @@ docker buildx create --use
 curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # proceed with default installation
 curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
 source ~/.bashrc
-nvm install 22
-nvm use 22
-nvm alias default 22 # this prevents your machine from reverting back to another version
+nvm install 24
+nvm use 24
+nvm alias default 24 # this prevents your machine from reverting back to another version
 ```
 ## Cloning the repository

Makefile (187 changes)

@@ -1,31 +1,45 @@
+ls-files = $(shell git ls-files --cached --others --exclude-standard $1)
+PROFILE = release
 PLATFORM_FILE := $(shell ./check-platform.sh)
 ENVIRONMENT_FILE := $(shell ./check-environment.sh)
 GIT_HASH_FILE := $(shell ./check-git-hash.sh)
 VERSION_FILE := $(shell ./check-version.sh)
-BASENAME := $(shell ./basename.sh)
+BASENAME := $(shell PROJECT=startos ./basename.sh)
 PLATFORM := $(shell if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)
 ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
+RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
+REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./basename.sh)
+TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./basename.sh)
 IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
 WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html web/dist/raw/install-wizard/index.html
 COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html web/dist/static/install-wizard/index.html
 FIRMWARE_ROMS := ./firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
-BUILD_SRC := $(shell git ls-files build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
-DEBIAN_SRC := $(shell git ls-files debian/)
-IMAGE_RECIPE_SRC := $(shell git ls-files image-recipe/)
+BUILD_SRC := $(call ls-files, build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
+IMAGE_RECIPE_SRC := $(call ls-files, image-recipe/)
 STARTD_SRC := core/startos/startd.service $(BUILD_SRC)
-COMPAT_SRC := $(shell git ls-files system-images/compat/)
-UTILS_SRC := $(shell git ls-files system-images/utils/)
-BINFMT_SRC := $(shell git ls-files system-images/binfmt/)
-CORE_SRC := $(shell git ls-files core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
-WEB_SHARED_SRC := $(shell git ls-files web/projects/shared) $(shell git ls-files web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
-WEB_UI_SRC := $(shell git ls-files web/projects/ui)
-WEB_SETUP_WIZARD_SRC := $(shell git ls-files web/projects/setup-wizard)
-WEB_INSTALL_WIZARD_SRC := $(shell git ls-files web/projects/install-wizard)
+CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
+WEB_SHARED_SRC := $(call ls-files, web/projects/shared) $(call ls-files, web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
+WEB_UI_SRC := $(call ls-files, web/projects/ui)
+WEB_SETUP_WIZARD_SRC := $(call ls-files, web/projects/setup-wizard)
+WEB_INSTALL_WIZARD_SRC := $(call ls-files, web/projects/install-wizard)
+WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel)
 PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
 GZIP_BIN := $(shell which pigz || which gzip)
 TAR_BIN := $(shell which gtar || which tar)
-COMPILED_TARGETS := core/target/$(ARCH)-unknown-linux-musl/release/startbox core/target/$(ARCH)-unknown-linux-musl/release/containerbox system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar container-runtime/rootfs.$(ARCH).squashfs
+COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox container-runtime/rootfs.$(ARCH).squashfs
-ALL_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console; fi') $(PLATFORM_FILE)
+STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs $(PLATFORM_FILE) \
+	$(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then \
+		echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; \
+	fi) \
+	$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then \
+		echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph; \
+	fi') \
+	$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]; then \
+		echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console; \
+	fi')
+REGISTRY_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox core/startos/start-registryd.service
+TUNNEL_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
 REBUILD_TYPES = 1
 ifeq ($(REMOTE),)
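The new RUST_ARCH variable in the hunk above exists because Rust names the 64-bit RISC-V musl target `riscv64gc-unknown-linux-musl`, not `riscv64-...`; every other architecture name passes through unchanged. The same derivation as a standalone shell sketch (the `rust_arch` helper name is illustrative):

```shell
# Mirror of the Makefile's ARCH -> RUST_ARCH mapping: only riscv64 is
# renamed, to match Rust's riscv64gc-unknown-linux-musl target triple.
rust_arch() {
  if [ "$1" = "riscv64" ]; then echo "riscv64gc"; else echo "$1"; fi
}

rust_arch riscv64    # riscv64gc
rust_arch aarch64    # aarch64
printf '%s-unknown-linux-musl\n' "$(rust_arch riscv64)"
```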
@@ -49,18 +63,16 @@ endif
 .DELETE_ON_ERROR:
-.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry
+.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry install-registry tunnel install-tunnel
-all: $(ALL_TARGETS)
+all: $(STARTOS_TARGETS)
 touch:
-	touch $(ALL_TARGETS)
+	touch $(STARTOS_TARGETS)
 metadata: $(VERSION_FILE) $(PLATFORM_FILE) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE)
 clean:
-	rm -f system-images/**/*.tar
-	rm -rf system-images/compat/target
 	rm -rf core/target
 	rm -rf core/startos/bindings
 	rm -rf web/.angular
@@ -95,44 +107,83 @@ test: | test-core test-sdk test-container-runtime
 test-core: $(CORE_SRC) $(ENVIRONMENT_FILE)
 	./core/run-tests.sh
-test-sdk: $(shell git ls-files sdk) sdk/base/lib/osBindings/index.ts
+test-sdk: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
 	cd sdk && make test
-test-container-runtime: container-runtime/node_modules/.package-lock.json $(shell git ls-files container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
+test-container-runtime: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
 	cd container-runtime && npm test
 cli:
-	cd core && ./install-cli.sh
+	./core/install-cli.sh
-registry:
-	cd core && ./build-registrybox.sh
+registry: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox
+install-registry: $(REGISTRY_TARGETS)
+	$(call mkdir,$(DESTDIR)/usr/bin)
+	$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox,$(DESTDIR)/usr/bin/start-registrybox)
+	$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registryd)
+	$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registry)
+	$(call mkdir,$(DESTDIR)/lib/systemd/system)
+	$(call cp,core/startos/start-registryd.service,$(DESTDIR)/lib/systemd/system/start-registryd.service)
+core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox: $(CORE_SRC) $(ENVIRONMENT_FILE)
+	ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-registrybox.sh
+tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
+install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
+	$(call mkdir,$(DESTDIR)/usr/bin)
+	$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox,$(DESTDIR)/usr/bin/start-tunnelbox)
+	$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunneld)
+	$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunnel)
+	$(call mkdir,$(DESTDIR)/lib/systemd/system)
+	$(call cp,core/startos/start-tunneld.service,$(DESTDIR)/lib/systemd/system/start-tunneld.service)
+	$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
+	$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
+core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
+	ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-tunnelbox.sh
 deb: results/$(BASENAME).deb
-debian/control: build/lib/depends build/lib/conflicts
-	./debuild/control.sh
-results/$(BASENAME).deb: dpkg-build.sh $(DEBIAN_SRC) $(ALL_TARGETS)
+results/$(BASENAME).deb: dpkg-build.sh $(call ls-files,debian/startos) $(STARTOS_TARGETS)
 	PLATFORM=$(PLATFORM) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
+registry-deb: results/$(REGISTRY_BASENAME).deb
+results/$(REGISTRY_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
+	PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
+tunnel-deb: results/$(TUNNEL_BASENAME).deb
+results/$(TUNNEL_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS)
+	PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables,conntrack ./build/os-compat/run-compat.sh ./dpkg-build.sh
 $(IMAGE_TYPE): results/$(BASENAME).$(IMAGE_TYPE)
 squashfs: results/$(BASENAME).squashfs
 results/$(BASENAME).$(IMAGE_TYPE) results/$(BASENAME).squashfs: $(IMAGE_RECIPE_SRC) results/$(BASENAME).deb
-	REQUIRES=debian ./build/os-compat/run-compat.sh ./image-recipe/run-local-build.sh "results/$(BASENAME).deb"
+	./image-recipe/run-local-build.sh "results/$(BASENAME).deb"
 # For creating os images. DO NOT USE
-install: $(ALL_TARGETS)
+install: $(STARTOS_TARGETS)
 	$(call mkdir,$(DESTDIR)/usr/bin)
 	$(call mkdir,$(DESTDIR)/usr/sbin)
-	$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,$(DESTDIR)/usr/bin/startbox)
+	$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,$(DESTDIR)/usr/bin/startbox)
 	$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
 	$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
+	$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-sdk)
 	if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-musl/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
-	if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then $(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); fi
-	$(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
+	if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then \
+		$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph,$(DESTDIR)/usr/bin/flamegraph); \
+	fi
+	if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]'; then \
+		$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); \
+	fi
+	$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
 	$(call ln,/usr/bin/startos-backup-fs,$(DESTDIR)/usr/sbin/mount.backup-fs)
 	$(call mkdir,$(DESTDIR)/lib/systemd/system)
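The install rules above place one binary (e.g. start-tunnelbox) and then symlink several command names to it, the busybox-style multi-call layout. A standalone demo of that layout with a stand-in script (the assumption that the real tunnelbox dispatches on the name it was invoked as is ours, inferred from the symlink structure):

```shell
# Demo of the multi-call binary layout created by install-tunnel:
# one executable, several symlinked names, behavior chosen from $0.
# (Dispatch-on-$0 in the real tunnelbox is an assumption for this sketch.)
dir=$(mktemp -d)
cat > "$dir/start-tunnelbox" <<'EOF'
#!/bin/sh
case "${0##*/}" in
  start-tunneld) echo "daemon mode" ;;
  start-tunnel)  echo "cli mode" ;;
  *)             echo "multi-call binary" ;;
esac
EOF
chmod +x "$dir/start-tunnelbox"
ln -s "$dir/start-tunnelbox" "$dir/start-tunneld"
ln -s "$dir/start-tunnelbox" "$dir/start-tunnel"
"$dir/start-tunneld"   # daemon mode
"$dir/start-tunnel"    # cli mode
```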
@@ -149,13 +200,9 @@ install: $(ALL_TARGETS)
 	$(call cp,GIT_HASH.txt,$(DESTDIR)/usr/lib/startos/GIT_HASH.txt)
 	$(call cp,VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
-	$(call mkdir,$(DESTDIR)/usr/lib/startos/system-images)
-	$(call cp,system-images/compat/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/compat.tar)
-	$(call cp,system-images/utils/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/utils.tar)
 	$(call cp,firmware/$(PLATFORM),$(DESTDIR)/usr/lib/startos/firmware)
-update-overlay: $(ALL_TARGETS)
+update-overlay: $(STARTOS_TARGETS)
 	@echo "\033[33m!!! THIS WILL ONLY REFLASH YOUR DEVICE IN MEMORY !!!\033[0m"
 	@echo "\033[33mALL CHANGES WILL BE REVERTED IF YOU RESTART THE DEVICE\033[0m"
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -164,10 +211,10 @@ update-overlay: $(ALL_TARGETS)
 	$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) PLATFORM=$(PLATFORM)
 	$(call ssh,"sudo systemctl start startd")
-wormhole: core/target/$(ARCH)-unknown-linux-musl/release/startbox
+wormhole: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
 	@echo "Paste the following command into the shell of your StartOS server:"
 	@echo
-	@wormhole send core/target/$(ARCH)-unknown-linux-musl/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
+	@wormhole send core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
 wormhole-deb: results/$(BASENAME).deb
 	@echo "Paste the following command into the shell of your StartOS server:"
@@ -179,18 +226,18 @@ wormhole-squashfs: results/$(BASENAME).squashfs
 	$(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}'))
 	@echo "Paste the following command into the shell of your StartOS server:"
 	@echo
-	@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img ./$(BASENAME).squashfs'"'"'\n", $$3 }'
+	@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade ./$(BASENAME).squashfs'"'"'\n", $$3 }'
-update: $(ALL_TARGETS)
+update: $(STARTOS_TARGETS)
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
 	$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
 	$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) DESTDIR=/media/startos/next PLATFORM=$(PLATFORM)
 	$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y $(shell cat ./build/lib/depends)"')
-update-startbox: core/target/$(ARCH)-unknown-linux-musl/release/startbox # only update binary (faster than full update)
+update-startbox: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox # only update binary (faster than full update)
 	@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
 	$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
-	$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,/media/startos/next/usr/bin/startbox)
+	$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,/media/startos/next/usr/bin/startbox)
 	$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync true')
 update-deb: results/$(BASENAME).deb # better than update, but only available from debian
@@ -207,9 +254,9 @@ update-squashfs: results/$(BASENAME).squashfs
$(call ssh,'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE)')
$(call ssh,'/usr/lib/startos/scripts/prune-boot')
$(call cp,results/$(BASENAME).squashfs,/media/startos/images/next.rootfs)
$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/upgrade /media/startos/images/next.rootfs')
emulate-reflash: $(STARTOS_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) DESTDIR=/media/startos/next PLATFORM=$(PLATFORM)
@@ -235,51 +282,42 @@ sdk/base/lib/osBindings/index.ts: $(shell if [ "$(REBUILD_TYPES)" -ne 0 ]; then
rsync -ac --delete core/startos/bindings/ sdk/base/lib/osBindings/
touch sdk/base/lib/osBindings/index.ts
core/startos/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
rm -rf core/startos/bindings
./core/build-ts.sh
ls core/startos/bindings/*.ts | sed 's/core\/startos\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/startos/bindings/index.ts
npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/startos/bindings/*.ts
touch core/startos/bindings/index.ts
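The `ls | sed | grep -v` pipeline in that recipe turns each generated bindings file into a TypeScript re-export line, filtering out `index.ts` itself. A minimal sketch with hypothetical file names:

```shell
# Each bindings file becomes `export { Name } from "./Name";`; the index
# file would re-export itself, so it is dropped by the grep.
exports=$(printf '%s\n' \
  'core/startos/bindings/ServerInfo.ts' \
  'core/startos/bindings/index.ts' \
  | sed 's/core\/startos\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' \
  | grep -v '"./index"')
echo "$exports"
```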
sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
(cd sdk && make bundle)
touch sdk/dist/package.json
touch sdk/baseDist/package.json
# TODO: make container-runtime its own makefile?
container-runtime/dist/index.js: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
npm --prefix container-runtime run build
container-runtime/dist/node_modules/.package-lock.json container-runtime/dist/package.json container-runtime/dist/package-lock.json: container-runtime/package.json container-runtime/package-lock.json sdk/dist/package.json container-runtime/install-dist-deps.sh
./container-runtime/install-dist-deps.sh
touch container-runtime/dist/node_modules/.package-lock.json
container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox
ARCH=$(ARCH) REQUIRES=linux ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
build/lib/depends build/lib/conflicts: $(ENVIRONMENT_FILE) $(PLATFORM_FILE) $(shell ls build/dpkg-deps/*)
PLATFORM=$(PLATFORM) ARCH=$(ARCH) build/dpkg-deps/generate.sh
$(FIRMWARE_ROMS): build/lib/firmware.json download-firmware.sh $(PLATFORM_FILE)
./download-firmware.sh $(PLATFORM)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-startbox.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-containerbox.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox
web/package-lock.json: web/package.json sdk/baseDist/package.json
npm --prefix web i
@@ -306,8 +344,12 @@ web/dist/raw/install-wizard/index.html: $(WEB_INSTALL_WIZARD_SRC) $(WEB_SHARED_S
npm --prefix web run build:install
touch web/dist/raw/install-wizard/index.html
web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
npm --prefix web run build:tunnel
touch web/dist/raw/start-tunnel/index.html
web/dist/static/%/index.html: web/dist/raw/%/index.html
./compress-uis.sh $*
web/config.json: $(GIT_HASH_FILE) web/config-sample.json
jq '.useMocks = false' web/config-sample.json | jq '.gitHash = "$(shell cat GIT_HASH.txt)"' > web/config.json
@@ -334,8 +376,11 @@ ui: web/dist/raw/ui
cargo-deps/aarch64-unknown-linux-musl/release/pi-beep:
ARCH=aarch64 ./build-cargo-dep.sh pi-beep
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console:
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh tokio-console
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs:
ARCH=$(ARCH) PREINSTALL="apk add fuse3 fuse3-dev fuse3-static musl-dev pkgconfig" ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph:
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh flamegraph

START-TUNNEL.md Normal file

@@ -0,0 +1,77 @@
# StartTunnel
A self-hosted WireGuard VPN optimized for creating VLANs and reverse tunneling to personal servers.
You can think of StartTunnel as a "virtual router in the cloud".
Use it for private remote access to self-hosted services running on a personal server, or to expose self-hosted services to the public Internet without revealing the host server's IP address.
## Features
- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique WireGuard config that must be copied, downloaded, or scanned into the device.
- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
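Such a per-device WireGuard config typically looks like the following. Every value shown (keys, addresses, endpoint, port) is a generic placeholder for illustration, not something StartTunnel actually generates:

```ini
[Interface]
PrivateKey = <device-private-key>
Address = 10.10.0.2/24

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 10.10.0.0/24
Endpoint = <vps-public-ip>:51820
PersistentKeepalive = 25
```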
## Installation
1. Rent a low-cost VPS. For most use cases, the cheapest option should be enough.
- It must have a dedicated public IP address.
- For compute (CPU), memory (RAM), and storage (disk), choose the minimum spec.
- For transfer (bandwidth), it depends on (1) your use case and (2) your home Internet's _upload_ speed. Even if you intend to serve large files or stream content from your server, there is no reason to pay for speeds that exceed your home Internet's upload speed.
1. Provision the VPS with the latest version of Debian.
1. Access the VPS via SSH.
1. Install StartTunnel:
```sh
TMP_DIR=$(mktemp -d) && (cd $TMP_DIR && wget https://github.com/Start9Labs/start-os/releases/download/v0.4.0-alpha.12/start-tunnel-0.4.0-alpha.12-unknown.dev_$(uname -m).deb && apt-get install -y ./start-tunnel-0.4.0-alpha.12-unknown.dev_$(uname -m).deb) && rm -rf $TMP_DIR && systemctl start start-tunneld && echo "Installation Succeeded"
```
5. [Initialize the web interface](#web-interface) (recommended)
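The install one-liner above follows a common shell pattern: do all the downloading and installing inside a throwaway directory, then remove it no matter where you started. A simplified sketch of just that scaffolding, with a placeholder file standing in for the real `wget` and `apt-get` steps:

```shell
# Work in a temp dir inside a subshell, so the caller's cwd is untouched,
# then clean the temp dir up afterwards.
TMP_DIR=$(mktemp -d) && (
  cd "$TMP_DIR" &&
  echo 'placeholder for the downloaded .deb' > pkg.deb &&  # stands in for wget + apt-get install
  test -f pkg.deb
) && rm -rf "$TMP_DIR" && echo "Installation Succeeded"
```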
## Updating
```sh
TMP_DIR=$(mktemp -d) && (cd $TMP_DIR && wget https://github.com/Start9Labs/start-os/releases/download/v0.4.0-alpha.12/start-tunnel-0.4.0-alpha.12-unknown.dev_$(uname -m).deb && apt-get install --reinstall -y ./start-tunnel-0.4.0-alpha.12-unknown.dev_$(uname -m).deb) && rm -rf $TMP_DIR && systemctl daemon-reload && systemctl restart start-tunneld && echo "Update Succeeded"
```
## CLI
By default, StartTunnel is managed via the `start-tunnel` command line interface, which is self-documented.
```
start-tunnel --help
```
## Web Interface
If you choose to enable the web interface (recommended in most cases), StartTunnel can be accessed as a website from a browser, or programmatically via its API.
1. Initialize the web interface.
   ```sh
   start-tunnel web init
   ```
1. When prompted, select the IP address at which to host the web interface. In many cases, there will be only one IP address.
1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid conflicts.
1. Select whether to autogenerate a self-signed certificate or provide your own certificate and key. If you choose to autogenerate, you will be asked to list all IP addresses and domains for which to sign the certificate. For example, if you intend to access your StartTunnel web UI at a domain, include the domain in the list.
1. You will receive a success message with three pieces of information:
- <https://IP:port>: the URL where you can reach your personal web interface.
- Password: an autogenerated password for your interface. If you lose/forget it, you can reset using the CLI.
    - Root Certificate Authority: the Root CA of your StartTunnel instance. If you have not already, trust it in your browser or system keychain.


@@ -1,5 +1,7 @@
#!/bin/bash
PROJECT=${PROJECT:-"startos"}
cd "$(dirname "${BASH_SOURCE[0]}")"
PLATFORM="$(if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)"
@@ -16,4 +18,4 @@ if [ -n "$STARTOS_ENV" ]; then
VERSION_FULL="$VERSION_FULL~${STARTOS_ENV}"
fi
echo -n "${PROJECT}-${VERSION_FULL}_${PLATFORM}"


@@ -7,9 +7,9 @@ bmon
btrfs-progs
ca-certificates
cifs-utils
conntrack
cryptsetup
curl
dmidecode
dnsutils
dosfstools
@@ -19,6 +19,7 @@ exfatprogs
flashrom
fuse3
grub-common
grub-efi
htop
httpdirfs
iotop
@@ -41,7 +42,6 @@ nvme-cli
nyx
openssh-server
podman
psmisc
qemu-guest-agent
rfkill


@@ -5,11 +5,15 @@ set -e
cd "$(dirname "${BASH_SOURCE[0]}")"
IFS="-" read -ra FEATURES <<< "$ENVIRONMENT"
FEATURES+=("${ARCH}")
if [ "$ARCH" != "$PLATFORM" ]; then
FEATURES+=("${PLATFORM}")
fi
feature_file_checker='
/^#/ { next }
/^\+ [a-z0-9.-]+$/ { next }
/^- [a-z0-9.-]+$/ { next }
{ exit 1 }
'
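This awk program validates a feature file: comments and `+ package` / `- package` lines (lowercase letters, digits, dots, dashes) are accepted; any other line makes awk exit non-zero. Exercised on hypothetical inputs:

```shell
checker='
/^#/ { next }
/^\+ [a-z0-9.-]+$/ { next }
/^- [a-z0-9.-]+$/ { next }
{ exit 1 }
'

# A well-formed feature file passes...
printf '%s\n' '# raspberry pi packages' '+ rpi-eeprom' '- grub-efi' | awk "$checker" && ok=yes

# ...while a malformed line (uppercase) fails.
printf '%s\n' '+ NotAPackage' | awk "$checker" || bad=yes
echo "$ok $bad"
```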
@@ -30,8 +34,8 @@ for type in conflicts depends; do
for feature in ${FEATURES[@]}; do
file="$feature.$type"
if [ -f $file ]; then
if grep "^- $pkg$" $file > /dev/null; then
SKIP=yes
fi
fi
done


@@ -0,0 +1,11 @@
- grub-common
- grub-efi
+ parted
+ raspberrypi-net-mods
+ raspberrypi-sys-mods
+ raspi-config
+ raspi-firmware
+ raspi-utils
+ rpi-eeprom
+ rpi-update
+ rpi.gpio-common


@@ -1,2 +1,3 @@
+ gdb
+ heaptrack
+ linux-perf


@@ -0,0 +1 @@
+ grub-pc-bin


@@ -1,34 +1,123 @@
#!/bin/sh
parse_essential_db_info() {
DB_DUMP="/tmp/startos_db.json"
if command -v start-cli >/dev/null 2>&1; then
start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
else
return 1
fi
if command -v jq >/dev/null 2>&1 && [ -f "$DB_DUMP" ]; then
HOSTNAME=$(jq -r '.value.serverInfo.hostname // "unknown"' "$DB_DUMP" 2>/dev/null)
VERSION=$(jq -r '.value.serverInfo.version // "unknown"' "$DB_DUMP" 2>/dev/null)
RAM_BYTES=$(jq -r '.value.serverInfo.ram // 0' "$DB_DUMP" 2>/dev/null)
WAN_IP=$(jq -r '.value.serverInfo.network.gateways[].ipInfo.wanIp // "unknown"' "$DB_DUMP" 2>/dev/null | head -1)
NTP_SYNCED=$(jq -r '.value.serverInfo.ntpSynced // false' "$DB_DUMP" 2>/dev/null)
if [ "$RAM_BYTES" != "0" ] && [ "$RAM_BYTES" != "null" ]; then
RAM_GB=$(echo "scale=1; $RAM_BYTES / 1073741824" | bc 2>/dev/null || echo "unknown")
else
RAM_GB="unknown"
fi
RUNNING_SERVICES=$(jq -r '[.value.packageData[] | select(.status.main == "running")] | length' "$DB_DUMP" 2>/dev/null)
TOTAL_SERVICES=$(jq -r '.value.packageData | length' "$DB_DUMP" 2>/dev/null)
rm -f "$DB_DUMP"
return 0
else
rm -f "$DB_DUMP" 2>/dev/null
return 1
fi
}
DB_INFO_AVAILABLE=0
if parse_essential_db_info; then
DB_INFO_AVAILABLE=1
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$VERSION" != "unknown" ]; then
version_display="v$VERSION"
else
version_display="v$(cat /usr/lib/startos/VERSION.txt 2>/dev/null || echo 'unknown')"
fi
printf "\n\033[1;37m ▄▄▀▀▀▀▀▄▄\033[0m\n"
printf "\033[1;37m ▄▀ ▄ ▀▄ ▄▄▄▄▄ ▄▄▄▄▄▄▄ ▄ ▄▄▄▄▄ ▄▄▄▄▄▄▄ \033[1;31m▄██████▄ ▄██████\033[0m\n"
printf "\033[1;37m █ █ █ █ █ █ █ █ █ ▀▄ █ \033[1;31m██ ██ ██ \033[0m\n"
printf "\033[1;37m█ █ █ █ ▀▄▄▄▄ █ █ █ █ ▄▄▄▀ █ \033[1;31m██ ██ ▀█████▄\033[0m\n"
printf "\033[1;37m█ █ █ █ █ █ █ █ █ ▀▄ █ \033[1;31m██ ██ ██\033[0m\n"
printf "\033[1;37m █ █ █ █ ▄▄▄▄▄▀ █ █ █ █ ▀▄ █ \033[1;31m▀██████▀ ██████▀\033[0m\n"
printf "\033[1;37m █ █\033[0m\n"
printf "\033[1;37m ▀▀▄▄▄▀▀ $version_display\033[0m\n\n"
uptime_str=$(uptime | awk -F'up ' '{print $2}' | awk -F',' '{print $1}' | sed 's/^ *//')
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$RAM_GB" != "unknown" ]; then
memory_used=$(free -m | awk 'NR==2{printf "%.0fMB", $3}')
memory_display="$memory_used / ${RAM_GB}GB"
else
memory_display=$(free -m | awk 'NR==2{printf "%.0fMB / %.0fMB", $3, $2}')
fi
root_usage=$(df -h / | awk 'NR==2{printf "%s (%s free)", $5, $4}')
if [ -d "/media/startos/data/package-data" ]; then
data_usage=$(df -h /media/startos/data/package-data | awk 'NR==2{printf "%s (%s free)", $5, $4}')
else
data_usage="N/A"
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ]; then
services_text="$RUNNING_SERVICES/$TOTAL_SERVICES running"
else
services_text="Unknown"
fi
local_ip=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="src") print $(i+1)}' | head -1)
if [ -z "$local_ip" ]; then local_ip="N/A"; fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$WAN_IP" != "unknown" ]; then
wan_ip="$WAN_IP"
else
wan_ip="N/A"
fi
printf " \033[1;37m┌─ SYSTEM STATUS ───────────────────────────────────────────────────┐\033[0m\n"
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Uptime:" "$uptime_str" "Memory:" "$memory_display"
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Root:" "$root_usage" "Data:" "$data_usage"
if [ "$DB_INFO_AVAILABLE" -eq 1 ]; then
if [ "$RUNNING_SERVICES" -eq "$TOTAL_SERVICES" ] && [ "$TOTAL_SERVICES" -gt 0 ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;32m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
elif [ "$RUNNING_SERVICES" -gt 0 ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
else
printf " \033[1;37m│\033[0m %-8s \033[0;31m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
fi
else
printf " \033[1;37m│\033[0m %-8s \033[0;37m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$NTP_SYNCED" = "true" ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;32m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Synced"
elif [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$NTP_SYNCED" = "false" ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;31m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Not Synced"
else
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;37m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Unknown"
fi
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m"
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$HOSTNAME" != "unknown" ]; then
web_url="https://$HOSTNAME.local"
else
web_url="https://$(hostname).local"
fi
printf "\n \033[1;37m┌──────────────────────────────────────────────────── QUICK ACCESS ─┐\033[0m\n"
printf " \033[1;37m│\033[0m Web Interface: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "$web_url"
printf " \033[1;37m│\033[0m Documentation: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://staging.docs.start9.com"
printf " \033[1;37m│\033[0m Support: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://start9.com/contact"
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m\n\n"


@@ -10,24 +10,24 @@ fi
POSITIONAL_ARGS=()
while [[ $# -gt 0 ]]; do
case $1 in
--no-sync)
NO_SYNC=1
shift
;;
--create)
ONLY_CREATE=1
shift
;;
-*|--*)
echo "Unknown option $1"
exit 1
;;
*)
POSITIONAL_ARGS+=("$1") # save positional arg
shift # past argument
;;
esac
done
set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
@@ -35,7 +35,7 @@ set -- "${POSITIONAL_ARGS[@]}" # restore positional parameters
if [ -z "$NO_SYNC" ]; then
echo 'Syncing...'
umount -R /media/startos/next 2> /dev/null
umount /media/startos/upper 2> /dev/null
rm -rf /media/startos/upper /media/startos/next
mkdir /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
@@ -43,8 +43,6 @@ if [ -z "$NO_SYNC" ]; then
mount -t overlay \
-olowerdir=/media/startos/current,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
fi
if [ -n "$ONLY_CREATE" ]; then
@@ -56,12 +54,18 @@ mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mkdir -p /media/startos/next/media/startos/root
mount --bind /run /media/startos/next/run
mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
if mountpoint /sys/firmware/efi/efivars 2> /dev/null; then
mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
if [ -z "$*" ]; then
chroot /media/startos/next
@@ -71,6 +75,10 @@ else
CHROOT_RES=$?
fi
if mountpoint /media/startos/next/sys/firmware/efi/efivars 2> /dev/null; then
umount /media/startos/next/sys/firmware/efi/efivars
fi
umount /media/startos/next/run
umount /media/startos/next/tmp
umount /media/startos/next/dev
@@ -87,11 +95,12 @@ if [ "$CHROOT_RES" -eq 0 ]; then
echo 'Upgrading...'
rm -f /media/startos/images/next.squashfs
if ! time mksquashfs /media/startos/next /media/startos/images/next.squashfs -b 4096 -comp gzip; then
umount -l /media/startos/next
umount -l /media/startos/upper
rm -rf /media/startos/upper /media/startos/next
exit 1
fi
hash=$(b3sum /media/startos/images/next.squashfs | head -c 32)
mv /media/startos/images/next.squashfs /media/startos/images/${hash}.rootfs
@@ -103,5 +112,5 @@ if [ "$CHROOT_RES" -eq 0 ]; then
fi
umount -R /media/startos/next
umount /media/startos/upper
rm -rf /media/startos/upper /media/startos/next


@@ -64,9 +64,11 @@ user_pref("messaging-system.rsexperimentloader.enabled", false);
user_pref("network.allow-experiments", false);
user_pref("network.captive-portal-service.enabled", false);
user_pref("network.connectivity-service.enabled", false);
user_pref("network.proxy.socks", "10.0.3.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_version", 5);
user_pref("network.proxy.socks_remote_dns", true);
user_pref("network.proxy.type", 1);
user_pref("privacy.resistFingerprinting", true);
//Enable letterboxing if we want the window size sent to the server to snap to common resolutions:
//user_pref("privacy.resistFingerprinting.letterboxing", true);


@@ -1,26 +1,29 @@
#!/bin/bash
if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$sport" ] || [ -z "$dport" ]; then
>&2 echo 'missing required env var'
exit 1
fi
rule_exists() {
iptables -t nat -C "$@" 2>/dev/null
}
apply_rule() {
if [ "$UNDO" = "1" ]; then
if rule_exists "$@"; then
iptables -t nat -D "$@"
fi
else
if ! rule_exists "$@"; then
iptables -t nat -A "$@"
fi
fi
}
apply_rule PREROUTING -p tcp -d $sip --dport $sport -j DNAT --to-destination $dip:$dport
apply_rule OUTPUT -p tcp -d $sip --dport $sport -j DNAT --to-destination $dip:$dport
if [ "$UNDO" = 1 ]; then
conntrack -D -p tcp -d $sip --dport $sport
fi
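The `apply_rule` helper in this rewrite makes the script idempotent: `-C` checks whether a NAT rule already exists before `-A` appends or `-D` deletes, so re-running it never stacks duplicates. That property can be demonstrated with a stub `iptables` that keeps its rules in a plain file; the stub exists only for illustration:

```shell
RULES_FILE=$(mktemp)

# Stub mimicking only what the helper relies on:
# -C succeeds iff the rule is present, -A appends, -D deletes.
iptables() {
  shift 2                 # drop "-t nat"
  op=$1; shift
  rule="$*"
  case $op in
    -C) grep -qxF "$rule" "$RULES_FILE" ;;
    -A) echo "$rule" >> "$RULES_FILE" ;;
    -D) grep -vxF "$rule" "$RULES_FILE" > "$RULES_FILE.tmp"; mv "$RULES_FILE.tmp" "$RULES_FILE" ;;
  esac
}

rule_exists() { iptables -t nat -C "$@" 2>/dev/null; }
apply_rule() {
  if [ "$UNDO" = "1" ]; then
    if rule_exists "$@"; then iptables -t nat -D "$@"; fi
  else
    if ! rule_exists "$@"; then iptables -t nat -A "$@"; fi
  fi
}

apply_rule PREROUTING -p tcp --dport 80 -j DNAT   # first run adds the rule
apply_rule PREROUTING -p tcp --dport 80 -j DNAT   # second run is a no-op
count=$(grep -c . "$RULES_FILE")
UNDO=1
apply_rule PREROUTING -p tcp --dport 80 -j DNAT   # UNDO removes it
left=$(grep -c . "$RULES_FILE")
echo "$count $left"
rm -f "$RULES_FILE"
```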


@@ -83,6 +83,7 @@ local_mount_root()
if [ -d "$image" ]; then
mount -r --bind $image /lower
elif [ -f "$image" ]; then
modprobe loop
modprobe squashfs
mount -r $image /lower
else

build/lib/scripts/upgrade Executable file

@@ -0,0 +1,82 @@
#!/bin/bash
set -e
SOURCE_DIR="$(dirname $(realpath "${BASH_SOURCE[0]}"))"
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if ! [ -f "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
echo 'Upgrading...'
hash=$(b3sum $1 | head -c 32)
if [ -n "$2" ] && [ "$hash" != "$CHECKSUM" ]; then
>&2 echo 'Checksum mismatch'
exit 2
fi
unsquashfs -f -d / $1 boot
umount -R /media/startos/next 2> /dev/null || true
umount /media/startos/upper 2> /dev/null || true
umount /media/startos/lower 2> /dev/null || true
mkdir -p /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount $1 /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/run
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mkdir -p /media/startos/next/media/startos/root
mount --bind /run /media/startos/next/run
mount --bind /tmp /media/startos/next/tmp
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
mount --bind /media/startos/root /media/startos/next/media/startos/root
if mountpoint -q /boot/efi 2> /dev/null; then
mkdir -p /media/startos/next/boot/efi
mount --bind /boot/efi /media/startos/next/boot/efi
fi
if mountpoint -q /sys/firmware/efi/efivars 2> /dev/null; then
mount --bind /sys/firmware/efi/efivars /media/startos/next/sys/firmware/efi/efivars
fi
chroot /media/startos/next bash -e << "EOF"
if dpkg -s grub-common > /dev/null 2>&1; then
grub-install /dev/$(eval $(lsblk -o MOUNTPOINT,PKNAME -P | grep 'MOUNTPOINT="/media/startos/root"') && echo $PKNAME)
update-grub
fi
EOF
sync
umount -R /media/startos/next
umount /media/startos/upper
umount /media/startos/lower
mv $1 /media/startos/images/${hash}.rootfs
ln -rsf /media/startos/images/${hash}.rootfs /media/startos/config/current.rootfs
sync
echo 'System upgrade complete. Reboot to apply changes...'
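The script gates the upgrade on a BLAKE3 checksum when one is supplied, and skips verification otherwise. A minimal sketch of that gate in isolation (the `verify_checksum` function name is hypothetical; the real script compares `b3sum` output against `$CHECKSUM`):

```shell
#!/bin/bash
# Hypothetical sketch of the checksum gate used above: pass the computed
# hash and the expected value; an empty expected value skips verification,
# matching the script's optional-checksum behavior.
verify_checksum() {
  local actual="$1" expected="$2"
  [ -z "$expected" ] && return 0   # no checksum provided: accept
  [ "$actual" = "$expected" ]      # otherwise require an exact match
}
verify_checksum deadbeef deadbeef && echo 'checksum ok'
```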


@@ -1,61 +0,0 @@
#!/bin/bash
set -e
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if [ -z "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
VERSION=$(unsquashfs -cat $1 /usr/lib/startos/VERSION.txt)
GIT_HASH=$(unsquashfs -cat $1 /usr/lib/startos/GIT_HASH.txt)
B3SUM=$(b3sum $1 | head -c 32)
if [ -n "$CHECKSUM" ] && [ "$CHECKSUM" != "$B3SUM" ]; then
>&2 echo "CHECKSUM MISMATCH"
exit 2
fi
mv $1 /media/startos/images/${B3SUM}.rootfs
ln -rsf /media/startos/images/${B3SUM}.rootfs /media/startos/config/current.rootfs
unsquashfs -n -f -d / /media/startos/images/${B3SUM}.rootfs boot
umount -R /media/startos/next 2> /dev/null || true
umount -R /media/startos/lower 2> /dev/null || true
umount -R /media/startos/upper 2> /dev/null || true
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
mkdir /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount /media/startos/images/${B3SUM}.rootfs /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/media/startos/root
mount --bind /media/startos/root /media/startos/next/media/startos/root
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
chroot /media/startos/next update-grub2
umount -R /media/startos/next
umount -R /media/startos/upper
umount -R /media/startos/lower
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
sync
reboot


@@ -18,7 +18,7 @@ if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" !=
 docker run -d --rm --name os-compat --privileged --security-opt apparmor=unconfined -v "${project_pwd}:/root/start-os" -v /lib/modules:/lib/modules:ro start9/build-env
 while ! docker exec os-compat systemctl is-active --quiet multi-user.target 2> /dev/null; do sleep .5; done
-docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH $USE_TTY -w "/root/start-os${rel_pwd}" os-compat $@
+docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH -ePROJECT -eDEPENDS -eCONFLICTS $USE_TTY -w "/root/start-os${rel_pwd}" os-compat $@
 code=$?
 docker stop os-compat
 exit $code


@@ -4,13 +4,17 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
 set -e
-rm -rf web/dist/static
+STATIC_DIR=web/dist/static/$1
+RAW_DIR=web/dist/raw/$1
+mkdir -p $STATIC_DIR
+rm -rf $STATIC_DIR
 if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
-find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
-find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
-for file in $(find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br'); do
+find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
+find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
+for file in $(find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br'); do
 raw_size=$(du $file | awk '{print $1 * 512}')
 gz_size=$(du $file.gz | awk '{print $1 * 512}')
 br_size=$(du $file.br | awk '{print $1 * 512}')
@@ -23,4 +27,5 @@ if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
 done
 fi
-cp -r web/dist/raw web/dist/static
+cp -r $RAW_DIR $STATIC_DIR


@@ -3,4 +3,4 @@ Description=StartOS Container Runtime Failure Handler
 [Service]
 Type=oneshot
-ExecStart=/usr/bin/start-cli rebuild
+ExecStart=/usr/bin/start-container rebuild


@@ -6,13 +6,9 @@ mkdir -p /run/systemd/resolve
 echo "nameserver 8.8.8.8" > /run/systemd/resolve/stub-resolv.conf
 apt-get update
-apt-get install -y curl rsync qemu-user-static
-curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
-source ~/.bashrc
-nvm install 22
-ln -s $(which node) /usr/bin/node
-sed -i '/\(^\|#\)DNSStubListener=/c\DNSStubListener=no' /etc/systemd/resolved.conf
+apt-get install -y curl rsync qemu-user-static nodejs
 sed -i '/\(^\|#\)Storage=/c\Storage=persistent' /etc/systemd/journald.conf
 sed -i '/\(^\|#\)Compress=/c\Compress=yes' /etc/systemd/journald.conf
 sed -i '/\(^\|#\)SystemMaxUse=/c\SystemMaxUse=1G' /etc/systemd/journald.conf
@@ -20,4 +16,7 @@ sed -i '/\(^\|#\)ForwardToSyslog=/c\ForwardToSyslog=no' /etc/systemd/journald.co
 systemctl enable container-runtime.service
 rm -rf /run/systemd
+rm -f /etc/resolv.conf
+echo "nameserver 10.0.3.1" > /etc/resolv.conf


@@ -3,7 +3,7 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
 set -e
 DISTRO=debian
-VERSION=bookworm
+VERSION=trixie
 ARCH=${ARCH:-$(uname -m)}
 FLAVOR=default


@@ -38,7 +38,7 @@
 },
 "../sdk/dist": {
 "name": "@start9labs/start-sdk",
-"version": "0.4.0-beta.36",
+"version": "0.4.0-beta.43",
 "license": "MIT",
 "dependencies": {
 "@iarna/toml": "^3.0.0",
@@ -110,6 +110,7 @@
 "integrity": "sha512-l+lkXCHS6tQEc5oUpK28xBOZ6+HwaH7YwoYQbLFiYb4nS2/l1tKnZEtEWkD0GuiYdvArf9qBS0XlQGXzPMsNqQ==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "@ampproject/remapping": "^2.2.0",
 "@babel/code-frame": "^7.26.2",
@@ -1200,6 +1201,7 @@
 "dev": true,
 "hasInstallScript": true,
 "license": "Apache-2.0",
+"peer": true,
 "dependencies": {
 "@swc/counter": "^0.1.3",
 "@swc/types": "^0.1.17"
@@ -2143,6 +2145,7 @@
 }
 ],
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "caniuse-lite": "^1.0.30001688",
 "electron-to-chromium": "^1.5.73",
@@ -3990,6 +3993,7 @@
 "integrity": "sha512-NIy3oAFp9shda19hy4HK0HRTWKtPJmGdnvywu01nOqNC2vZg+Z+fvJDxpMQA88eb2I9EcafcdjYgsDthnYTvGw==",
 "dev": true,
 "license": "MIT",
+"peer": true,
 "dependencies": {
 "@jest/core": "^29.7.0",
 "@jest/types": "^29.6.3",
@@ -6556,6 +6560,7 @@
 "integrity": "sha512-84MVSjMEHP+FQRPy3pX9sTVV/INIex71s9TL2Gm5FG/WG1SqXeKyZ0k7/blY/4FdOzI12CBy1vGc4og/eus0fw==",
 "dev": true,
 "license": "Apache-2.0",
+"peer": true,
 "bin": {
 "tsc": "bin/tsc",
 "tsserver": "bin/tsserver"


@@ -35,13 +35,13 @@ const SOCKET_PATH = "/media/startos/rpc/host.sock"
 let hostSystemId = 0
 export type EffectContext = {
-procedureId: string | null
+eventId: string | null
 callbacks?: CallbackHolder
 constRetry?: () => void
 }
 const rpcRoundFor =
-(procedureId: string | null) =>
+(eventId: string | null) =>
 <K extends T.EffectMethod | "clearCallbacks">(
 method: K,
 params: Record<string, unknown>,
@@ -52,7 +52,7 @@ const rpcRoundFor =
 JSON.stringify({
 id,
 method,
-params: { ...params, procedureId: procedureId || undefined },
+params: { ...params, eventId: eventId ?? undefined },
 }) + "\n",
 )
 })
@@ -103,8 +103,9 @@ const rpcRoundFor =
 }
 export function makeEffects(context: EffectContext): Effects {
-const rpcRound = rpcRoundFor(context.procedureId)
+const rpcRound = rpcRoundFor(context.eventId)
 const self: Effects = {
+eventId: context.eventId,
 child: (name) =>
 makeEffects({ ...context, callbacks: context.callbacks?.child(name) }),
 constRetry: context.constRetry,


@@ -242,11 +242,11 @@ export class RpcListener {
 .when(runType, async ({ id, params }) => {
 const system = this.system
 const procedure = jsonPath.unsafeCast(params.procedure)
-const { input, timeout, id: procedureId } = params
+const { input, timeout, id: eventId } = params
 const result = this.getResult(
 procedure,
 system,
-procedureId,
+eventId,
 timeout,
 input,
 )
@@ -256,11 +256,11 @@
 .when(sandboxRunType, async ({ id, params }) => {
 const system = this.system
 const procedure = jsonPath.unsafeCast(params.procedure)
-const { input, timeout, id: procedureId } = params
+const { input, timeout, id: eventId } = params
 const result = this.getResult(
 procedure,
 system,
-procedureId,
+eventId,
 timeout,
 input,
 )
@@ -275,7 +275,7 @@
 const callbacks =
 this.callbacks?.getChild("main") || this.callbacks?.child("main")
 const effects = makeEffects({
-procedureId: null,
+eventId: null,
 callbacks,
 })
 return handleRpc(
@@ -304,7 +304,7 @@
 }
 await this._system.exit(
 makeEffects({
-procedureId: params.id,
+eventId: params.id,
 }),
 target,
 )
@@ -320,14 +320,14 @@
 const system = await this.getDependencies.system()
 this.callbacks = new CallbackHolder(
 makeEffects({
-procedureId: params.id,
+eventId: params.id,
 }),
 )
 const callbacks = this.callbacks.child("init")
 console.error("Initializing...")
 await system.init(
 makeEffects({
-procedureId: params.id,
+eventId: params.id,
 callbacks,
 }),
 params.kind,
@@ -399,7 +399,7 @@
 private getResult(
 procedure: typeof jsonPath._TYPE,
 system: System,
-procedureId: string,
+eventId: string,
 timeout: number | null | undefined,
 input: any,
 ) {
@@ -410,7 +410,7 @@
 }
 const callbacks = this.callbacks?.child(procedure)
 const effects = makeEffects({
-procedureId,
+eventId,
 callbacks,
 })


@@ -509,13 +509,18 @@ export class SystemForEmbassy implements System {
 ): Promise<T.ActionInput | null> {
 if (actionId === "config") {
 const config = await this.getConfig(effects, timeoutMs)
-return { spec: config.spec, value: config.config }
+return {
+eventId: effects.eventId!,
+spec: config.spec,
+value: config.config,
+}
 } else if (actionId === "properties") {
 return null
 } else {
 const oldSpec = this.manifest.actions?.[actionId]?.["input-spec"]
 if (!oldSpec) return null
 return {
+eventId: effects.eventId!,
 spec: transformConfigSpec(oldSpec as OldConfigSpec),
 value: null,
 }
@@ -1233,14 +1238,14 @@ async function updateConfig(
 const url: string =
 filled === null || filled.addressInfo === null
 ? ""
-: catchFn(() =>
-utils.hostnameInfoToAddress(
-specValue.target === "lan-address"
-? filled.addressInfo!.localHostnames[0] ||
-filled.addressInfo!.onionHostnames[0]
-: filled.addressInfo!.onionHostnames[0] ||
-filled.addressInfo!.localHostnames[0],
-),
-) || ""
+: catchFn(
+() =>
+(specValue.target === "lan-address"
+? filled.addressInfo!.localHostnames[0] ||
+filled.addressInfo!.onionHostnames[0]
+: filled.addressInfo!.onionHostnames[0] ||
+filled.addressInfo!.localHostnames[0]
+).hostname.value,
+) || ""
 mutConfigValue[key] = url
 }


@@ -4,7 +4,12 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
 set -e
-if mountpoint -q tmp/combined; then sudo umount -R tmp/combined; fi
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
+fi
+if mountpoint -q tmp/combined; then sudo umount -l tmp/combined; fi
 if mountpoint -q tmp/lower; then sudo umount tmp/lower; fi
 sudo rm -rf tmp
 mkdir -p tmp/lower tmp/upper tmp/work tmp/combined
@@ -39,8 +44,10 @@ sudo cp container-runtime.service tmp/combined/lib/systemd/system/container-runt
 sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime.service
 sudo cp container-runtime-failure.service tmp/combined/lib/systemd/system/container-runtime-failure.service
 sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime-failure.service
-sudo cp ../core/target/$ARCH-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-cli
-sudo chown 0:0 tmp/combined/usr/bin/start-cli
+sudo cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-container
+echo -e '#!/bin/bash\nexec start-container "$@"' | sudo tee tmp/combined/usr/bin/start-cli # TODO: remove
+sudo chmod +x tmp/combined/usr/bin/start-cli
+sudo chown 0:0 tmp/combined/usr/bin/start-container
 echo container-runtime | sha256sum | head -c 32 | cat - <(echo) | sudo tee tmp/combined/etc/machine-id
 cat deb-install.sh | sudo systemd-nspawn --console=pipe -D tmp/combined $QEMU /bin/bash
 sudo truncate -s 0 tmp/combined/etc/machine-id

core/Cargo.lock generated (6107 lines)

File diff suppressed because it is too large

core/Cross.toml Normal file (2 lines)

@@ -0,0 +1,2 @@
[build]
pre-build = ["apt-get update && apt-get install -y rsync"]


@@ -2,53 +2,70 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
-if [ -z "$ARCH" ]; then
-ARCH=$(uname -m)
+PROFILE=${PROFILE:-release}
+if [ "${PROFILE}" = "release" ]; then
+BUILD_FLAGS="--release"
+else
+if [ "$PROFILE" != "debug" ]; then
+>&2 echo "Unknown profile $PROFILE: falling back to debug..."
+PROFILE=debug
+fi
 fi
+if [ -z "${ARCH:-}" ]; then
+ARCH=$(uname -m)
+fi
 if [ "$ARCH" = "arm64" ]; then
 ARCH="aarch64"
 fi
-if [ -z "$KERNEL_NAME" ]; then
-KERNEL_NAME=$(uname -s)
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
 fi
-if [ -z "$TARGET" ]; then
-if [ "$KERNEL_NAME" = "Linux" ]; then
-TARGET="$ARCH-unknown-linux-musl"
-elif [ "$KERNEL_NAME" = "Darwin" ]; then
-TARGET="$ARCH-apple-darwin"
-else
->&2 echo "unknown kernel $KERNEL_NAME"
-exit 1
-fi
+if [ -z "${KERNEL_NAME:-}" ]; then
+KERNEL_NAME=$(uname -s)
 fi
-USE_TTY=
-if tty -s; then
-USE_TTY="-it"
+if [ -z "${TARGET:-}" ]; then
+if [ "$KERNEL_NAME" = "Linux" ]; then
+TARGET="$RUST_ARCH-unknown-linux-musl"
+elif [ "$KERNEL_NAME" = "Darwin" ]; then
+TARGET="$RUST_ARCH-apple-darwin"
+else
+>&2 echo "unknown kernel $KERNEL_NAME"
+exit 1
+fi
 fi
 cd ..
-FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
-RUSTFLAGS=""
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
-RUSTFLAGS="--cfg tokio_unstable"
+# Ensure GIT_HASH.txt exists if not created by higher-level build steps
+if [ ! -f GIT_HASH.txt ] && command -v git >/dev/null 2>&1; then
+git rev-parse HEAD > GIT_HASH.txt || true
 fi
-if which zig > /dev/null && [ "$ENFORCE_USE_DOCKER" != 1 ]; then
-echo "FEATURES=\"$FEATURES\""
-echo "RUSTFLAGS=\"$RUSTFLAGS\""
-RUSTFLAGS=$RUSTFLAGS sh -c "cd core && cargo zigbuild --release --no-default-features --features cli,$FEATURES --locked --bin start-cli --target=$TARGET"
-else
-alias 'rust-zig-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/cargo-zigbuild'
-RUSTFLAGS=$RUSTFLAGS rust-zig-builder sh -c "cd core && cargo zigbuild --release --no-default-features --features cli,$FEATURES --locked --bin start-cli --target=$TARGET"
-if [ "$(ls -nd core/target/$TARGET/release/start-cli | awk '{ print $3 }')" != "$UID" ]; then
-rust-zig-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
-fi
+FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')"
+FEATURE_ARGS="cli"
+if [ -n "$FEATURES" ]; then
+FEATURE_ARGS="$FEATURE_ARGS,$FEATURES"
+fi
+RUSTFLAGS=""
+if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
+RUSTFLAGS="--cfg tokio_unstable"
+fi
+echo "FEATURES=\"$FEATURES\""
+echo "RUSTFLAGS=\"$RUSTFLAGS\""
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features $FEATURE_ARGS --locked --bin start-cli --target=$TARGET
+if [ "$(ls -nd "core/target/$TARGET/$PROFILE/start-cli" | awk '{ print $3 }')" != "$UID" ]; then
+rust-zig-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
 fi


@@ -2,9 +2,21 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
+PROFILE=${PROFILE:-release}
+if [ "${PROFILE}" = "release" ]; then
+BUILD_FLAGS="--release"
+else
+if [ "$PROFILE" != "debug" ]; then
+>&2 echo "Unknown profile $PROFILE: falling back to debug..."
+PROFILE=debug
+fi
+fi
 if [ -z "$ARCH" ]; then
 ARCH=$(uname -m)
 fi
@@ -13,24 +25,22 @@ if [ "$ARCH" = "arm64" ]; then
 ARCH="aarch64"
 fi
-USE_TTY=
-if tty -s; then
-USE_TTY="-it"
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
 fi
 cd ..
 FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
 RUSTFLAGS=""
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
 RUSTFLAGS="--cfg tokio_unstable"
 fi
-alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features container-runtime,$FEATURES --locked --bin containerbox --target=$ARCH-unknown-linux-musl"
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-container,$FEATURES --locked --bin containerbox --target=$RUST_ARCH-unknown-linux-musl
-if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/containerbox | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/containerbox" | awk '{ print $3 }')" != "$UID" ]; then
-rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
+rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /root/.cargo"
 fi


@@ -2,9 +2,21 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
+PROFILE=${PROFILE:-release}
+if [ "${PROFILE}" = "release" ]; then
+BUILD_FLAGS="--release"
+else
+if [ "$PROFILE" != "debug" ]; then
+>&2 echo "Unknown profile $PROFILE: falling back to debug..."
+PROFILE=debug
+fi
+fi
 if [ -z "$ARCH" ]; then
 ARCH=$(uname -m)
 fi
@@ -13,24 +25,22 @@ if [ "$ARCH" = "arm64" ]; then
 ARCH="aarch64"
 fi
-USE_TTY=
-if tty -s; then
-USE_TTY="-it"
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
 fi
 cd ..
 FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
 RUSTFLAGS=""
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
 RUSTFLAGS="--cfg tokio_unstable"
 fi
-alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features cli,registry,$FEATURES --locked --bin registrybox --target=$ARCH-unknown-linux-musl"
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-registry,registry,$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl
-if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/registrybox | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/registrybox" | awk '{ print $3 }')" != "$UID" ]; then
-rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
+rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /root/.cargo"
 fi


@@ -2,9 +2,21 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
+PROFILE=${PROFILE:-release}
+if [ "${PROFILE}" = "release" ]; then
+BUILD_FLAGS="--release"
+else
+if [ "$PROFILE" != "debug" ]; then
+>&2 echo "Unknown profile $PROFILE: falling back to debug..."
+PROFILE=debug
+fi
+fi
 if [ -z "$ARCH" ]; then
 ARCH=$(uname -m)
 fi
@@ -13,24 +25,22 @@ if [ "$ARCH" = "arm64" ]; then
 ARCH="aarch64"
 fi
-USE_TTY=
-if tty -s; then
-USE_TTY="-it"
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
 fi
 cd ..
 FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
 RUSTFLAGS=""
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
 RUSTFLAGS="--cfg tokio_unstable"
 fi
-alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features cli,daemon,$FEATURES --locked --bin startbox --target=$ARCH-unknown-linux-musl"
+rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli,startd,$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl
-if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/startbox | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/startbox" | awk '{ print $3 }')" != "$UID" ]; then
-rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
+rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /root/.cargo"
 fi


@@ -2,35 +2,43 @@
 cd "$(dirname "${BASH_SOURCE[0]}")"
+source ./builder-alias.sh
 set -ea
 shopt -s expand_aliases
+PROFILE=${PROFILE:-release}
+if [ "${PROFILE}" = "release" ]; then
+BUILD_FLAGS="--release"
+else
+if [ "$PROFILE" != "debug" ]; then
+>&2 echo "Unknown profile $PROFILE: falling back to debug..."
+PROFILE=debug
+fi
+fi
 if [ -z "$ARCH" ]; then
 ARCH=$(uname -m)
 fi
 if [ "$ARCH" = "arm64" ]; then
 ARCH="aarch64"
 fi
-USE_TTY=
-if tty -s; then
-USE_TTY="-it"
+RUST_ARCH="$ARCH"
+if [ "$ARCH" = "riscv64" ]; then
+RUST_ARCH="riscv64gc"
 fi
 cd ..
 FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
 RUSTFLAGS=""
-if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
+if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
 RUSTFLAGS="--cfg tokio_unstable"
 fi
-alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
 echo "FEATURES=\"$FEATURES\""
 echo "RUSTFLAGS=\"$RUSTFLAGS\""
-rust-musl-builder sh -c "cd core && cargo test --release --features=test,$FEATURES 'export_bindings_' && chown \$UID:\$UID startos/bindings"
+rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features test,$FEATURES --locked 'export_bindings_'
-if [ "$(ls -nd core/startos/bindings | awk '{ print $3 }')" != "$UID" ]; then
+if [ "$(ls -nd "core/startos/bindings" | awk '{ print $3 }')" != "$UID" ]; then
-rust-musl-builder sh -c "cd core && chown -R $UID:$UID startos/bindings && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
+rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID core/startos/bindings && chown -R $UID:$UID /root/.cargo"
 fi

core/build-tunnelbox.sh Executable file (46 lines)

@@ -0,0 +1,46 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-zig-builder cargo zigbuild --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-tunnel,tunnel,$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl
if [ "$(ls -nd "core/target/$RUST_ARCH-unknown-linux-musl/$PROFILE/tunnelbox" | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /root/.cargo"
fi

core/builder-alias.sh Normal file (8 lines)

@@ -0,0 +1,8 @@
#!/bin/bash
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-zig-builder'='docker run '"$USE_TTY"' --rm -e "RUSTFLAGS=$RUSTFLAGS" -e "CFLAGS=-D_FORTIFY_SOURCE=2" -e "CXXFLAGS=-D_FORTIFY_SOURCE=2" -e SCCACHE_GHA_ENABLED -e SCCACHE_GHA_VERSION -e ACTIONS_RESULTS_URL -e ACTIONS_RUNTIME_TOKEN -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$HOME/.cache/sccache":/root/.cache/sccache -v "$(pwd)":/workdir -w /workdir -P start9/cargo-zigbuild'
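Note the quoting in the alias above: `$USE_TTY` sits outside the single quotes (`'docker run '"$USE_TTY"' --rm ...'`), so it is expanded once when the alias is defined, while `$RUSTFLAGS` and `$(pwd)` stay inside the single quotes and are expanded each time the alias runs. A small sketch of that distinction (variable names here are illustrative, not from the script):

```shell
#!/bin/bash
shopt -s expand_aliases            # aliases need this in non-interactive bash
BAKED=at-definition
alias demo='echo '"$BAKED"' $LATE' # $BAKED expands now; $LATE at call time
BAKED=changed-later                # no effect: the alias body is already fixed
LATE=at-call
demo                               # prints the baked value plus the late one
```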


@@ -16,4 +16,4 @@ if [ "$PLATFORM" = "arm64" ]; then
PLATFORM="aarch64" PLATFORM="aarch64"
fi fi
cargo install --path=./startos --no-default-features --features=cli,docker,registry --bin start-cli --locked cargo install --path=./startos --no-default-features --features=cli,docker --bin start-cli --locked


@@ -1,22 +1,28 @@
[package] [package]
edition = "2021"
name = "models" name = "models"
version = "0.1.0" version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
arti = ["arti-client"]
[dependencies] [dependencies]
arti-client = { version = "0.33", default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
axum = "0.8.4" axum = "0.8.4"
base64 = "0.22.1" base64 = "0.22.1"
color-eyre = "0.6.2" color-eyre = "0.6.2"
ed25519-dalek = { version = "2.0.0", features = ["serde"] } ed25519-dalek = { version = "2.0.0", features = ["serde"] }
gpt = "4.1.0"
lazy_static = "1.4"
mbrman = "0.6.0"
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [ exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
"serde", "serde",
] } ] }
gpt = "4.1.0"
ipnet = "2.8.0" ipnet = "2.8.0"
lazy_static = "1.4"
lettre = { version = "0.11", default-features = false }
mbrman = "0.6.0"
miette = "7.6.0"
num_enum = "0.7.1" num_enum = "0.7.1"
openssl = { version = "0.10.57", features = ["vendored"] } openssl = { version = "0.10.57", features = ["vendored"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [ patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
@@ -29,16 +35,12 @@ rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch =
rustls = "0.23" rustls = "0.23"
serde = { version = "1.0", features = ["derive", "rc"] } serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0" serde_json = "1.0"
sqlx = { version = "0.8.6", features = [
"chrono",
"runtime-tokio-rustls",
"postgres",
] }
ssh-key = "0.6.2" ssh-key = "0.6.2"
ts-rs = { git = "https://github.com/dr-bonez/ts-rs.git", branch = "feature/top-level-as" } # "8"
thiserror = "2.0" thiserror = "2.0"
tokio = { version = "1", features = ["full"] } tokio = { version = "1", features = ["full"] }
torut = { git = "https://github.com/Start9Labs/torut.git", branch = "update/dependencies" } torut = "0.2.1"
tracing = "0.1.39" tracing = "0.1.39"
yasi = "0.1.5" ts-rs = "9"
typeid = "1"
yasi = { version = "0.1.6", features = ["serde", "ts-rs"] }
zbus = "5" zbus = "5"


@@ -1,3 +1,3 @@
// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually. // This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs). Do not edit this file manually.
export type ServiceInterfaceId = string; export type ServiceInterfaceId = string;


@@ -1,5 +1,6 @@
use std::borrow::Cow; use std::borrow::Cow;
use std::path::Path; use std::path::Path;
use std::str::FromStr;
use base64::Engine; use base64::Engine;
use color_eyre::eyre::eyre; use color_eyre::eyre::eyre;
@@ -14,28 +15,26 @@ use crate::{mime, Error, ErrorKind, ResultExt};
#[derive(Clone, TS)] #[derive(Clone, TS)]
#[ts(type = "string")] #[ts(type = "string")]
pub struct DataUrl<'a> { pub struct DataUrl<'a> {
mime: InternedString, pub mime: InternedString,
data: Cow<'a, [u8]>, pub data: Cow<'a, [u8]>,
} }
impl<'a> DataUrl<'a> { impl<'a> DataUrl<'a> {
pub const DEFAULT_MIME: &'static str = "application/octet-stream"; pub const DEFAULT_MIME: &'static str = "application/octet-stream";
pub const MAX_SIZE: u64 = 100 * 1024; pub const MAX_SIZE: u64 = 100 * 1024;
// data:{mime};base64,{data} fn to_string(&self) -> String {
pub fn to_string(&self) -> String {
use std::fmt::Write; use std::fmt::Write;
let mut res = String::with_capacity(self.data_url_len_without_mime() + self.mime.len()); let mut res = String::with_capacity(self.len());
let _ = write!(res, "data:{};base64,", self.mime); write!(&mut res, "{self}").unwrap();
base64::engine::general_purpose::STANDARD.encode_string(&self.data, &mut res);
res res
} }
fn data_url_len_without_mime(&self) -> usize { fn len_without_mime(&self) -> usize {
5 + 8 + (4 * self.data.len() / 3) + 3 5 + 8 + (4 * self.data.len() / 3) + 3
} }
pub fn data_url_len(&self) -> usize { pub fn len(&self) -> usize {
self.data_url_len_without_mime() + self.mime.len() self.len_without_mime() + self.mime.len()
} }
pub fn from_slice(mime: &str, data: &'a [u8]) -> Self { pub fn from_slice(mime: &str, data: &'a [u8]) -> Self {
@@ -44,6 +43,10 @@ impl<'a> DataUrl<'a> {
data: Cow::Borrowed(data), data: Cow::Borrowed(data),
} }
} }
pub fn canonical_ext(&self) -> Option<&'static str> {
mime::unmime(&self.mime)
}
} }
impl DataUrl<'static> { impl DataUrl<'static> {
pub async fn from_reader( pub async fn from_reader(
@@ -109,12 +112,57 @@ impl DataUrl<'static> {
} }
} }
impl<'a> std::fmt::Display for DataUrl<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"data:{};base64,{}",
self.mime,
base64::display::Base64Display::new(
&*self.data,
&base64::engine::general_purpose::STANDARD
)
)
}
}
impl<'a> std::fmt::Debug for DataUrl<'a> { impl<'a> std::fmt::Debug for DataUrl<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(&self.to_string()) std::fmt::Display::fmt(self, f)
} }
} }
#[derive(Debug)]
pub struct DataUrlParseError;
impl std::fmt::Display for DataUrlParseError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "invalid base64 url")
}
}
impl std::error::Error for DataUrlParseError {}
impl From<DataUrlParseError> for Error {
fn from(e: DataUrlParseError) -> Self {
Error::new(e, ErrorKind::ParseUrl)
}
}
impl FromStr for DataUrl<'static> {
type Err = DataUrlParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
s.strip_prefix("data:")
.and_then(|v| v.split_once(";base64,"))
.and_then(|(mime, data)| {
Some(DataUrl {
mime: InternedString::intern(mime),
data: Cow::Owned(
base64::engine::general_purpose::STANDARD
.decode(data)
.ok()?,
),
})
})
.ok_or(DataUrlParseError)
}
}
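The `FromStr` impl above centralizes data-URL parsing so `Deserialize` can delegate to it. A minimal std-only sketch of the same parsing shape (the real impl also base64-decodes the payload via the `base64` crate; that step is elided here, and `split_data_url` is a hypothetical helper name):

```rust
// Split "data:{mime};base64,{payload}" into (mime, payload).
// Malformed input yields None rather than panicking, mirroring the
// Option-chaining style of the FromStr impl above.
fn split_data_url(s: &str) -> Option<(&str, &str)> {
    s.strip_prefix("data:")?.split_once(";base64,")
}

fn main() {
    let (mime, payload) = split_data_url("data:image/png;base64,AAAA").unwrap();
    assert_eq!(mime, "image/png");
    assert_eq!(payload, "AAAA");
    // Missing "data:" prefix falls through to None.
    assert!(split_data_url("image/png;base64,AAAA").is_none());
}
```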
impl<'de> Deserialize<'de> for DataUrl<'static> { impl<'de> Deserialize<'de> for DataUrl<'static> {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where where
@@ -130,21 +178,9 @@ impl<'de> Deserialize<'de> for DataUrl<'static> {
where where
E: serde::de::Error, E: serde::de::Error,
{ {
v.strip_prefix("data:") v.parse().map_err(|_| {
.and_then(|v| v.split_once(";base64,")) E::invalid_value(serde::de::Unexpected::Str(v), &"a valid base64 data url")
.and_then(|(mime, data)| { })
Some(DataUrl {
mime: InternedString::intern(mime),
data: Cow::Owned(
base64::engine::general_purpose::STANDARD
.decode(data)
.ok()?,
),
})
})
.ok_or_else(|| {
E::invalid_value(serde::de::Unexpected::Str(v), &"a valid base64 data url")
})
} }
} }
deserializer.deserialize_any(Visitor) deserializer.deserialize_any(Visitor)
@@ -168,6 +204,6 @@ fn doesnt_reallocate() {
mime: InternedString::intern("png"), mime: InternedString::intern("png"),
data: Cow::Borrowed(&random[..i]), data: Cow::Borrowed(&random[..i]),
}; };
assert_eq!(icon.to_string().capacity(), icon.data_url_len()); assert_eq!(icon.to_string().capacity(), icon.len());
} }
} }


@@ -94,6 +94,7 @@ pub enum ErrorKind {
DBus = 75, DBus = 75,
InstallFailed = 76, InstallFailed = 76,
UpdateFailed = 77, UpdateFailed = 77,
Smtp = 78,
} }
impl ErrorKind { impl ErrorKind {
pub fn as_str(&self) -> &'static str { pub fn as_str(&self) -> &'static str {
@@ -176,6 +177,7 @@ impl ErrorKind {
DBus => "DBus Error", DBus => "DBus Error",
InstallFailed => "Install Failed", InstallFailed => "Install Failed",
UpdateFailed => "Update Failed", UpdateFailed => "Update Failed",
Smtp => "SMTP Error",
} }
} }
} }
@@ -185,9 +187,9 @@ impl Display for ErrorKind {
} }
} }
#[derive(Debug)]
pub struct Error { pub struct Error {
pub source: color_eyre::eyre::Error, pub source: color_eyre::eyre::Error,
pub debug: Option<color_eyre::eyre::Error>,
pub kind: ErrorKind, pub kind: ErrorKind,
pub revision: Option<Revision>, pub revision: Option<Revision>,
pub task: Option<JoinHandle<()>>, pub task: Option<JoinHandle<()>>,
@@ -195,13 +197,29 @@ pub struct Error {
impl Display for Error { impl Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}: {}", self.kind.as_str(), self.source) write!(f, "{}: {:#}", self.kind.as_str(), self.source)
}
}
impl Debug for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}: {:?}",
self.kind.as_str(),
self.debug.as_ref().unwrap_or(&self.source)
)
} }
} }
impl Error { impl Error {
pub fn new<E: Into<color_eyre::eyre::Error>>(source: E, kind: ErrorKind) -> Self { pub fn new<E: Into<color_eyre::eyre::Error> + std::fmt::Debug + 'static>(
source: E,
kind: ErrorKind,
) -> Self {
let debug = (typeid::of::<E>() == typeid::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{source:?}"));
Error { Error {
source: source.into(), source: source.into(),
debug,
kind, kind,
revision: None, revision: None,
task: None, task: None,
@@ -209,11 +227,8 @@ impl Error {
} }
pub fn clone_output(&self) -> Self { pub fn clone_output(&self) -> Self {
Error { Error {
source: ErrorData { source: eyre!("{}", self.source),
details: format!("{}", self.source), debug: self.debug.as_ref().map(|e| eyre!("{e}")),
debug: format!("{:?}", self.source),
}
.into(),
kind: self.kind, kind: self.kind,
revision: self.revision.clone(), revision: self.revision.clone(),
task: None, task: None,
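The hunks above split `Error`'s user-facing `Display` (terse "Kind: message") from a hand-written `Debug` that prefers the richer captured report when one exists. A sketch of that pattern, using a hypothetical `MiniError` stand-in for the real eyre-backed `Error`:

```rust
use std::fmt;

// Display: terse, for end users. Debug: verbose, falls back to the
// plain message when no richer capture was recorded.
struct MiniError {
    kind: &'static str,
    source: String,
    debug: Option<String>,
}

impl fmt::Display for MiniError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.kind, self.source)
    }
}

impl fmt::Debug for MiniError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}: {}", self.kind, self.debug.as_ref().unwrap_or(&self.source))
    }
}

fn main() {
    let e = MiniError {
        kind: "SMTP Error",
        source: "connection refused".into(),
        debug: Some("connection refused\n\nbacktrace: ...".into()),
    };
    assert_eq!(e.to_string(), "SMTP Error: connection refused");
    assert!(format!("{e:?}").contains("backtrace"));
}
```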
@@ -288,11 +303,6 @@ impl From<patch_db::Error> for Error {
Error::new(e, ErrorKind::Database) Error::new(e, ErrorKind::Database)
} }
} }
impl From<sqlx::Error> for Error {
fn from(e: sqlx::Error) -> Self {
Error::new(e, ErrorKind::Database)
}
}
impl From<ed25519_dalek::SignatureError> for Error { impl From<ed25519_dalek::SignatureError> for Error {
fn from(e: ed25519_dalek::SignatureError) -> Self { fn from(e: ed25519_dalek::SignatureError) -> Self {
Error::new(e, ErrorKind::InvalidSignature) Error::new(e, ErrorKind::InvalidSignature)
@@ -303,11 +313,6 @@ impl From<std::net::AddrParseError> for Error {
Error::new(e, ErrorKind::ParseNetAddress) Error::new(e, ErrorKind::ParseNetAddress)
} }
} }
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<ipnet::AddrParseError> for Error { impl From<ipnet::AddrParseError> for Error {
fn from(e: ipnet::AddrParseError) -> Self { fn from(e: ipnet::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress) Error::new(e, ErrorKind::ParseNetAddress)
@@ -353,8 +358,14 @@ impl From<reqwest::Error> for Error {
Error::new(e, kind) Error::new(e, kind)
} }
} }
impl From<torut::onion::OnionAddressParseError> for Error { #[cfg(feature = "arti")]
fn from(e: torut::onion::OnionAddressParseError) -> Self { impl From<arti_client::Error> for Error {
fn from(e: arti_client::Error) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor) Error::new(e, ErrorKind::Tor)
} }
} }
@@ -368,6 +379,21 @@ impl From<rustls::Error> for Error {
Error::new(e, ErrorKind::OpenSsl) Error::new(e, ErrorKind::OpenSsl)
} }
} }
impl From<lettre::error::Error> for Error {
fn from(e: lettre::error::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::transport::smtp::Error> for Error {
fn from(e: lettre::transport::smtp::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::address::AddressError> for Error {
fn from(e: lettre::address::AddressError) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<patch_db::value::Error> for Error { impl From<patch_db::value::Error> for Error {
fn from(value: patch_db::value::Error) -> Self { fn from(value: patch_db::value::Error) -> Self {
match value.kind { match value.kind {
@@ -549,25 +575,24 @@ where
impl<T, E> ResultExt<T, E> for Result<T, E> impl<T, E> ResultExt<T, E> for Result<T, E>
where where
color_eyre::eyre::Error: From<E>, color_eyre::eyre::Error: From<E>,
E: std::fmt::Debug + 'static,
{ {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> { fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error { self.map_err(|e| Error::new(e, kind))
source: e.into(),
kind,
revision: None,
task: None,
})
} }
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> { fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| { self.map_err(|e| {
let (kind, ctx) = f(&e); let (kind, ctx) = f(&e);
let debug = (typeid::of::<E>() == typeid::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{ctx}: {e:?}"));
let source = color_eyre::eyre::Error::from(e); let source = color_eyre::eyre::Error::from(e);
let ctx = format!("{}: {}", ctx, source); let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(ctx); let source = source.wrap_err(with_ctx);
Error { Error {
kind, kind,
source, source,
debug,
revision: None, revision: None,
task: None, task: None,
} }
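Both `Error::new` and `with_ctx` above use a runtime type check (`typeid::of::<E>()`) to capture extra debug detail only when the wrapped error is already an eyre report. A std-only sketch of the technique, where `RichError` is a hypothetical stand-in for `color_eyre::eyre::Error` (note std's `TypeId` requires `E: 'static`, matching the new bound on `Error::new`):

```rust
use std::any::TypeId;

#[derive(Debug)]
struct RichError; // stand-in for the "already rich" error type

// Capture a verbose rendering only for the one concrete type that
// carries extra context; all other error types return None.
fn capture_debug<E: std::fmt::Debug + 'static>(source: &E) -> Option<String> {
    (TypeId::of::<E>() == TypeId::of::<RichError>()).then(|| format!("{source:?}"))
}

fn main() {
    assert_eq!(capture_debug(&RichError).as_deref(), Some("RichError"));
    // A plain io-style error gets no extra capture.
    assert_eq!(capture_debug(&"plain message"), None);
}
```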
@@ -588,25 +613,24 @@ where
} }
impl<T> ResultExt<T, Error> for Result<T, Error> { impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> { fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error { self.map_err(|e| Error { kind, ..e })
source: e.source,
kind,
revision: e.revision,
task: e.task,
})
} }
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> { fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| { self.map_err(|e| {
let (kind, ctx) = f(&e); let (kind, ctx) = f(&e);
let source = e.source; let source = e.source;
let ctx = format!("{}: {}", ctx, source); let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(ctx); let source = source.wrap_err(with_ctx);
let debug = e.debug.map(|e| {
let with_ctx = format!("{ctx}: {e}");
e.wrap_err(with_ctx)
});
Error { Error {
kind, kind,
source, source,
revision: e.revision, debug,
task: e.task, ..e
} }
}) })
} }


@@ -0,0 +1,60 @@
use std::convert::Infallible;
use std::path::Path;
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use yasi::InternedString;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
#[ts(type = "string")]
pub struct GatewayId(InternedString);
impl GatewayId {
pub fn as_str(&self) -> &str {
&*self.0
}
}
impl From<InternedString> for GatewayId {
fn from(value: InternedString) -> Self {
Self(value)
}
}
impl From<GatewayId> for InternedString {
fn from(value: GatewayId) -> Self {
value.0
}
}
impl FromStr for GatewayId {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(GatewayId(InternedString::intern(s)))
}
}
impl AsRef<GatewayId> for GatewayId {
fn as_ref(&self) -> &GatewayId {
self
}
}
impl std::fmt::Display for GatewayId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", &self.0)
}
}
impl AsRef<str> for GatewayId {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
impl AsRef<Path> for GatewayId {
fn as_ref(&self) -> &Path {
self.0.as_ref()
}
}
impl<'de> Deserialize<'de> for GatewayId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::de::Deserializer<'de>,
{
Ok(GatewayId(serde::Deserialize::deserialize(deserializer)?))
}
}
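The new `GatewayId` file above follows the repo's interned-ID newtype pattern: infallible `FromStr`, `Display`, and `AsRef` conversions over an interned string. A minimal sketch of that pattern, substituting a plain `String` for `yasi::InternedString` to stay dependency-free:

```rust
use std::convert::Infallible;
use std::str::FromStr;

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct GatewayId(String);

impl FromStr for GatewayId {
    // Any string is a valid id, so parsing cannot fail.
    type Err = Infallible;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        Ok(GatewayId(s.to_owned()))
    }
}

impl std::fmt::Display for GatewayId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.write_str(&self.0)
    }
}

fn main() {
    let id: GatewayId = "eth0".parse().unwrap();
    // Display round-trips the original string.
    assert_eq!(id.to_string(), "eth0");
}
```

The newtype keeps gateway ids from being confused with other string-typed ids (`HostId`, `PackageId`) at compile time while costing nothing at runtime.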


@@ -60,20 +60,3 @@ impl AsRef<Path> for HostId {
self.0.as_ref().as_ref() self.0.as_ref().as_ref()
} }
} }
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for HostId {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::Database>::ArgumentBuffer<'q>,
) -> Result<sqlx::encode::IsNull, sqlx::error::BoxDynError> {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for HostId {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}


@@ -6,6 +6,7 @@ use serde::{Deserialize, Deserializer, Serialize, Serializer};
use yasi::InternedString; use yasi::InternedString;
mod action; mod action;
mod gateway;
mod health_check; mod health_check;
mod host; mod host;
mod image; mod image;
@@ -16,6 +17,7 @@ mod service_interface;
mod volume; mod volume;
pub use action::ActionId; pub use action::ActionId;
pub use gateway::GatewayId;
pub use health_check::HealthCheckId; pub use health_check::HealthCheckId;
pub use host::HostId; pub use host::HostId;
pub use image::ImageId; pub use image::ImageId;
@@ -116,20 +118,3 @@ impl Serialize for Id {
serializer.serialize_str(self) serializer.serialize_str(self)
} }
} }
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for Id {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::Database>::ArgumentBuffer<'q>,
) -> Result<sqlx::encode::IsNull, sqlx::error::BoxDynError> {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for Id {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}


@@ -87,20 +87,3 @@ impl Serialize for PackageId {
Serialize::serialize(&self.0, serializer) Serialize::serialize(&self.0, serializer)
} }
} }
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for PackageId {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::Database>::ArgumentBuffer<'q>,
) -> Result<sqlx::encode::IsNull, sqlx::error::BoxDynError> {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for PackageId {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}


@@ -44,23 +44,6 @@ impl AsRef<Path> for ServiceInterfaceId {
self.0.as_ref().as_ref() self.0.as_ref().as_ref()
} }
} }
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for ServiceInterfaceId {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::Database>::ArgumentBuffer<'q>,
) -> Result<sqlx::encode::IsNull, sqlx::error::BoxDynError> {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for ServiceInterfaceId {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}
impl FromStr for ServiceInterfaceId { impl FromStr for ServiceInterfaceId {
type Err = <Id as FromStr>::Err; type Err = <Id as FromStr>::Err;
fn from_str(s: &str) -> Result<Self, Self::Err> { fn from_str(s: &str) -> Result<Self, Self::Err> {


@@ -2,9 +2,21 @@
cd "$(dirname "${BASH_SOURCE[0]}")" cd "$(dirname "${BASH_SOURCE[0]}")"
source ./builder-alias.sh
set -ea set -ea
shopt -s expand_aliases shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
else
if [ "$PROFILE" != "debug" ]; then
>&2 echo "Unknown profile $PROFILE: falling back to debug..."
PROFILE=debug
fi
fi
if [ -z "$ARCH" ]; then if [ -z "$ARCH" ]; then
ARCH=$(uname -m) ARCH=$(uname -m)
fi fi
@@ -22,15 +34,12 @@ cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')" FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS="" RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable" RUSTFLAGS="--cfg tokio_unstable"
fi fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\"" echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\"" echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "apt-get update && apt-get install -y rsync && cd core && cargo test --release --features=test,$FEATURES --workspace --locked --target=$ARCH-unknown-linux-musl -- --skip export_bindings_ && chown \$UID:\$UID target" rust-zig-builder cargo test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=test,$FEATURES --workspace --locked -- --skip export_bindings_
if [ "$(ls -nd core/target | awk '{ print $3 }')" != "$UID" ]; then rust-zig-builder sh -c "chown -R $UID:$UID core/target && chown -R $UID:$UID /root/.cargo"
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi


@@ -2,20 +2,20 @@
authors = ["Aiden McClelland <me@drbonez.dev>"] authors = ["Aiden McClelland <me@drbonez.dev>"]
description = "The core of StartOS" description = "The core of StartOS"
documentation = "https://docs.rs/start-os" documentation = "https://docs.rs/start-os"
edition = "2021" edition = "2024"
keywords = [ keywords = [
"self-hosted",
"raspberry-pi",
"privacy",
"bitcoin", "bitcoin",
"full-node", "full-node",
"lightning", "lightning",
"privacy",
"raspberry-pi",
"self-hosted",
] ]
license = "MIT"
name = "start-os" name = "start-os"
readme = "README.md" readme = "README.md"
repository = "https://github.com/Start9Labs/start-os" repository = "https://github.com/Start9Labs/start-os"
version = "0.4.0-alpha.9" # VERSION_BUMP version = "0.4.0-alpha.13" # VERSION_BUMP
license = "MIT"
[lib] [lib]
name = "startos" name = "startos"
@@ -37,33 +37,65 @@ path = "src/main.rs"
name = "registrybox" name = "registrybox"
path = "src/main.rs" path = "src/main.rs"
[[bin]]
name = "tunnelbox"
path = "src/main.rs"
[features] [features]
cli = [] arti = [
container-runtime = ["procfs", "pty-process"] "arti-client",
daemon = ["mail-send"] "models/arti",
registry = [] "safelog",
default = ["cli", "daemon", "registry", "container-runtime"] "tor-cell",
dev = [] "tor-hscrypto",
unstable = ["console-subscriber", "tokio/tracing"] "tor-hsservice",
"tor-keymgr",
"tor-llcrypto",
"tor-proto",
"tor-rtcompat",
]
cli = ["cli-registry", "cli-startd", "cli-tunnel"]
cli-container = ["procfs", "pty-process"]
cli-registry = []
cli-startd = []
cli-tunnel = []
console = ["console-subscriber", "tokio/tracing"]
default = ["cli", "cli-container", "registry", "startd", "tunnel"]
dev = ["backtrace-on-stack-overflow"]
docker = [] docker = []
registry = []
startd = []
test = [] test = []
tunnel = []
unstable = ["backtrace-on-stack-overflow"]
[dependencies] [dependencies]
aes = { version = "0.7.5", features = ["ctr"] } aes = { version = "0.7.5", features = ["ctr"] }
arti-client = { version = "0.33", features = [
"compression",
"ephemeral-keystore",
"experimental-api",
"onion-service-client",
"onion-service-service",
"rustls",
"static",
"tokio",
], default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [ async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [
"use_rustls", "use_rustls",
"use_tokio", "use_tokio",
] } ] }
async-compression = { version = "0.4.4", features = [ async-compression = { version = "0.4.32", features = [
"gzip",
"brotli", "brotli",
"gzip",
"tokio", "tokio",
"zstd",
] } ] }
async-stream = "0.3.5" async-stream = "0.3.5"
async-trait = "0.1.74" async-trait = "0.1.74"
axum = { version = "0.8.4", features = ["ws"] } axum = { version = "0.8.4", features = ["ws"] }
backtrace-on-stack-overflow = { version = "0.3.0", optional = true }
barrage = "0.2.3" barrage = "0.2.3"
backhand = "0.21.0"
base32 = "0.5.0" base32 = "0.5.0"
base64 = "0.22.1" base64 = "0.22.1"
base64ct = "1.6.0" base64ct = "1.6.0"
@@ -78,16 +110,19 @@ console-subscriber = { version = "0.4.1", optional = true }
const_format = "0.2.34" const_format = "0.2.34"
cookie = "0.18.0" cookie = "0.18.0"
cookie_store = "0.21.0" cookie_store = "0.21.0"
curve25519-dalek = "4.1.3"
der = { version = "0.7.9", features = ["derive", "pem"] } der = { version = "0.7.9", features = ["derive", "pem"] }
digest = "0.10.7" digest = "0.10.7"
divrem = "1.0.0" divrem = "1.0.0"
ed25519 = { version = "2.2.3", features = ["pkcs8", "pem", "alloc"] } dns-lookup = "2.1.0"
ed25519-dalek = { version = "2.1.1", features = [ ed25519 = { version = "2.2.3", features = ["alloc", "pem", "pkcs8"] }
ed25519-dalek = { version = "2.2.0", features = [
"digest",
"hazmat",
"pkcs8",
"rand_core",
"serde", "serde",
"zeroize", "zeroize",
"rand_core",
"digest",
"pkcs8",
] } ] }
ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" } ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" }
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [ exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
@@ -99,31 +134,34 @@ futures = "0.3.28"
gpt = "4.1.0" gpt = "4.1.0"
helpers = { path = "../helpers" } helpers = { path = "../helpers" }
hex = "0.4.3" hex = "0.4.3"
hickory-client = "0.25.2"
hickory-server = "0.25.2"
hmac = "0.12.1" hmac = "0.12.1"
http = "1.0.0" http = "1.0.0"
http-body-util = "0.1" http-body-util = "0.1"
hyper = { version = "1.5", features = ["server", "http1", "http2"] } hyper = { version = "1.5", features = ["http1", "http2", "server"] }
hyper-util = { version = "0.1.10", features = [ hyper-util = { version = "0.1.10", features = [
"http1",
"http2",
"server", "server",
"server-auto", "server-auto",
"server-graceful", "server-graceful",
"service", "service",
"http1",
"http2",
"tokio", "tokio",
] } ] }
id-pool = { version = "0.2.2", default-features = false, features = [ id-pool = { version = "0.2.2", default-features = false, features = [
"serde", "serde",
"u16", "u16",
] } ] }
imbl = "4.0.1" iddqd = "0.3.14"
imbl-value = "0.3.2" imbl = { version = "6", features = ["serde", "small-chunks"] }
imbl-value = { version = "0.4.3", features = ["ts-rs"] }
include_dir = { version = "0.7.3", features = ["metadata"] } include_dir = { version = "0.7.3", features = ["metadata"] }
indexmap = { version = "2.0.2", features = ["serde"] } indexmap = { version = "2.0.2", features = ["serde"] }
indicatif = { version = "0.17.7", features = ["tokio"] } indicatif = { version = "0.17.7", features = ["tokio"] }
inotify = "0.11.0"
integer-encoding = { version = "4.0.0", features = ["tokio_async"] } integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
ipnet = { version = "2.8.0", features = ["serde"] } ipnet = { version = "2.8.0", features = ["serde"] }
iprange = { version = "0.6.7", features = ["serde"] }
isocountry = "0.3.2" isocountry = "0.3.2"
itertools = "0.14.0" itertools = "0.14.0"
jaq-core = "0.10.1" jaq-core = "0.10.1"
@@ -133,10 +171,20 @@ jsonpath_lib = { git = "https://github.com/Start9Labs/jsonpath.git" }
lazy_async_pool = "0.3.3" lazy_async_pool = "0.3.3"
lazy_format = "2.0" lazy_format = "2.0"
lazy_static = "1.4.0" lazy_static = "1.4.0"
lettre = { version = "0.11.18", default-features = false, features = [
"aws-lc-rs",
"builder",
"hostname",
"pool",
"rustls-platform-verifier",
"smtp-transport",
"tokio1-rustls",
] }
libc = "0.2.149" libc = "0.2.149"
log = "0.4.20" log = "0.4.20"
mio = "1"
mbrman = "0.6.0" mbrman = "0.6.0"
miette = { version = "7.6.0", features = ["fancy"] }
mio = "1"
models = { version = "*", path = "../models" } models = { version = "*", path = "../models" }
new_mime_guess = "4" new_mime_guess = "4"
nix = { version = "0.30.1", features = [ nix = { version = "0.30.1", features = [
@@ -150,8 +198,8 @@ nix = { version = "0.30.1", features = [
] } ] }
nom = "8.0.0" nom = "8.0.0"
num = "0.4.1" num = "0.4.1"
num_enum = "0.7.0"
num_cpus = "1.16.0" num_cpus = "1.16.0"
num_enum = "0.7.0"
once_cell = "1.19.0" once_cell = "1.19.0"
openssh-keys = "0.6.2" openssh-keys = "0.6.2"
openssl = { version = "0.10.57", features = ["vendored"] } openssl = { version = "0.10.57", features = ["vendored"] }
@@ -168,67 +216,79 @@ proptest = "1.3.1"
proptest-derive = "0.5.0" proptest-derive = "0.5.0"
pty-process = { version = "0.5.1", optional = true } pty-process = { version = "0.5.1", optional = true }
qrcode = "0.14.1" qrcode = "0.14.1"
rand = "0.9.0" r3bl_tui = "0.7.6"
rand = "0.9.2"
regex = "1.10.2" regex = "1.10.2"
reqwest = { version = "0.12.4", features = ["stream", "json", "socks"] } reqwest = { version = "0.12.4", features = ["json", "socks", "stream"] }
reqwest_cookie_store = "0.8.0" reqwest_cookie_store = "0.8.0"
rpassword = "7.2.0" rpassword = "7.2.0"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" } rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", rev = "068db90" }
rust-argon2 = "2.0.0" rust-argon2 = "2.0.0"
rustyline-async = "0.4.1" safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
semver = { version = "1.0.20", features = ["serde"] } semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] } serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" } serde_cbor = { package = "ciborium", version = "0.2.1" }
serde_json = "1.0" serde_json = "1.0"
serde_toml = { package = "toml", version = "0.8.2" } serde_toml = { package = "toml", version = "0.8.2" }
serde_urlencoded = "0.7" serde_urlencoded = "0.7"
-serde_with = { version = "3.4.0", features = ["macros", "json"] }
+serde_with = { version = "3.4.0", features = ["json", "macros"] }
 serde_yaml = { package = "serde_yml", version = "0.0.12" }
 sha-crypt = "0.5.0"
 sha2 = "0.10.2"
 shell-words = "1"
 signal-hook = "0.3.17"
 simple-logging = "2.0.2"
-socket2 = "0.5.7"
+socket2 = { version = "0.6.0", features = ["all"] }
+socks5-impl = { version = "0.7.2", features = ["client", "server"] }
 sqlx = { version = "0.8.6", features = [
-    "chrono",
-    "runtime-tokio-rustls",
     "postgres",
-] }
+    "runtime-tokio-rustls",
+], default-features = false }
 sscanf = "0.4.1"
 ssh-key = { version = "0.6.2", features = ["ed25519"] }
 tar = "0.4.40"
 termion = "4.0.5"
-thiserror = "2.0.12"
 textwrap = "0.16.1"
+thiserror = "2.0.12"
 tokio = { version = "1.38.1", features = ["full"] }
-tokio-rustls = "0.26.0"
-tokio-socks = "0.5.1"
-tokio-stream = { version = "0.1.14", features = ["io-util", "sync", "net"] }
+tokio-rustls = "0.26.4"
+tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
 tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
 tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
 tokio-util = { version = "0.7.9", features = ["io"] }
-torut = { git = "https://github.com/Start9Labs/torut.git", branch = "update/dependencies", features = [
-    "serialize",
-] }
+tor-cell = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+tor-hscrypto = { version = "0.33", features = [
+    "full",
+], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+tor-hsservice = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+tor-keymgr = { version = "0.33", features = [
+    "ephemeral-keystore",
+], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+tor-llcrypto = { version = "0.33", features = [
+    "full",
+], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+tor-proto = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+tor-rtcompat = { version = "0.33", features = [
+    "rustls",
+    "tokio",
+], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
+torut = "0.2.1"
 tower-service = "0.3.3"
 tracing = "0.1.39"
 tracing-error = "0.2.0"
 tracing-futures = "0.2.5"
 tracing-journald = "0.3.0"
 tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
-trust-dns-server = "0.23.1"
-ts-rs = { git = "https://github.com/dr-bonez/ts-rs.git", branch = "feature/top-level-as" } # "8.1.0"
+ts-rs = "9.0.1"
 typed-builder = "0.21.0"
 unix-named-pipe = "0.2.0"
 url = { version = "2.4.1", features = ["serde"] }
 urlencoding = "2.1.3"
 uuid = { version = "1.4.1", features = ["v4"] }
+visit-rs = "0.1.1"
+x25519-dalek = { version = "2.0.1", features = ["static_secrets"] }
 zbus = "5.1.1"
 zeroize = "1.6.0"
-mail-send = { git = "https://github.com/dr-bonez/mail-send.git", branch = "main", optional = true }
-rustls = "0.23.20"
-rustls-pki-types = { version = "1.10.1", features = ["alloc"] }

 [profile.test]
 opt-level = 3
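The new arti `tor-*` dependencies above are all declared `optional = true`, which in Cargo means they only compile when some feature enables them. A hedged sketch of how such optional git dependencies are typically surfaced behind a feature (the feature name `embedded-tor` is illustrative, not taken from this diff):

```toml
# Hypothetical [features] wiring for the optional arti crates above.
# The feature name is an assumption for illustration only.
[features]
embedded-tor = [
    "dep:tor-cell",
    "dep:tor-hscrypto",
    "dep:tor-hsservice",
    "dep:tor-keymgr",
    "dep:tor-llcrypto",
    "dep:tor-proto",
    "dep:tor-rtcompat",
]
```

With this shape, `cargo build --features embedded-tor` pulls in the patched `Start9Labs/arti` branch, while a default build leaves all seven crates out of the dependency graph.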


@@ -1,12 +1,14 @@
+use std::collections::BTreeMap;
 use std::time::SystemTime;

+use imbl_value::InternedString;
 use openssl::pkey::{PKey, Private};
 use openssl::x509::X509;
-use torut::onion::TorSecretKeyV3;

 use crate::db::model::DatabaseModel;
-use crate::hostname::{generate_hostname, generate_id, Hostname};
+use crate::hostname::{Hostname, generate_hostname, generate_id};
 use crate::net::ssl::{generate_key, make_root_cert};
+use crate::net::tor::TorSecretKey;
 use crate::prelude::*;
 use crate::util::serde::Pem;
@@ -19,28 +21,28 @@ fn hash_password(password: &str) -> Result<String, Error> {
         .with_kind(crate::ErrorKind::PasswordHashGeneration)
 }

-#[derive(Debug, Clone)]
+#[derive(Clone)]
 pub struct AccountInfo {
     pub server_id: String,
     pub hostname: Hostname,
     pub password: String,
-    pub tor_keys: Vec<TorSecretKeyV3>,
+    pub tor_keys: Vec<TorSecretKey>,
     pub root_ca_key: PKey<Private>,
     pub root_ca_cert: X509,
     pub ssh_key: ssh_key::PrivateKey,
-    pub compat_s9pk_key: ed25519_dalek::SigningKey,
+    pub developer_key: ed25519_dalek::SigningKey,
 }

 impl AccountInfo {
     pub fn new(password: &str, start_time: SystemTime) -> Result<Self, Error> {
         let server_id = generate_id();
         let hostname = generate_hostname();
-        let tor_key = vec![TorSecretKeyV3::generate()];
+        let tor_key = vec![TorSecretKey::generate()];
         let root_ca_key = generate_key()?;
         let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
         let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
             &mut ssh_key::rand_core::OsRng::default(),
         ));
-        let compat_s9pk_key =
+        let developer_key =
             ed25519_dalek::SigningKey::generate(&mut ssh_key::rand_core::OsRng::default());

         Ok(Self {
             server_id,
@@ -50,7 +52,7 @@ impl AccountInfo {
             root_ca_key,
             root_ca_cert,
             ssh_key,
-            compat_s9pk_key,
+            developer_key,
         })
     }
@@ -74,7 +76,7 @@ impl AccountInfo {
         let root_ca_key = cert_store.as_root_key().de()?.0;
         let root_ca_cert = cert_store.as_root_cert().de()?.0;
         let ssh_key = db.as_private().as_ssh_privkey().de()?.0;
-        let compat_s9pk_key = db.as_private().as_compat_s9pk_key().de()?.0;
+        let compat_s9pk_key = db.as_private().as_developer_key().de()?.0;

         Ok(Self {
@@ -84,7 +86,7 @@ impl AccountInfo {
             root_ca_key,
             root_ca_cert,
             ssh_key,
-            compat_s9pk_key,
+            developer_key: compat_s9pk_key,
         })
     }
@@ -103,27 +105,36 @@ impl AccountInfo {
             &self
                 .tor_keys
                 .iter()
-                .map(|tor_key| tor_key.public().get_onion_address())
+                .map(|tor_key| tor_key.onion_address())
                 .collect(),
         )?;
+        server_info.as_password_hash_mut().ser(&self.password)?;
         db.as_private_mut().as_password_mut().ser(&self.password)?;
         db.as_private_mut()
             .as_ssh_privkey_mut()
             .ser(Pem::new_ref(&self.ssh_key))?;
         db.as_private_mut()
-            .as_compat_s9pk_key_mut()
-            .ser(Pem::new_ref(&self.compat_s9pk_key))?;
+            .as_developer_key_mut()
+            .ser(Pem::new_ref(&self.developer_key))?;
         let key_store = db.as_private_mut().as_key_store_mut();
         for tor_key in &self.tor_keys {
             key_store.as_onion_mut().insert_key(tor_key)?;
         }
         let cert_store = key_store.as_local_certs_mut();
-        cert_store
-            .as_root_key_mut()
-            .ser(Pem::new_ref(&self.root_ca_key))?;
-        cert_store
-            .as_root_cert_mut()
-            .ser(Pem::new_ref(&self.root_ca_cert))?;
+        if cert_store.as_root_cert().de()?.0 != self.root_ca_cert {
+            cert_store
+                .as_root_key_mut()
+                .ser(Pem::new_ref(&self.root_ca_key))?;
+            cert_store
+                .as_root_cert_mut()
+                .ser(Pem::new_ref(&self.root_ca_cert))?;
+            let int_key = crate::net::ssl::generate_key()?;
+            let int_cert =
+                crate::net::ssl::make_int_cert((&self.root_ca_key, &self.root_ca_cert), &int_key)?;
+            cert_store.as_int_key_mut().ser(&Pem(int_key))?;
+            cert_store.as_int_cert_mut().ser(&Pem(int_cert))?;
+            cert_store.as_leaves_mut().ser(&BTreeMap::new())?;
+        }
         Ok(())
     }
@@ -131,4 +142,17 @@ impl AccountInfo {
         self.password = hash_password(password)?;
         Ok(())
     }
+
+    pub fn hostnames(&self) -> impl IntoIterator<Item = InternedString> + Send + '_ {
+        [
+            self.hostname.no_dot_host_name(),
+            self.hostname.local_domain_name(),
+        ]
+        .into_iter()
+        .chain(
+            self.tor_keys
+                .iter()
+                .map(|k| InternedString::from_display(&k.onion_address())),
+        )
+    }
 }
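The new `AccountInfo::hostnames()` helper builds one iterator by chaining a fixed array of names with a mapped collection. A minimal std-only sketch of that same `[..].into_iter().chain(iter.map(..))` shape (the data and helper names here are illustrative, not StartOS APIs):

```rust
// Std-only sketch of the iterator shape used by AccountInfo::hostnames():
// a fixed array of local names chained with names derived from a list of keys.
fn hostnames(local: &str, onions: &[&str]) -> Vec<String> {
    [local.to_string(), format!("{local}.local")]
        .into_iter()
        .chain(onions.iter().map(|o| format!("{o}.onion")))
        .collect()
}

fn main() {
    let all = hostnames("start9", &["abcd"]);
    // The array elements come first, then the mapped items, in order.
    assert_eq!(all, ["start9", "start9.local", "abcd.onion"]);
    println!("{all:?}");
}
```

Returning `impl IntoIterator` as the real method does avoids allocating the `Vec` at all; the sketch collects only so the result is easy to inspect.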


@@ -4,7 +4,7 @@ use clap::{CommandFactory, FromArgMatches, Parser};
 pub use models::ActionId;
 use models::{PackageId, ReplayId};
 use qrcode::QrCode;
-use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
+use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tracing::instrument;
 use ts_rs::TS;
@@ -14,7 +14,7 @@ use crate::db::model::package::TaskSeverity;
 use crate::prelude::*;
 use crate::rpc_continuations::Guid;
 use crate::util::serde::{
-    display_serializable, HandlerExtSerde, StdinDeserializable, WithIoFormat,
+    HandlerExtSerde, StdinDeserializable, WithIoFormat, display_serializable,
 };

 pub fn action_api<C: Context>() -> ParentHandler<C> {
@@ -52,6 +52,8 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
 #[ts(export)]
 #[serde(rename_all = "camelCase")]
 pub struct ActionInput {
+    #[serde(default)]
+    pub event_id: Guid,
     #[ts(type = "Record<string, unknown>")]
     pub spec: Value,
     #[ts(type = "Record<string, unknown> | null")]
@@ -270,6 +272,7 @@ pub fn display_action_result<T: Serialize>(
 #[serde(rename_all = "camelCase")]
 pub struct RunActionParams {
     pub package_id: PackageId,
+    pub event_id: Option<Guid>,
     pub action_id: ActionId,
     #[ts(optional, type = "any")]
     pub input: Option<Value>,
@@ -278,6 +281,7 @@ pub struct RunActionParams {
 #[derive(Parser)]
 struct CliRunActionParams {
     pub package_id: PackageId,
+    pub event_id: Option<Guid>,
     pub action_id: ActionId,
     #[command(flatten)]
     pub input: StdinDeserializable<Option<Value>>,
@@ -286,12 +290,14 @@ impl From<CliRunActionParams> for RunActionParams {
     fn from(
         CliRunActionParams {
             package_id,
+            event_id,
             action_id,
             input,
         }: CliRunActionParams,
     ) -> Self {
         Self {
             package_id,
+            event_id,
             action_id,
             input: input.0,
         }
@@ -331,6 +337,7 @@ pub async fn run_action(
     ctx: RpcContext,
     RunActionParams {
         package_id,
+        event_id,
         action_id,
         input,
     }: RunActionParams,
@@ -340,7 +347,11 @@ pub async fn run_action(
         .await
         .as_ref()
         .or_not_found(lazy_format!("Manager for {}", package_id))?
-        .run_action(Guid::new(), action_id, input.unwrap_or_default())
+        .run_action(
+            event_id.unwrap_or_default(),
+            action_id,
+            input.unwrap_or_default(),
+        )
         .await
         .map(|res| res.map(ActionResult::upcast))
 }
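`run_action` now accepts an optional caller-supplied `event_id` and falls back to a freshly generated one via `unwrap_or_default()`. A std-only sketch of that defaulting pattern, with a hypothetical `Id` type standing in for `Guid` (made deterministic here so the example is checkable):

```rust
// Hypothetical Id type standing in for Guid; in the real code,
// Default would mint a fresh random id.
#[derive(Debug, Clone, PartialEq)]
struct Id(String);

impl Default for Id {
    fn default() -> Self {
        // Fixed value so the sketch is deterministic; the real Guid is random.
        Id("generated".into())
    }
}

// Reuse the caller's id when provided, otherwise mint a default one.
fn run_action(event_id: Option<Id>) -> Id {
    event_id.unwrap_or_default()
}

fn main() {
    assert_eq!(run_action(Some(Id("caller".into()))), Id("caller".into()));
    assert_eq!(run_action(None), Id::default());
    println!("ok");
}
```

Letting the caller pass the id makes the RPC replayable: retrying with the same `event_id` refers to the same action run instead of starting a new one.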


@@ -3,29 +3,27 @@ use std::collections::BTreeMap;
 use chrono::{DateTime, Utc};
 use clap::Parser;
 use color_eyre::eyre::eyre;
-use imbl_value::{json, InternedString};
+use imbl_value::{InternedString, json};
 use itertools::Itertools;
 use josekit::jwk::Jwk;
 use rpc_toolkit::yajrc::RpcError;
-use rpc_toolkit::{from_fn_async, Context, HandlerArgs, HandlerExt, ParentHandler};
+use rpc_toolkit::{CallRemote, Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tokio::io::AsyncWriteExt;
 use tracing::instrument;
 use ts_rs::TS;

 use crate::context::{CliContext, RpcContext};
-use crate::db::model::DatabaseModel;
 use crate::middleware::auth::{
-    AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes,
+    AsLogoutSessionId, AuthContext, HasLoggedOutSessions, HashSessionToken, LoginRes,
 };
 use crate::prelude::*;
 use crate::util::crypto::EncryptedWire;
 use crate::util::io::create_file_mod;
-use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
-use crate::{ensure_code, Error, ResultExt};
+use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};
+use crate::{Error, ResultExt, ensure_code};

 #[derive(Debug, Clone, Default, Deserialize, Serialize, TS)]
+#[ts(as = "BTreeMap::<String, Session>")]
 pub struct Sessions(pub BTreeMap<InternedString, Session>);

 impl Sessions {
     pub fn new() -> Self {
@@ -112,31 +110,34 @@ impl std::str::FromStr for PasswordType {
         })
     }
 }

-pub fn auth<C: Context>() -> ParentHandler<C> {
+pub fn auth<C: Context, AC: AuthContext>() -> ParentHandler<C>
+where
+    CliContext: CallRemote<AC>,
+{
     ParentHandler::new()
         .subcommand(
             "login",
-            from_fn_async(login_impl)
+            from_fn_async(login_impl::<AC>)
                 .with_metadata("login", Value::Bool(true))
                 .no_cli(),
         )
         .subcommand(
             "login",
-            from_fn_async(cli_login)
+            from_fn_async(cli_login::<AC>)
                 .no_display()
-                .with_about("Log in to StartOS server"),
+                .with_about("Log in a new auth session"),
         )
         .subcommand(
             "logout",
-            from_fn_async(logout)
+            from_fn_async(logout::<AC>)
                 .with_metadata("get_session", Value::Bool(true))
                 .no_display()
-                .with_about("Log out of StartOS server")
+                .with_about("Log out of current auth session")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
             "session",
-            session::<C>().with_about("List or kill StartOS sessions"),
+            session::<C, AC>().with_about("List or kill auth sessions"),
         )
         .subcommand(
             "reset-password",
@@ -146,7 +147,7 @@ pub fn auth<C: Context>() -> ParentHandler<C> {
             "reset-password",
             from_fn_async(cli_reset_password)
                 .no_display()
-                .with_about("Reset StartOS password"),
+                .with_about("Reset password"),
         )
         .subcommand(
             "get-pubkey",
@@ -172,17 +173,20 @@ fn gen_pwd() {
 }

 #[instrument(skip_all)]
-async fn cli_login(
+async fn cli_login<C: AuthContext>(
     HandlerArgs {
         context: ctx,
         parent_method,
         method,
         ..
     }: HandlerArgs<CliContext>,
-) -> Result<(), RpcError> {
+) -> Result<(), RpcError>
+where
+    CliContext: CallRemote<C>,
+{
     let password = rpassword::prompt_password("Password: ")?;
-    ctx.call_remote::<RpcContext>(
+    ctx.call_remote::<C>(
         &parent_method.into_iter().chain(method).join("."),
         json!({
             "password": password,
@@ -210,39 +214,31 @@ pub fn check_password(hash: &str, password: &str) -> Result<(), Error> {
     Ok(())
 }

-pub fn check_password_against_db(db: &DatabaseModel, password: &str) -> Result<(), Error> {
-    let pw_hash = db.as_private().as_password().de()?;
-    check_password(&pw_hash, password)?;
-    Ok(())
-}
-
 #[derive(Deserialize, Serialize, TS)]
 #[serde(rename_all = "camelCase")]
 #[ts(export)]
 pub struct LoginParams {
-    password: Option<PasswordType>,
+    password: String,
     #[ts(skip)]
-    #[serde(rename = "__auth_userAgent")] // from Auth middleware
+    #[serde(rename = "__Auth_userAgent")] // from Auth middleware
     user_agent: Option<String>,
     #[serde(default)]
     ephemeral: bool,
 }

 #[instrument(skip_all)]
-pub async fn login_impl(
-    ctx: RpcContext,
+pub async fn login_impl<C: AuthContext>(
+    ctx: C,
     LoginParams {
         password,
         user_agent,
         ephemeral,
     }: LoginParams,
 ) -> Result<LoginRes, Error> {
-    let password = password.unwrap_or_default().decrypt(&ctx)?;
     let tok = if ephemeral {
-        check_password_against_db(&ctx.db.peek().await, &password)?;
+        C::check_password(&ctx.db().peek().await, &password)?;
         let hash_token = HashSessionToken::new();
-        ctx.ephemeral_sessions.mutate(|s| {
+        ctx.ephemeral_sessions().mutate(|s| {
             s.0.insert(
                 hash_token.hashed().clone(),
                 Session {
@@ -254,11 +250,11 @@ pub async fn login_impl(
         });
         Ok(hash_token.to_login_res())
     } else {
-        ctx.db
+        ctx.db()
             .mutate(|db| {
-                check_password_against_db(db, &password)?;
+                C::check_password(db, &password)?;
                 let hash_token = HashSessionToken::new();
-                db.as_private_mut().as_sessions_mut().insert(
+                C::access_sessions(db).insert(
                     hash_token.hashed(),
                     &Session {
                         logged_in: Utc::now(),
@@ -273,12 +269,7 @@ pub async fn login_impl(
             .result
     }?;

-    if tokio::fs::metadata("/media/startos/config/overlay/etc/shadow")
-        .await
-        .is_err()
-    {
-        write_shadow(&password).await?;
-    }
+    ctx.post_login_hook(&password).await?;

     Ok(tok)
 }
@@ -288,12 +279,12 @@ pub async fn login_impl(
 #[command(rename_all = "kebab-case")]
 pub struct LogoutParams {
     #[ts(skip)]
-    #[serde(rename = "__auth_session")] // from Auth middleware
+    #[serde(rename = "__Auth_session")] // from Auth middleware
     session: InternedString,
 }

-pub async fn logout(
-    ctx: RpcContext,
+pub async fn logout<C: AuthContext>(
+    ctx: C,
     LogoutParams { session }: LogoutParams,
 ) -> Result<Option<HasLoggedOutSessions>, Error> {
     Ok(Some(
@@ -321,22 +312,25 @@ pub struct SessionList {
     sessions: Sessions,
 }

-pub fn session<C: Context>() -> ParentHandler<C> {
+pub fn session<C: Context, AC: AuthContext>() -> ParentHandler<C>
+where
+    CliContext: CallRemote<AC>,
+{
     ParentHandler::new()
         .subcommand(
             "list",
-            from_fn_async(list)
+            from_fn_async(list::<AC>)
                 .with_metadata("get_session", Value::Bool(true))
                 .with_display_serializable()
                 .with_custom_display_fn(|handle, result| display_sessions(handle.params, result))
-                .with_about("Display all server sessions")
+                .with_about("Display all auth sessions")
                 .with_call_remote::<CliContext>(),
         )
         .subcommand(
             "kill",
-            from_fn_async(kill)
+            from_fn_async(kill::<AC>)
                 .no_display()
-                .with_about("Terminate existing server session(s)")
+                .with_about("Terminate existing auth session(s)")
                 .with_call_remote::<CliContext>(),
         )
 }
@@ -379,18 +373,18 @@ fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) -> Resul
 pub struct ListParams {
     #[arg(skip)]
     #[ts(skip)]
-    #[serde(rename = "__auth_session")] // from Auth middleware
+    #[serde(rename = "__Auth_session")] // from Auth middleware
     session: Option<InternedString>,
 }

 // #[command(display(display_sessions))]
 #[instrument(skip_all)]
-pub async fn list(
-    ctx: RpcContext,
+pub async fn list<C: AuthContext>(
+    ctx: C,
     ListParams { session, .. }: ListParams,
 ) -> Result<SessionList, Error> {
-    let mut sessions = ctx.db.peek().await.into_private().into_sessions().de()?;
-    ctx.ephemeral_sessions.peek(|s| {
+    let mut sessions = C::access_sessions(&mut ctx.db().peek().await).de()?;
+    ctx.ephemeral_sessions().peek(|s| {
         sessions
             .0
             .extend(s.0.iter().map(|(k, v)| (k.clone(), v.clone())))
@@ -424,7 +418,7 @@ pub struct KillParams {
 }

 #[instrument(skip_all)]
-pub async fn kill(ctx: RpcContext, KillParams { ids }: KillParams) -> Result<(), Error> {
+pub async fn kill<C: AuthContext>(ctx: C, KillParams { ids }: KillParams) -> Result<(), Error> {
     HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId::new), &ctx).await?;
     Ok(())
 }
@@ -480,30 +474,19 @@ pub async fn reset_password_impl(
     let old_password = old_password.unwrap_or_default().decrypt(&ctx)?;
     let new_password = new_password.unwrap_or_default().decrypt(&ctx)?;

-    let mut account = ctx.account.write().await;
-    if !argon2::verify_encoded(&account.password, old_password.as_bytes())
-        .with_kind(crate::ErrorKind::IncorrectPassword)?
-    {
-        return Err(Error::new(
-            eyre!("Incorrect Password"),
-            crate::ErrorKind::IncorrectPassword,
-        ));
-    }
-    account.set_password(&new_password)?;
-    let account_password = &account.password;
-    let account = account.clone();
-    ctx.db
-        .mutate(|d| {
-            d.as_public_mut()
-                .as_server_info_mut()
-                .as_password_hash_mut()
-                .ser(account_password)?;
-            account.save(d)?;
-            Ok(())
-        })
-        .await
-        .result
+    let account = ctx.account.mutate(|account| {
+        if !argon2::verify_encoded(&account.password, old_password.as_bytes())
+            .with_kind(crate::ErrorKind::IncorrectPassword)?
+        {
+            return Err(Error::new(
+                eyre!("Incorrect Password"),
+                crate::ErrorKind::IncorrectPassword,
+            ));
+        }
+        account.set_password(&new_password)?;
+        Ok(account.clone())
+    })?;
+    ctx.db.mutate(|d| account.save(d)).await.result
 }

 #[instrument(skip_all)]
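The core of this file's change is generalizing the handlers from the concrete `RpcContext` to any `C: AuthContext`, so login, logout, and session listing are written once against a trait. A std-only sketch of the pattern; the trait and method names here are illustrative stand-ins, not the real `AuthContext`:

```rust
use std::collections::BTreeMap;

// Illustrative stand-in for the AuthContext abstraction: each context type
// supplies its own password check and its own session storage.
trait AuthCtx {
    fn check_password(&self, password: &str) -> Result<(), String>;
    fn sessions(&mut self) -> &mut BTreeMap<String, String>;
}

struct ServerCtx {
    password: String,
    sessions: BTreeMap<String, String>,
}

impl AuthCtx for ServerCtx {
    fn check_password(&self, password: &str) -> Result<(), String> {
        (self.password == password)
            .then_some(())
            .ok_or_else(|| "Incorrect Password".to_string())
    }
    fn sessions(&mut self) -> &mut BTreeMap<String, String> {
        &mut self.sessions
    }
}

// A handler written once against the trait works for every context type.
fn login<C: AuthCtx>(ctx: &mut C, password: &str, token: &str) -> Result<(), String> {
    ctx.check_password(password)?;
    ctx.sessions().insert(token.into(), "logged_in".into());
    Ok(())
}

fn main() {
    let mut ctx = ServerCtx {
        password: "hunter2".into(),
        sessions: BTreeMap::new(),
    };
    assert!(login(&mut ctx, "wrong", "t1").is_err());
    assert!(login(&mut ctx, "hunter2", "t1").is_ok());
    assert_eq!(ctx.sessions.len(), 1);
    println!("ok");
}
```

A second context type (say, a tunnel-side context) would implement the same trait and reuse `login` unchanged, which is presumably why the diff also threads `CliContext: CallRemote<AC>` bounds through the handler builders.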


@@ -13,9 +13,8 @@ use tokio::io::AsyncWriteExt;
 use tracing::instrument;
 use ts_rs::TS;

-use super::target::{BackupTargetId, PackageBackupInfo};
 use super::PackageBackupReport;
-use crate::auth::check_password_against_db;
+use super::target::{BackupTargetId, PackageBackupInfo};
 use crate::backup::os::OsBackup;
 use crate::backup::{BackupReport, ServerBackupReport};
 use crate::context::RpcContext;
@@ -24,7 +23,8 @@ use crate::db::model::{Database, DatabaseModel};
 use crate::disk::mount::backup::BackupMountGuard;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
-use crate::notifications::{notify, NotificationLevel};
+use crate::middleware::auth::AuthContext;
+use crate::notifications::{NotificationLevel, notify};
 use crate::prelude::*;
 use crate::util::io::dir_copy;
 use crate::util::serde::IoFormat;
@@ -170,7 +170,7 @@ pub async fn backup_all(
     let ((fs, package_ids, server_id), status_guard) = (
         ctx.db
             .mutate(|db| {
-                check_password_against_db(db, &password)?;
+                RpcContext::check_password(db, &password)?;
                 let fs = target_id.load(db)?;
                 let package_ids = if let Some(ids) = package_ids {
                     ids.into_iter().collect()
@@ -317,7 +317,7 @@ async fn perform_backup(
         .with_kind(ErrorKind::Filesystem)?;
     os_backup_file
         .write_all(&IoFormat::Json.to_vec(&OsBackup {
-            account: ctx.account.read().await.clone(),
+            account: ctx.account.peek(|a| a.clone()),
             ui,
         })?)
        .await?;
@@ -342,7 +342,7 @@ async fn perform_backup(
     let timestamp = Utc::now();
     backup_guard.unencrypted_metadata.version = crate::version::Current::default().semver().into();
-    backup_guard.unencrypted_metadata.hostname = ctx.account.read().await.hostname.clone();
+    backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.clone());
     backup_guard.unencrypted_metadata.timestamp = timestamp.clone();
     backup_guard.metadata.version = crate::version::Current::default().semver().into();
     backup_guard.metadata.timestamp = Some(timestamp);
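The `ctx.account.read().await.clone()` call sites become closure-based `ctx.account.peek(|a| ..)` / `ctx.account.mutate(|a| ..)` accessors. A std-only sketch of such a wrapper using a `Mutex` (the `Watched` type is hypothetical; the real wrapper in the codebase may differ):

```rust
use std::sync::Mutex;

// Minimal sketch of a closure-scoped accessor like the account.peek(..) /
// account.mutate(..) calls in this diff. The wrapper type is hypothetical.
struct Watched<T>(Mutex<T>);

impl<T> Watched<T> {
    fn new(value: T) -> Self {
        Watched(Mutex::new(value))
    }
    // Access is scoped to the closure: the guard cannot escape or be held
    // across an .await point, unlike a returned RwLock read guard.
    fn peek<U>(&self, f: impl FnOnce(&T) -> U) -> U {
        f(&self.0.lock().unwrap())
    }
    fn mutate<U>(&self, f: impl FnOnce(&mut T) -> U) -> U {
        f(&mut self.0.lock().unwrap())
    }
}

fn main() {
    let hostname = Watched::new(String::from("start9"));
    assert_eq!(hostname.peek(|h| h.clone()), "start9");
    hostname.mutate(|h| h.push_str("-server"));
    assert_eq!(hostname.peek(|h| h.len()), 13);
    println!("ok");
}
```

The closure shape makes it structurally impossible to hold the lock across an await, which is a common source of deadlocks with `RwLock::read().await` guards.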


@@ -3,7 +3,7 @@ use std::collections::BTreeMap;
 use chrono::{DateTime, Utc};
 use models::{HostId, PackageId};
 use reqwest::Url;
-use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
+use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};

 use crate::context::CliContext;


@@ -4,10 +4,10 @@ use openssl::x509::X509;
 use patch_db::Value;
 use serde::{Deserialize, Serialize};
 use ssh_key::private::Ed25519Keypair;
-use torut::onion::TorSecretKeyV3;

 use crate::account::AccountInfo;
-use crate::hostname::{generate_hostname, generate_id, Hostname};
+use crate::hostname::{Hostname, generate_hostname, generate_id};
+use crate::net::tor::TorSecretKey;
 use crate::prelude::*;
 use crate::util::crypto::ed25519_expand_key;
 use crate::util::serde::{Base32, Base64, Pem};
@@ -36,7 +36,7 @@ impl<'de> Deserialize<'de> for OsBackup {
             v => {
                 return Err(serde::de::Error::custom(&format!(
                     "Unknown backup version {v}"
-                )))
+                )));
             }
         })
     }
@@ -85,8 +85,11 @@ impl OsBackupV0 {
                 &mut ssh_key::rand_core::OsRng::default(),
                 ssh_key::Algorithm::Ed25519,
             )?,
-            tor_keys: vec![TorSecretKeyV3::from(self.tor_key.0)],
-            compat_s9pk_key: ed25519_dalek::SigningKey::generate(
+            tor_keys: TorSecretKey::from_bytes(self.tor_key.0)
+                .ok()
+                .into_iter()
+                .collect(),
+            developer_key: ed25519_dalek::SigningKey::generate(
                 &mut ssh_key::rand_core::OsRng::default(),
             ),
         },
@@ -116,8 +119,11 @@ impl OsBackupV1 {
             root_ca_key: self.root_ca_key.0,
             root_ca_cert: self.root_ca_cert.0,
             ssh_key: ssh_key::PrivateKey::from(Ed25519Keypair::from_seed(&self.net_key.0)),
-            tor_keys: vec![TorSecretKeyV3::from(ed25519_expand_key(&self.net_key.0))],
-            compat_s9pk_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
+            tor_keys: TorSecretKey::from_bytes(ed25519_expand_key(&self.net_key.0))
+                .ok()
+                .into_iter()
+                .collect(),
+            developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
         },
         ui: self.ui,
     }
@@ -134,7 +140,7 @@ struct OsBackupV2 {
     root_ca_key: Pem<PKey<Private>>,                 // PEM Encoded OpenSSL Key
     root_ca_cert: Pem<X509>,                         // PEM Encoded OpenSSL X509 Certificate
     ssh_key: Pem<ssh_key::PrivateKey>,               // PEM Encoded OpenSSH Key
-    tor_keys: Vec<TorSecretKeyV3>,                   // Base64 Encoded Ed25519 Expanded Secret Key
+    tor_keys: Vec<TorSecretKey>,                     // Base64 Encoded Ed25519 Expanded Secret Key
     compat_s9pk_key: Pem<ed25519_dalek::SigningKey>, // PEM Encoded ED25519 Key
     ui: Value,                                       // JSON Value
 }
@@ -149,7 +155,7 @@ impl OsBackupV2 {
             root_ca_cert: self.root_ca_cert.0,
             ssh_key: self.ssh_key.0,
             tor_keys: self.tor_keys,
-            compat_s9pk_key: self.compat_s9pk_key.0,
+            developer_key: self.compat_s9pk_key.0,
         },
         ui: self.ui,
     }
@@ -162,7 +168,7 @@ impl OsBackupV2 {
         root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
         ssh_key: Pem(backup.account.ssh_key.clone()),
         tor_keys: backup.account.tor_keys.clone(),
-        compat_s9pk_key: Pem(backup.account.compat_s9pk_key.clone()),
+        compat_s9pk_key: Pem(backup.account.developer_key.clone()),
         ui: backup.ui.clone(),
     }
 }
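Both legacy-backup projections now build `tor_keys` as `TorSecretKey::from_bytes(..).ok().into_iter().collect()`: a key that fails to parse yields an empty list instead of failing the whole restore. A std-only illustration of that `Result -> Option -> Vec` funnel, with a hypothetical parser standing in for `TorSecretKey::from_bytes`:

```rust
// Hypothetical parser standing in for TorSecretKey::from_bytes:
// here it just requires exactly 8 bytes and decodes them as a u64.
fn parse_key(bytes: &[u8]) -> Result<u64, String> {
    let arr: [u8; 8] = bytes.try_into().map_err(|_| "bad length".to_string())?;
    Ok(u64::from_le_bytes(arr))
}

// Result -> Option -> zero-or-one-element Vec: a bad key degrades to
// an empty list rather than aborting the caller.
fn keys_from(bytes: &[u8]) -> Vec<u64> {
    parse_key(bytes).ok().into_iter().collect()
}

fn main() {
    assert_eq!(keys_from(&[1, 0, 0, 0, 0, 0, 0, 0]), vec![1u64]);
    assert!(keys_from(&[1, 2, 3]).is_empty());
    println!("ok");
}
```

This trades an error for silent key loss, which seems deliberate for migrating very old backups: the restore proceeds, and a missing onion key can be regenerated.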


@@ -2,7 +2,7 @@ use std::collections::BTreeMap;
 use std::sync::Arc;

 use clap::Parser;
-use futures::{stream, StreamExt};
+use futures::{StreamExt, stream};
 use models::PackageId;
 use patch_db::json_ptr::ROOT;
 use serde::{Deserialize, Serialize};
@@ -11,6 +11,7 @@ use tracing::instrument;
 use ts_rs::TS;

 use super::target::BackupTargetId;
+use crate::PLATFORM;
 use crate::backup::os::OsBackup;
 use crate::context::setup::SetupResult;
 use crate::context::{RpcContext, SetupContext};
@@ -26,7 +27,6 @@ use crate::service::service_map::DownloadInstallFuture;
 use crate::setup::SetupExecuteProgress;
 use crate::system::sync_kiosk;
 use crate::util::serde::IoFormat;
-use crate::PLATFORM;

 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]


@@ -4,17 +4,17 @@ use std::path::{Path, PathBuf};
 use clap::Parser;
 use color_eyre::eyre::eyre;
 use imbl_value::InternedString;
-use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
+use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;

 use super::{BackupTarget, BackupTargetId};
 use crate::context::{CliContext, RpcContext};
 use crate::db::model::DatabaseModel;
-use crate::disk::mount::filesystem::cifs::Cifs;
 use crate::disk::mount::filesystem::ReadOnly;
+use crate::disk::mount::filesystem::cifs::Cifs;
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
-use crate::disk::util::{recovery_info, StartOsRecoveryInfo};
+use crate::disk::util::{StartOsRecoveryInfo, recovery_info};
 use crate::prelude::*;
 use crate::util::serde::KeyVal;

View File

@@ -2,15 +2,15 @@ use std::collections::BTreeMap;
 use std::path::{Path, PathBuf};
 
 use chrono::{DateTime, Utc};
-use clap::builder::ValueParserFactory;
 use clap::Parser;
+use clap::builder::ValueParserFactory;
 use color_eyre::eyre::eyre;
-use digest::generic_array::GenericArray;
 use digest::OutputSizeUser;
+use digest::generic_array::GenericArray;
 use exver::Version;
 use imbl_value::InternedString;
 use models::{FromStrParser, PackageId};
-use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
+use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use sha2::Sha256;
 use tokio::sync::Mutex;
@@ -27,10 +27,10 @@ use crate::disk::mount::filesystem::{FileSystem, MountType, ReadWrite};
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
 use crate::disk::util::PartitionInfo;
 use crate::prelude::*;
-use crate::util::serde::{
-    deserialize_from_str, display_serializable, serialize_display, HandlerExtSerde, WithIoFormat,
-};
 use crate::util::VersionString;
+use crate::util::serde::{
+    HandlerExtSerde, WithIoFormat, deserialize_from_str, display_serializable, serialize_display,
+};
 
 pub mod cifs;

View File

@@ -2,41 +2,64 @@ use std::collections::VecDeque;
 use std::ffi::OsString;
 use std::path::Path;
 
-#[cfg(feature = "container-runtime")]
+#[cfg(feature = "cli-container")]
 pub mod container_cli;
 pub mod deprecated;
-#[cfg(feature = "registry")]
+#[cfg(any(feature = "registry", feature = "cli-registry"))]
 pub mod registry;
 #[cfg(feature = "cli")]
 pub mod start_cli;
-#[cfg(feature = "daemon")]
+#[cfg(feature = "startd")]
 pub mod start_init;
-#[cfg(feature = "daemon")]
+#[cfg(feature = "startd")]
 pub mod startd;
+#[cfg(any(feature = "tunnel", feature = "cli-tunnel"))]
+pub mod tunnel;
 
 fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
     match name {
-        #[cfg(feature = "cli")]
-        "start-cli" => Some(start_cli::main),
-        #[cfg(feature = "container-runtime")]
-        "start-cli" => Some(container_cli::main),
-        #[cfg(feature = "daemon")]
+        #[cfg(feature = "startd")]
         "startd" => Some(startd::main),
-        #[cfg(feature = "registry")]
-        "registry" => Some(registry::main),
-        "embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
-        "embassy-sdk" => Some(|_| deprecated::renamed("embassy-sdk", "start-sdk")),
+        #[cfg(feature = "startd")]
         "embassyd" => Some(|_| deprecated::renamed("embassyd", "startd")),
+        #[cfg(feature = "startd")]
         "embassy-init" => Some(|_| deprecated::removed("embassy-init")),
+        #[cfg(feature = "cli-startd")]
+        "start-cli" => Some(start_cli::main),
+        #[cfg(feature = "cli-startd")]
+        "embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
+        #[cfg(feature = "cli-startd")]
+        "embassy-sdk" => Some(|_| deprecated::removed("embassy-sdk")),
+        #[cfg(feature = "cli-container")]
+        "start-container" => Some(container_cli::main),
+        #[cfg(feature = "registry")]
+        "start-registryd" => Some(registry::main),
+        #[cfg(feature = "cli-registry")]
+        "start-registry" => Some(registry::cli),
+        #[cfg(feature = "tunnel")]
+        "start-tunneld" => Some(tunnel::main),
+        #[cfg(feature = "cli-tunnel")]
+        "start-tunnel" => Some(tunnel::cli),
         "contents" => Some(|_| {
-            #[cfg(feature = "cli")]
-            println!("start-cli");
-            #[cfg(feature = "container-runtime")]
-            println!("start-cli (container)");
-            #[cfg(feature = "daemon")]
+            #[cfg(feature = "startd")]
             println!("startd");
+            #[cfg(feature = "cli-startd")]
+            println!("start-cli");
+            #[cfg(feature = "cli-container")]
+            println!("start-container");
             #[cfg(feature = "registry")]
-            println!("registry");
+            println!("start-registryd");
+            #[cfg(feature = "cli-registry")]
+            println!("start-registry");
+            #[cfg(feature = "tunnel")]
+            println!("start-tunneld");
+            #[cfg(feature = "cli-tunnel")]
+            println!("start-tunnel");
         }),
         _ => None,
     }

View File

@@ -2,20 +2,26 @@ use std::ffi::OsString;
 
 use clap::Parser;
 use futures::FutureExt;
+use rpc_toolkit::CliApp;
 use tokio::signal::unix::signal;
 use tracing::instrument;
 
+use crate::context::CliContext;
+use crate::context::config::ClientConfig;
 use crate::net::web_server::{Acceptor, WebServer};
 use crate::prelude::*;
 use crate::registry::context::{RegistryConfig, RegistryContext};
+use crate::registry::registry_router;
 use crate::util::logger::LOGGER;
 
 #[instrument(skip_all)]
 async fn inner_main(config: &RegistryConfig) -> Result<(), Error> {
     let server = async {
         let ctx = RegistryContext::init(config).await?;
-        let mut server = WebServer::new(Acceptor::bind([ctx.listen]).await?);
-        server.serve_registry(ctx.clone());
+        let server = WebServer::new(
+            Acceptor::bind([ctx.listen]).await?,
+            registry_router(ctx.clone()),
+        );
 
         let mut shutdown_recv = ctx.shutdown.subscribe();
@@ -85,3 +91,30 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
         }
     }
 }
+
+pub fn cli(args: impl IntoIterator<Item = OsString>) {
+    LOGGER.enable();
+
+    if let Err(e) = CliApp::new(
+        |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
+        crate::registry::registry_api(),
+    )
+    .run(args)
+    {
+        match e.data {
+            Some(serde_json::Value::String(s)) => eprintln!("{}: {}", e.message, s),
+            Some(serde_json::Value::Object(o)) => {
+                if let Some(serde_json::Value::String(s)) = o.get("details") {
+                    eprintln!("{}: {}", e.message, s);
+                    if let Some(serde_json::Value::String(s)) = o.get("debug") {
+                        tracing::debug!("{}", s)
+                    }
+                }
+            }
+            Some(a) => eprintln!("{}: {}", e.message, a),
+            None => eprintln!("{}", e.message),
+        }
+        std::process::exit(e.code);
+    }
+}

View File

@@ -3,8 +3,8 @@ use std::ffi::OsString;
 use rpc_toolkit::CliApp;
 use serde_json::Value;
 
-use crate::context::config::ClientConfig;
 use crate::context::CliContext;
+use crate::context::config::ClientConfig;
 use crate::util::logger::LOGGER;
 use crate::version::{Current, VersionT};
@@ -17,7 +17,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     if let Err(e) = CliApp::new(
         |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
-        crate::expanded_api(),
+        crate::main_api(),
     )
     .run(args)
     {

View File

@@ -1,4 +1,3 @@
-use std::path::Path;
 use std::sync::Arc;
 
 use tokio::process::Command;
@@ -7,12 +6,13 @@ use tracing::instrument;
 use crate::context::config::ServerConfig;
 use crate::context::rpc::InitRpcContextPhases;
 use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
+use crate::disk::REPAIR_DISK_PATH;
 use crate::disk::fsck::RepairStrategy;
 use crate::disk::main::DEFAULT_PASSWORD;
-use crate::disk::REPAIR_DISK_PATH;
 use crate::firmware::{check_for_firmware_update, update_firmware};
 use crate::init::{InitPhases, STANDBY_MODE_PATH};
-use crate::net::web_server::{UpgradableListener, WebServer};
+use crate::net::gateway::UpgradableListener;
+use crate::net::web_server::WebServer;
 use crate::prelude::*;
 use crate::progress::FullProgressTracker;
 use crate::shutdown::Shutdown;
@@ -38,7 +38,7 @@ async fn setup_or_init(
         let mut update_phase = handle.add_phase("Updating Firmware".into(), Some(10));
         let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
 
-        server.serve_init(init_ctx);
+        server.serve_ui_for(init_ctx);
 
         update_phase.start();
         if let Err(e) = update_firmware(firmware).await {
@@ -48,7 +48,7 @@ async fn setup_or_init(
             update_phase.complete();
             reboot_phase.start();
             return Ok(Err(Shutdown {
-                export_args: None,
+                disk_guid: None,
                 restart: true,
             }));
         }
@@ -94,7 +94,7 @@ async fn setup_or_init(
         let ctx = InstallContext::init().await?;
 
-        server.serve_install(ctx.clone());
+        server.serve_ui_for(ctx.clone());
 
         ctx.shutdown
             .subscribe()
@@ -103,7 +103,7 @@ async fn setup_or_init(
            .expect("context dropped");
 
         return Ok(Err(Shutdown {
-            export_args: None,
+            disk_guid: None,
             restart: true,
         }));
    }
@@ -114,10 +114,12 @@ async fn setup_or_init(
     {
         let ctx = SetupContext::init(server, config)?;
 
-        server.serve_setup(ctx.clone());
+        server.serve_ui_for(ctx.clone());
 
         let mut shutdown = ctx.shutdown.subscribe();
 
-        shutdown.recv().await.expect("context dropped");
+        if let Some(shutdown) = shutdown.recv().await.expect("context dropped") {
+            return Ok(Err(shutdown));
+        }
 
         tokio::task::yield_now().await;
         if let Err(e) = Command::new("killall")
@@ -136,7 +138,7 @@ async fn setup_or_init(
             return Err(Error::new(
                 eyre!("Setup mode exited before setup completed"),
                 ErrorKind::Unknown,
-            ))
+            ));
         }
     }))
 } else {
@@ -148,7 +150,7 @@ async fn setup_or_init(
     let init_phases = InitPhases::new(&handle);
     let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
 
-    server.serve_init(init_ctx);
+    server.serve_ui_for(init_ctx);
 
     async {
         disk_phase.start();
@@ -183,7 +185,7 @@ async fn setup_or_init(
         let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
         reboot_phase.start();
         return Ok(Err(Shutdown {
-            export_args: Some((disk_guid, Path::new(DATA_DIR).to_owned())),
+            disk_guid: Some(disk_guid),
             restart: true,
         }));
     }
@@ -246,7 +248,7 @@ pub async fn main(
             e,
         )?;
 
-        server.serve_diagnostic(ctx.clone());
+        server.serve_ui_for(ctx.clone());
 
         let shutdown = ctx.shutdown.subscribe().recv().await.unwrap();

View File

@@ -12,8 +12,9 @@ use tracing::instrument;
 use crate::context::config::ServerConfig;
 use crate::context::rpc::InitRpcContextPhases;
 use crate::context::{DiagnosticContext, InitContext, RpcContext};
-use crate::net::network_interface::SelfContainedNetworkInterfaceListener;
-use crate::net::web_server::{Acceptor, UpgradableListener, WebServer};
+use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
+use crate::net::static_server::refresher;
+use crate::net::web_server::{Acceptor, WebServer};
 use crate::shutdown::Shutdown;
 use crate::system::launch_metrics_task;
 use crate::util::io::append_file;
@@ -38,7 +39,7 @@ async fn inner_main(
     };
     tokio::fs::write("/run/startos/initialized", "").await?;
 
-    server.serve_main(ctx.clone());
+    server.serve_ui_for(ctx.clone());
 
     LOGGER.set_logfile(None);
     handle.complete();
@@ -47,7 +48,7 @@ async fn inner_main(
     let init_ctx = InitContext::init(config).await?;
     let handle = init_ctx.progress.clone();
     let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
 
-    server.serve_init(init_ctx);
+    server.serve_ui_for(init_ctx);
 
     let ctx = RpcContext::init(
         &server.acceptor_setter(),
@@ -63,14 +64,14 @@
         )
         .await?;
 
-        server.serve_main(ctx.clone());
+        server.serve_ui_for(ctx.clone());
 
         handle.complete();
         ctx
     };
 
     let (rpc_ctx, shutdown) = async {
-        crate::hostname::sync_hostname(&rpc_ctx.account.read().await.hostname).await?;
+        crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.clone())).await?;
 
         let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
@@ -132,8 +133,6 @@ async fn inner_main(
         .await?;
 
         rpc_ctx.shutdown().await?;
-        tracing::info!("RPC Context is dropped");
 
         Ok(shutdown)
     }
@@ -144,14 +143,15 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
     let res = {
         let rt = tokio::runtime::Builder::new_multi_thread()
-            .worker_threads(max(4, num_cpus::get()))
+            .worker_threads(max(1, num_cpus::get()))
             .enable_all()
             .build()
             .expect("failed to initialize runtime");
         let res = rt.block_on(async {
-            let mut server = WebServer::new(Acceptor::bind_upgradable(
-                SelfContainedNetworkInterfaceListener::bind(80),
-            ));
+            let mut server = WebServer::new(
+                Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
+                refresher(),
+            );
             match inner_main(&mut server, &config).await {
                 Ok(a) => {
                     server.shutdown().await;
@@ -179,7 +179,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
             e,
         )?;
 
-        server.serve_diagnostic(ctx.clone());
+        server.serve_ui_for(ctx.clone());
 
         let mut shutdown = ctx.shutdown.subscribe();

View File

@@ -0,0 +1,200 @@
+use std::ffi::OsString;
+use std::net::SocketAddr;
+use std::sync::Arc;
+use std::time::Duration;
+
+use clap::Parser;
+use futures::FutureExt;
+use helpers::NonDetachingJoinHandle;
+use rpc_toolkit::CliApp;
+use tokio::signal::unix::signal;
+use tracing::instrument;
+use visit_rs::Visit;
+
+use crate::context::CliContext;
+use crate::context::config::ClientConfig;
+use crate::net::gateway::{Bind, BindTcp};
+use crate::net::tls::TlsListener;
+use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
+use crate::prelude::*;
+use crate::tunnel::context::{TunnelConfig, TunnelContext};
+use crate::tunnel::tunnel_router;
+use crate::tunnel::web::TunnelCertHandler;
+use crate::util::logger::LOGGER;
+
+#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
+enum WebserverListener {
+    Http,
+    Https(SocketAddr),
+}
+
+impl<V: MetadataVisitor> Visit<V> for WebserverListener {
+    fn visit(&self, visitor: &mut V) -> <V as visit_rs::Visitor>::Result {
+        visitor.visit(self)
+    }
+}
+
+#[instrument(skip_all)]
+async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
+    let server = async {
+        let ctx = TunnelContext::init(config).await?;
+        let listen = ctx.listen;
+        let server = WebServer::new(
+            Acceptor::bind_map_dyn([(WebserverListener::Http, listen)]).await?,
+            tunnel_router(ctx.clone()),
+        );
+        let acceptor_setter = server.acceptor_setter();
+        let https_db = ctx.db.clone();
+        let https_thread: NonDetachingJoinHandle<()> = tokio::spawn(async move {
+            let mut sub = https_db.subscribe("/webserver".parse().unwrap()).await;
+            while {
+                while let Err(e) = async {
+                    let webserver = https_db.peek().await.into_webserver();
+                    if webserver.as_enabled().de()? {
+                        let addr = webserver.as_listen().de()?.or_not_found("listen address")?;
+                        acceptor_setter.send_if_modified(|a| {
+                            let key = WebserverListener::Https(addr);
+                            if !a.contains_key(&key) {
+                                match (|| {
+                                    Ok::<_, Error>(TlsListener::new(
+                                        BindTcp.bind(addr)?,
+                                        TunnelCertHandler {
+                                            db: https_db.clone(),
+                                            crypto_provider: Arc::new(tokio_rustls::rustls::crypto::ring::default_provider()),
+                                        },
+                                    ))
+                                })() {
+                                    Ok(l) => {
+                                        a.retain(|k, _| *k == WebserverListener::Http);
+                                        a.insert(key, l.into_dyn());
+                                        true
+                                    }
+                                    Err(e) => {
+                                        tracing::error!("error adding ssl listener: {e}");
+                                        tracing::debug!("{e:?}");
+                                        false
+                                    }
+                                }
+                            } else {
+                                false
+                            }
+                        });
+                    } else {
+                        acceptor_setter.send_if_modified(|a| {
+                            let before = a.len();
+                            a.retain(|k, _| *k == WebserverListener::Http);
+                            a.len() != before
+                        });
+                    }
+                    Ok::<_, Error>(())
+                }
+                .await
+                {
+                    tracing::error!("error updating webserver bind: {e}");
+                    tracing::debug!("{e:?}");
+                    tokio::time::sleep(Duration::from_secs(5)).await;
+                }
+                sub.recv().await.is_some()
+            } {}
+        })
+        .into();
+
+        let mut shutdown_recv = ctx.shutdown.subscribe();
+
+        let sig_handler_ctx = ctx;
+        let sig_handler: NonDetachingJoinHandle<()> = tokio::spawn(async move {
+            use tokio::signal::unix::SignalKind;
+            futures::future::select_all(
+                [
+                    SignalKind::interrupt(),
+                    SignalKind::quit(),
+                    SignalKind::terminate(),
+                ]
+                .iter()
+                .map(|s| {
+                    async move {
+                        signal(*s)
+                            .unwrap_or_else(|_| panic!("register {:?} handler", s))
+                            .recv()
+                            .await
+                    }
+                    .boxed()
+                }),
+            )
+            .await;
+            sig_handler_ctx
+                .shutdown
+                .send(())
+                .map_err(|_| ())
+                .expect("send shutdown signal");
+        })
+        .into();
+
+        shutdown_recv
+            .recv()
+            .await
+            .with_kind(crate::ErrorKind::Unknown)?;
+
+        sig_handler.wait_for_abort().await.with_kind(ErrorKind::Unknown)?;
+        https_thread.wait_for_abort().await.with_kind(ErrorKind::Unknown)?;
+
+        Ok::<_, Error>(server)
+    }
+    .await?;
+    server.shutdown().await;
+
+    Ok(())
+}
+
+pub fn main(args: impl IntoIterator<Item = OsString>) {
+    LOGGER.enable();
+
+    let config = TunnelConfig::parse_from(args).load().unwrap();
+
+    let res = {
+        let rt = tokio::runtime::Builder::new_multi_thread()
+            .enable_all()
+            .build()
+            .expect("failed to initialize runtime");
+        rt.block_on(inner_main(&config))
+    };
+
+    match res {
+        Ok(()) => (),
+        Err(e) => {
+            eprintln!("{}", e.source);
+            tracing::debug!("{:?}", e.source);
+            drop(e.source);
+            std::process::exit(e.kind as i32)
+        }
+    }
+}
+
+pub fn cli(args: impl IntoIterator<Item = OsString>) {
+    LOGGER.enable();
+
+    if let Err(e) = CliApp::new(
+        |cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
+        crate::tunnel::api::tunnel_api(),
+    )
+    .run(args)
+    {
+        match e.data {
+            Some(serde_json::Value::String(s)) => eprintln!("{}: {}", e.message, s),
+            Some(serde_json::Value::Object(o)) => {
+                if let Some(serde_json::Value::String(s)) = o.get("details") {
+                    eprintln!("{}: {}", e.message, s);
+                    if let Some(serde_json::Value::String(s)) = o.get("debug") {
+                        tracing::debug!("{}", s)
+                    }
+                }
+            }
+            Some(a) => eprintln!("{}: {}", e.message, a),
+            None => eprintln!("{}", e.message),
+        }
+        std::process::exit(e.code);
+    }
+}

View File

@@ -1,27 +1,33 @@
 use std::fs::File;
 use std::io::BufReader;
+use std::net::SocketAddr;
 use std::path::{Path, PathBuf};
 use std::sync::Arc;
 
-use cookie_store::{CookieStore, RawCookie};
+use cookie::{Cookie, Expiration, SameSite};
+use cookie_store::CookieStore;
+use http::HeaderMap;
+use imbl_value::InternedString;
 use josekit::jwk::Jwk;
 use once_cell::sync::OnceCell;
 use reqwest::Proxy;
 use reqwest_cookie_store::CookieStoreMutex;
 use rpc_toolkit::reqwest::{Client, Url};
 use rpc_toolkit::yajrc::RpcError;
-use rpc_toolkit::{call_remote_http, CallRemote, Context, Empty};
+use rpc_toolkit::{CallRemote, Context, Empty};
 use tokio::net::TcpStream;
 use tokio::runtime::Runtime;
 use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
 use tracing::instrument;
 
 use super::setup::CURRENT_SECRET;
-use crate::context::config::{local_config_path, ClientConfig};
+use crate::context::config::{ClientConfig, local_config_path};
 use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
-use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
+use crate::developer::{OS_DEVELOPER_KEY_PATH, default_developer_key_path};
+use crate::middleware::auth::AuthContext;
 use crate::prelude::*;
 use crate::rpc_continuations::Guid;
+use crate::util::io::read_file_to_string;
 
 #[derive(Debug)]
 pub struct CliContextSeed {
@@ -29,6 +35,10 @@ pub struct CliContextSeed {
     pub base_url: Url,
     pub rpc_url: Url,
     pub registry_url: Option<Url>,
+    pub registry_hostname: Option<InternedString>,
+    pub registry_listen: Option<SocketAddr>,
+    pub tunnel_addr: Option<SocketAddr>,
+    pub tunnel_listen: Option<SocketAddr>,
     pub client: Client,
     pub cookie_store: Arc<CookieStoreMutex>,
     pub cookie_path: PathBuf,
@@ -55,9 +65,8 @@ impl Drop for CliContextSeed {
             true,
         )
         .unwrap();
-        let mut store = self.cookie_store.lock().unwrap();
-        store.remove("localhost", "", "local");
-        store.save_json(&mut *writer).unwrap();
+        let store = self.cookie_store.lock().unwrap();
+        cookie_store::serde::json::save(&store, &mut *writer).unwrap();
         writer.sync_all().unwrap();
         std::fs::rename(tmp, &self.cookie_path).unwrap();
     }
@@ -85,26 +94,14 @@ impl CliContext {
                 .unwrap_or(Path::new("/"))
                 .join(".cookies.json")
         });
-        let cookie_store = Arc::new(CookieStoreMutex::new({
-            let mut store = if cookie_path.exists() {
-                CookieStore::load_json(BufReader::new(
-                    File::open(&cookie_path)
-                        .with_ctx(|_| (ErrorKind::Filesystem, cookie_path.display()))?,
-                ))
-                .map_err(|e| eyre!("{}", e))
-                .with_kind(crate::ErrorKind::Deserialization)?
-            } else {
-                CookieStore::default()
-            };
-            if let Ok(local) = std::fs::read_to_string(LOCAL_AUTH_COOKIE_PATH) {
-                store
-                    .insert_raw(
-                        &RawCookie::new("local", local),
-                        &"http://localhost".parse()?,
-                    )
-                    .with_kind(crate::ErrorKind::Network)?;
-            }
-            store
+        let cookie_store = Arc::new(CookieStoreMutex::new(if cookie_path.exists() {
+            cookie_store::serde::json::load(BufReader::new(
+                File::open(&cookie_path)
+                    .with_ctx(|_| (ErrorKind::Filesystem, cookie_path.display()))?,
+            ))
+            .unwrap_or_default()
+        } else {
+            CookieStore::default()
         }));
 
         Ok(CliContext(Arc::new(CliContextSeed {
@@ -129,9 +126,17 @@ impl CliContext {
                     Ok::<_, Error>(registry)
                 })
                 .transpose()?,
+            registry_hostname: config.registry_hostname,
+            registry_listen: config.registry_listen,
+            tunnel_addr: config.tunnel,
+            tunnel_listen: config.tunnel_listen,
             client: {
                 let mut builder = Client::builder().cookie_provider(cookie_store.clone());
-                if let Some(proxy) = config.proxy {
+                if let Some(proxy) = config.proxy.or_else(|| {
+                    config
+                        .socks_listen
+                        .and_then(|socks| format!("socks5h://{socks}").parse::<Url>().log_err())
+                }) {
                     builder =
                         builder.proxy(Proxy::all(proxy).with_kind(crate::ErrorKind::ParseUrl)?)
                 }
@@ -139,14 +144,9 @@ impl CliContext {
             },
             cookie_store,
             cookie_path,
-            developer_key_path: config.developer_key_path.unwrap_or_else(|| {
-                local_config_path()
-                    .as_deref()
-                    .unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
-                    .parent()
-                    .unwrap_or(Path::new("/"))
-                    .join("developer.key.pem")
-            }),
+            developer_key_path: config
+                .developer_key_path
+                .unwrap_or_else(default_developer_key_path),
             developer_key: OnceCell::new(),
         })))
     }
@@ -155,20 +155,26 @@ impl CliContext {
     #[instrument(skip_all)]
     pub fn developer_key(&self) -> Result<&ed25519_dalek::SigningKey, Error> {
         self.developer_key.get_or_try_init(|| {
-            if !self.developer_key_path.exists() {
-                return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-cli init` before running this command."), crate::ErrorKind::Uninitialized));
-            }
-            let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
-                &std::fs::read_to_string(&self.developer_key_path)?,
-            )
-            .with_kind(crate::ErrorKind::Pem)?;
-            let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
-                Error::new(
-                    eyre!("pkcs8 key is of incorrect length"),
-                    ErrorKind::OpenSsl,
-                )
-            })?;
-            Ok(secret.into())
+            for path in [Path::new(OS_DEVELOPER_KEY_PATH), &self.developer_key_path] {
+                if !path.exists() {
+                    continue;
+                }
+                let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
+                    &std::fs::read_to_string(path)?,
+                )
+                .with_kind(crate::ErrorKind::Pem)?;
+                let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
+                    Error::new(
+                        eyre!("pkcs8 key is of incorrect length"),
+                        ErrorKind::OpenSsl,
+                    )
+                })?;
+                return Ok(secret.into())
+            }
+            Err(Error::new(
+                eyre!("Developer Key does not exist! Please run `start-cli init-key` before running this command."),
+                crate::ErrorKind::Uninitialized
+            ))
        })
    }
@@ -185,7 +191,7 @@ impl CliContext {
                 eyre!("Cannot parse scheme from base URL"),
                 crate::ErrorKind::ParseUrl,
             )
-            .into())
+            .into());
         }
     };
     url.set_scheme(ws_scheme)
@@ -228,23 +234,28 @@ impl CliContext {
         &self,
         method: &str,
         params: Value,
-    ) -> Result<Value, RpcError>
+    ) -> Result<Value, Error>
     where
         Self: CallRemote<RemoteContext>,
     {
         <Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
             .await
+            .map_err(Error::from)
+            .with_ctx(|e| (e.kind, method))
     }
 
     pub async fn call_remote_with<RemoteContext, T>(
         &self,
         method: &str,
         params: Value,
         extra: T,
-    ) -> Result<Value, RpcError>
+    ) -> Result<Value, Error>
     where
         Self: CallRemote<RemoteContext, T>,
     {
-        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
+        <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra)
+            .await
+            .map_err(Error::from)
+            .with_ctx(|e| (e.kind, method))
     }
 }
 
 impl AsRef<Jwk> for CliContext {
@@ -274,40 +285,88 @@ impl Context for CliContext {
         )
     }
 }
+
+impl AsRef<Client> for CliContext {
+    fn as_ref(&self) -> &Client {
+        &self.client
+    }
+}
+
 impl CallRemote<RpcContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
+        if let Ok(local) = read_file_to_string(RpcContext::LOCAL_AUTH_COOKIE_PATH).await {
+            self.cookie_store
+                .lock()
+                .unwrap()
+                .insert_raw(
+                    &Cookie::build(("local", local))
+                        .domain("localhost")
+                        .expires(Expiration::Session)
+                        .same_site(SameSite::Strict)
+                        .build(),
+                    &"http://localhost".parse()?,
+                )
+                .with_kind(crate::ErrorKind::Network)?;
+        }
+        crate::middleware::signature::call_remote(
+            self,
+            self.rpc_url.clone(),
+            HeaderMap::new(),
+            self.rpc_url.host_str(),
+            method,
+            params,
+        )
+        .await
     }
 }
 
 impl CallRemote<DiagnosticContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
+        crate::middleware::signature::call_remote(
+            self,
+            self.rpc_url.clone(),
+            HeaderMap::new(),
+            self.rpc_url.host_str(),
+            method,
+            params,
+        )
+        .await
     }
 }
 
 impl CallRemote<InitContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
+        crate::middleware::signature::call_remote(
+            self,
+            self.rpc_url.clone(),
+            HeaderMap::new(),
+            self.rpc_url.host_str(),
+            method,
+            params,
+        )
+        .await
     }
 }
 
 impl CallRemote<SetupContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
+        crate::middleware::signature::call_remote(
+            self,
+            self.rpc_url.clone(),
+            HeaderMap::new(),
+            self.rpc_url.host_str(),
+            method,
+            params,
+        )
+        .await
     }
 }
 
 impl CallRemote<InstallContext> for CliContext {
     async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
-        call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
+        crate::middleware::signature::call_remote(
+            self,
+            self.rpc_url.clone(),
+            HeaderMap::new(),
+            self.rpc_url.host_str(),
+            method,
+            params,
+        )
+        .await
     }
 }
+
+#[test]
+fn test() {
+    let ctx = CliContext::init(ClientConfig::default()).unwrap();
+    ctx.runtime().unwrap().block_on(async {
+        reqwest::Client::new()
+            .get("http://example.com")
+            .send()
+            .await
+            .unwrap();
+    });
+}


@@ -3,15 +3,16 @@ use std::net::SocketAddr;
 use std::path::{Path, PathBuf};
 
 use clap::Parser;
+use imbl_value::InternedString;
 use reqwest::Url;
 use serde::de::DeserializeOwned;
 use serde::{Deserialize, Serialize};
 
+use crate::MAIN_DATA;
 use crate::disk::OsPartitionInfo;
 use crate::prelude::*;
 use crate::util::serde::IoFormat;
 use crate::version::VersionT;
-use crate::MAIN_DATA;
 
 pub const DEVICE_CONFIG_PATH: &str = "/media/startos/config/config.yaml"; // "/media/startos/config/config.yaml";
 pub const CONFIG_PATH: &str = "/etc/startos/config.yaml";
@@ -55,7 +56,6 @@ pub trait ContextConfig: DeserializeOwned + Default {
 #[derive(Debug, Default, Deserialize, Serialize, Parser)]
 #[serde(rename_all = "kebab-case")]
 #[command(rename_all = "kebab-case")]
-#[command(name = "start-cli")]
 #[command(version = crate::version::Current::default().semver().to_string())]
 pub struct ClientConfig {
     #[arg(short = 'c', long)]
@@ -64,8 +64,18 @@ pub struct ClientConfig {
     pub host: Option<Url>,
     #[arg(short = 'r', long)]
     pub registry: Option<Url>,
+    #[arg(long)]
+    pub registry_hostname: Option<InternedString>,
+    #[arg(skip)]
+    pub registry_listen: Option<SocketAddr>,
+    #[arg(short = 't', long)]
+    pub tunnel: Option<SocketAddr>,
+    #[arg(skip)]
+    pub tunnel_listen: Option<SocketAddr>,
     #[arg(short = 'p', long)]
     pub proxy: Option<Url>,
+    #[arg(skip)]
+    pub socks_listen: Option<SocketAddr>,
     #[arg(long)]
     pub cookie_path: Option<PathBuf>,
     #[arg(long)]
@@ -78,6 +88,8 @@ impl ContextConfig for ClientConfig {
     fn merge_with(&mut self, other: Self) {
         self.host = self.host.take().or(other.host);
         self.registry = self.registry.take().or(other.registry);
+        self.registry_hostname = self.registry_hostname.take().or(other.registry_hostname);
+        self.tunnel = self.tunnel.take().or(other.tunnel);
         self.proxy = self.proxy.take().or(other.proxy);
         self.cookie_path = self.cookie_path.take().or(other.cookie_path);
         self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
@@ -104,15 +116,15 @@ pub struct ServerConfig {
     #[arg(skip)]
     pub os_partitions: Option<OsPartitionInfo>,
     #[arg(long)]
-    pub tor_control: Option<SocketAddr>,
-    #[arg(long)]
-    pub tor_socks: Option<SocketAddr>,
+    pub socks_listen: Option<SocketAddr>,
     #[arg(long)]
     pub revision_cache_size: Option<usize>,
     #[arg(long)]
     pub disable_encryption: Option<bool>,
     #[arg(long)]
     pub multi_arch_s9pks: Option<bool>,
+    #[arg(long)]
+    pub developer_key_path: Option<PathBuf>,
 }
@@ -121,14 +133,14 @@ impl ContextConfig for ServerConfig {
     fn merge_with(&mut self, other: Self) {
         self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
         self.os_partitions = self.os_partitions.take().or(other.os_partitions);
-        self.tor_control = self.tor_control.take().or(other.tor_control);
-        self.tor_socks = self.tor_socks.take().or(other.tor_socks);
+        self.socks_listen = self.socks_listen.take().or(other.socks_listen);
         self.revision_cache_size = self
             .revision_cache_size
             .take()
             .or(other.revision_cache_size);
         self.disable_encryption = self.disable_encryption.take().or(other.disable_encryption);
         self.multi_arch_s9pks = self.multi_arch_s9pks.take().or(other.multi_arch_s9pks);
+        self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
     }
 }
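The `merge_with` implementations above all use the same layered-config idiom: each `Option` field keeps its current (higher-priority) value if set, and otherwise falls back to the next config source via `take().or(...)`. A minimal self-contained sketch of that pattern (the `Config` type and field names here are illustrative, not the actual StartOS types):

```rust
#[derive(Debug, Default, PartialEq)]
struct Config {
    host: Option<String>,
    proxy: Option<String>,
}

impl Config {
    // Higher-priority values win: keep `self`'s value when present,
    // otherwise fall back to `other` (the next config source).
    fn merge_with(&mut self, other: Self) {
        self.host = self.host.take().or(other.host);
        self.proxy = self.proxy.take().or(other.proxy);
    }
}

fn main() {
    // CLI flags take precedence over the config file.
    let mut cli = Config {
        host: Some("https://server.local".into()),
        proxy: None,
    };
    let file = Config {
        host: Some("https://ignored.local".into()),
        proxy: Some("socks5h://127.0.0.1:9050".into()),
    };
    cli.merge_with(file);
    assert_eq!(cli.host.as_deref(), Some("https://server.local"));
    assert_eq!(cli.proxy.as_deref(), Some("socks5h://127.0.0.1:9050"));
    println!("{cli:?}");
}
```

`take()` is what makes this work in place: it moves the current value out of `self` so `or(...)` can consume both options without cloning.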


@@ -1,15 +1,15 @@
 use std::ops::Deref;
 use std::sync::Arc;
 
-use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::Context;
+use rpc_toolkit::yajrc::RpcError;
 use tokio::sync::broadcast::Sender;
 use tracing::instrument;
 
+use crate::Error;
 use crate::context::config::ServerConfig;
 use crate::rpc_continuations::RpcContinuations;
 use crate::shutdown::Shutdown;
-use crate::Error;
 
 pub struct DiagnosticContextSeed {
     pub shutdown: Sender<Shutdown>,


@@ -6,10 +6,10 @@ use tokio::sync::broadcast::Sender;
 use tokio::sync::watch;
 use tracing::instrument;
 
+use crate::Error;
 use crate::context::config::ServerConfig;
 use crate::progress::FullProgressTracker;
 use crate::rpc_continuations::RpcContinuations;
-use crate::Error;
 
 pub struct InitContextSeed {
     pub config: ServerConfig,
@@ -25,10 +25,12 @@ impl InitContext {
     #[instrument(skip_all)]
     pub async fn init(cfg: &ServerConfig) -> Result<Self, Error> {
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
+        let mut progress = FullProgressTracker::new();
+        progress.enable_logging(true);
         Ok(Self(Arc::new(InitContextSeed {
             config: cfg.clone(),
             error: watch::channel(None).0,
-            progress: FullProgressTracker::new(),
+            progress,
             shutdown,
             rpc_continuations: RpcContinuations::new(),
         })))


@@ -5,9 +5,9 @@ use rpc_toolkit::Context;
 use tokio::sync::broadcast::Sender;
 use tracing::instrument;
 
+use crate::Error;
 use crate::net::utils::find_eth_iface;
 use crate::rpc_continuations::RpcContinuations;
-use crate::Error;
 
 pub struct InstallContextSeed {
     pub ethernet_interface: String,


@@ -1,11 +1,10 @@
 use std::collections::{BTreeMap, BTreeSet};
 use std::ffi::OsStr;
 use std::future::Future;
-use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};
 use std::ops::Deref;
 use std::path::{Path, PathBuf};
-use std::sync::atomic::{AtomicBool, Ordering};
 use std::sync::Arc;
+use std::sync::atomic::{AtomicBool, Ordering};
 use std::time::Duration;
 
 use chrono::{TimeDelta, Utc};
@@ -18,35 +17,37 @@ use models::{ActionId, PackageId};
 use reqwest::{Client, Proxy};
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{CallRemote, Context, Empty};
-use tokio::sync::{broadcast, oneshot, watch, Mutex, RwLock};
+use tokio::sync::{RwLock, broadcast, oneshot, watch};
 use tokio::time::Instant;
 use tracing::instrument;
 
 use super::setup::CURRENT_SECRET;
+use crate::DATA_DIR;
 use crate::account::AccountInfo;
 use crate::auth::Sessions;
 use crate::context::config::ServerConfig;
+use crate::db::model::Database;
 use crate::db::model::package::TaskSeverity;
-use crate::db::model::Database;
 use crate::disk::OsPartitionInfo;
-use crate::init::{check_time_is_synchronized, InitResult};
+use crate::init::{InitResult, check_time_is_synchronized};
 use crate::install::PKG_ARCHIVE_DIR;
-use crate::lxc::{ContainerId, LxcContainer, LxcManager};
+use crate::lxc::LxcManager;
+use crate::net::gateway::UpgradableListener;
 use crate::net::net_controller::{NetController, NetService};
+use crate::net::socks::DEFAULT_SOCKS_LISTEN;
 use crate::net::utils::{find_eth_iface, find_wifi_iface};
-use crate::net::web_server::{UpgradableListener, WebServerAcceptorSetter};
+use crate::net::web_server::WebServerAcceptorSetter;
 use crate::net::wifi::WpaCli;
 use crate::prelude::*;
 use crate::progress::{FullProgressTracker, PhaseProgressTrackerHandle};
 use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
+use crate::service::ServiceMap;
 use crate::service::action::update_tasks;
 use crate::service::effects::callbacks::ServiceCallbacks;
-use crate::service::ServiceMap;
 use crate::shutdown::Shutdown;
 use crate::util::io::delete_file;
 use crate::util::lshw::LshwDevice;
-use crate::util::sync::{SyncMutex, Watch};
+use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
-use crate::DATA_DIR;
 
 pub struct RpcContextSeed {
     is_closed: AtomicBool,
@@ -57,7 +58,7 @@ pub struct RpcContextSeed {
     pub ephemeral_sessions: SyncMutex<Sessions>,
     pub db: TypedPatchDb<Database>,
     pub sync_db: watch::Sender<u64>,
-    pub account: RwLock<AccountInfo>,
+    pub account: SyncRwLock<AccountInfo>,
     pub net_controller: Arc<NetController>,
     pub os_net_service: NetService,
     pub s9pk_arch: Option<&'static str>,
@@ -65,7 +66,6 @@ pub struct RpcContextSeed {
     pub cancellable_installs: SyncMutex<BTreeMap<PackageId, oneshot::Sender<()>>>,
     pub metrics_cache: Watch<Option<crate::system::Metrics>>,
     pub shutdown: broadcast::Sender<Option<Shutdown>>,
-    pub tor_socks: SocketAddr,
     pub lxc_manager: Arc<LxcManager>,
     pub open_authed_continuations: OpenAuthedContinuations<Option<InternedString>>,
     pub rpc_continuations: RpcContinuations,
@@ -75,12 +75,11 @@ pub struct RpcContextSeed {
     pub client: Client,
     pub start_time: Instant,
     pub crons: SyncMutex<BTreeMap<Guid, NonDetachingJoinHandle<()>>>,
-    // #[cfg(feature = "dev")]
-    pub dev: Dev,
 }
 
-pub struct Dev {
-    pub lxc: Mutex<BTreeMap<ContainerId, LxcContainer>>,
+impl Drop for RpcContextSeed {
+    fn drop(&mut self) {
+        tracing::info!("RpcContext is dropped");
+    }
 }
 
 pub struct Hardware {
@@ -138,10 +137,7 @@ impl RpcContext {
             run_migrations,
         }: InitRpcContextPhases,
     ) -> Result<Self, Error> {
-        let tor_proxy = config.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
-            Ipv4Addr::new(127, 0, 0, 1),
-            9050,
-        )));
+        let socks_proxy = config.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN);
 
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
 
         load_db.start();
@@ -163,18 +159,9 @@ impl RpcContext {
         {
             (net_ctrl, os_net_service)
         } else {
-            let net_ctrl = Arc::new(
-                NetController::init(
-                    db.clone(),
-                    config
-                        .tor_control
-                        .unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
-                    tor_proxy,
-                    &account.hostname,
-                )
-                .await?,
-            );
-            webserver.try_upgrade(|a| net_ctrl.net_iface.upgrade_listener(a))?;
+            let net_ctrl =
+                Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
+            webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
             let os_net_service = net_ctrl.os_bindings().await?;
             (net_ctrl, os_net_service)
         };
@@ -183,7 +170,7 @@ impl RpcContext {
         let services = ServiceMap::default();
         let metrics_cache = Watch::<Option<crate::system::Metrics>>::new(None);
-        let tor_proxy_url = format!("socks5h://{tor_proxy}");
+        let socks_proxy_url = format!("socks5h://{socks_proxy}");
         let crons = SyncMutex::new(BTreeMap::new());
@@ -238,7 +225,7 @@ impl RpcContext {
             ephemeral_sessions: SyncMutex::new(Sessions::new()),
             sync_db: watch::Sender::new(db.sequence().await),
             db,
-            account: RwLock::new(account),
+            account: SyncRwLock::new(account),
             callbacks: net_controller.callbacks.clone(),
             net_controller,
             os_net_service,
@@ -251,7 +238,6 @@ impl RpcContext {
             cancellable_installs: SyncMutex::new(BTreeMap::new()),
             metrics_cache,
             shutdown,
-            tor_socks: tor_proxy,
             lxc_manager: Arc::new(LxcManager::new()),
             open_authed_continuations: OpenAuthedContinuations::new(),
             rpc_continuations: RpcContinuations::new(),
@@ -267,21 +253,11 @@ impl RpcContext {
                 })?,
             ),
             client: Client::builder()
-                .proxy(Proxy::custom(move |url| {
-                    if url.host_str().map_or(false, |h| h.ends_with(".onion")) {
-                        Some(tor_proxy_url.clone())
-                    } else {
-                        None
-                    }
-                }))
+                .proxy(Proxy::all(socks_proxy_url)?)
                 .build()
                 .with_kind(crate::ErrorKind::ParseUrl)?,
             start_time: Instant::now(),
             crons,
-            // #[cfg(feature = "dev")]
-            dev: Dev {
-                lxc: Mutex::new(BTreeMap::new()),
-            },
         });
 
         let res = Self(seed.clone());
@@ -298,7 +274,7 @@ impl RpcContext {
         self.crons.mutate(|c| std::mem::take(c));
         self.services.shutdown_all().await?;
         self.is_closed.store(true, Ordering::SeqCst);
-        tracing::info!("RPC Context is shutdown");
+        tracing::info!("RpcContext is shutdown");
         Ok(())
     }
@@ -507,6 +483,11 @@ impl RpcContext {
         <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
     }
 }
 
+impl AsRef<Client> for RpcContext {
+    fn as_ref(&self) -> &Client {
+        &self.client
+    }
+}
+
 impl AsRef<Jwk> for RpcContext {
     fn as_ref(&self) -> &Jwk {
         &CURRENT_SECRET
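The client construction above changes proxying strategy: the old code used `reqwest::Proxy::custom` to route only `.onion` hosts through the Tor SOCKS proxy, while the new code routes all traffic through the configured SOCKS listener with `Proxy::all`. The selection logic being deleted is just a host-suffix check, which can be sketched without `reqwest` (function and variable names here are illustrative):

```rust
// Decide whether a request host should be sent through the Tor SOCKS proxy.
// Mirrors the shape of the removed `Proxy::custom` closure: only `.onion`
// hosts were proxied; everything else connected directly (`None`).
fn proxy_for_host(host: Option<&str>, tor_proxy_url: &str) -> Option<String> {
    if host.map_or(false, |h| h.ends_with(".onion")) {
        Some(tor_proxy_url.to_string())
    } else {
        None
    }
}

fn main() {
    let tor = "socks5h://127.0.0.1:9050";
    assert_eq!(
        proxy_for_host(Some("expyuzz4wqqyqhjn.onion"), tor),
        Some(tor.to_string())
    );
    assert_eq!(proxy_for_host(Some("example.com"), tor), None);
    assert_eq!(proxy_for_host(None, tor), None);
    println!("ok");
}
```

The `socks5h://` scheme matters in both versions: the `h` suffix asks the proxy to resolve hostnames, which is required for `.onion` addresses since they cannot be resolved by ordinary DNS.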


@@ -10,23 +10,25 @@ use josekit::jwk::Jwk;
 use patch_db::PatchDb;
 use rpc_toolkit::Context;
 use serde::{Deserialize, Serialize};
-use tokio::sync::broadcast::Sender;
 use tokio::sync::OnceCell;
+use tokio::sync::broadcast::Sender;
 use tracing::instrument;
 use ts_rs::TS;
 
+use crate::MAIN_DATA;
 use crate::account::AccountInfo;
+use crate::context::RpcContext;
 use crate::context::config::ServerConfig;
-use crate::context::RpcContext;
 use crate::disk::OsPartitionInfo;
 use crate::hostname::Hostname;
-use crate::net::web_server::{UpgradableListener, WebServer, WebServerAcceptorSetter};
+use crate::net::gateway::UpgradableListener;
+use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
 use crate::prelude::*;
 use crate::progress::FullProgressTracker;
 use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
 use crate::setup::SetupProgress;
+use crate::shutdown::Shutdown;
 use crate::util::net::WebSocketExt;
-use crate::MAIN_DATA;
 
 lazy_static::lazy_static! {
     pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {
@@ -54,7 +56,7 @@ impl TryFrom<&AccountInfo> for SetupResult {
             tor_addresses: value
                 .tor_keys
                 .iter()
-                .map(|tor_key| format!("https://{}", tor_key.public().get_onion_address()))
+                .map(|tor_key| format!("https://{}", tor_key.onion_address()))
                 .collect(),
             hostname: value.hostname.clone(),
             lan_address: value.hostname.lan_address(),
@@ -71,7 +73,8 @@ pub struct SetupContextSeed {
     pub progress: FullProgressTracker,
     pub task: OnceCell<NonDetachingJoinHandle<()>>,
     pub result: OnceCell<Result<(SetupResult, RpcContext), Error>>,
-    pub shutdown: Sender<()>,
+    pub disk_guid: OnceCell<Arc<String>>,
+    pub shutdown: Sender<Option<Shutdown>>,
     pub rpc_continuations: RpcContinuations,
 }
@@ -84,6 +87,8 @@ impl SetupContext {
         config: &ServerConfig,
     ) -> Result<Self, Error> {
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
+        let mut progress = FullProgressTracker::new();
+        progress.enable_logging(true);
         Ok(Self(Arc::new(SetupContextSeed {
             webserver: webserver.acceptor_setter(),
             config: config.clone(),
@@ -94,9 +99,10 @@ impl SetupContext {
                 )
             })?,
             disable_encryption: config.disable_encryption.unwrap_or(false),
-            progress: FullProgressTracker::new(),
+            progress,
             task: OnceCell::new(),
             result: OnceCell::new(),
+            disk_guid: OnceCell::new(),
             shutdown,
             rpc_continuations: RpcContinuations::new(),
         })))


@@ -5,10 +5,10 @@ use serde::{Deserialize, Serialize};
 use tracing::instrument;
 use ts_rs::TS;
 
+use crate::Error;
 use crate::context::RpcContext;
 use crate::prelude::*;
 use crate::rpc_continuations::Guid;
-use crate::Error;
 
 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]


@@ -1,6 +1,7 @@
 pub mod model;
 pub mod prelude;
 
+use std::panic::UnwindSafe;
 use std::path::PathBuf;
 use std::sync::Arc;
 use std::time::Duration;
@@ -12,7 +13,7 @@ use itertools::Itertools;
 use patch_db::json_ptr::{JsonPointer, ROOT};
 use patch_db::{DiffPatch, Dump, Revision};
 use rpc_toolkit::yajrc::RpcError;
-use rpc_toolkit::{from_fn_async, Context, HandlerArgs, HandlerExt, ParentHandler};
+use rpc_toolkit::{Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tokio::sync::mpsc::{self, UnboundedReceiver};
 use tokio::sync::watch;
@@ -23,12 +24,32 @@ use crate::context::{CliContext, RpcContext};
 use crate::prelude::*;
 use crate::rpc_continuations::{Guid, RpcContinuation};
 use crate::util::net::WebSocketExt;
-use crate::util::serde::{apply_expr, HandlerExtSerde};
+use crate::util::serde::{HandlerExtSerde, apply_expr};
 
 lazy_static::lazy_static! {
     static ref PUBLIC: JsonPointer = "/public".parse().unwrap();
 }
 
+pub trait DbAccess<T>: Sized {
+    fn access<'a>(db: &'a Model<Self>) -> &'a Model<T>;
+}
+
+pub trait DbAccessMut<T>: DbAccess<T> {
+    fn access_mut<'a>(db: &'a mut Model<Self>) -> &'a mut Model<T>;
+}
+
+pub trait DbAccessByKey<T>: Sized {
+    type Key<'a>;
+    fn access_by_key<'a>(db: &'a Model<Self>, key: Self::Key<'_>) -> Option<&'a Model<T>>;
+}
+
+pub trait DbAccessMutByKey<T>: DbAccessByKey<T> {
+    fn access_mut_by_key<'a>(
+        db: &'a mut Model<Self>,
+        key: Self::Key<'_>,
+    ) -> Option<&'a mut Model<T>>;
+}
+
 pub fn db<C: Context>() -> ParentHandler<C> {
     ParentHandler::new()
         .subcommand(
@@ -127,7 +148,7 @@ pub struct SubscribeParams {
     #[ts(type = "string | null")]
     pointer: Option<JsonPointer>,
     #[ts(skip)]
-    #[serde(rename = "__auth_session")]
+    #[serde(rename = "__Auth_session")]
     session: Option<InternedString>,
 }
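The new `DbAccess*` traits added above are plain accessor traits; the keyed variants use a generic associated type (`type Key<'a>`) so each implementor can choose its own borrowed key type. A stripped-down illustration of the same shape, without the real `Model` wrapper (all type and field names here are invented for the sketch):

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for the DbAccess / DbAccessByKey traits above,
// operating directly on values instead of Model<T> wrappers.
trait Access<T> {
    fn access(&self) -> &T;
}

trait AccessByKey<T> {
    // Generic associated type: each impl picks its own borrowed key type.
    type Key<'a>;
    fn access_by_key(&self, key: Self::Key<'_>) -> Option<&T>;
}

struct Database {
    hostname: String,
    packages: BTreeMap<String, u32>,
}

impl Access<String> for Database {
    fn access(&self) -> &String {
        &self.hostname
    }
}

impl AccessByKey<u32> for Database {
    type Key<'a> = &'a str;
    fn access_by_key(&self, key: Self::Key<'_>) -> Option<&u32> {
        self.packages.get(key)
    }
}

fn main() {
    let db = Database {
        hostname: "start-server".into(),
        packages: [("nginx".to_string(), 1)].into_iter().collect(),
    };
    // The trait bound picks which field of the database is reachable.
    assert_eq!(Access::<String>::access(&db), "start-server");
    assert_eq!(db.access_by_key("nginx"), Some(&1));
    assert_eq!(db.access_by_key("missing"), None);
    println!("ok");
}
```

The payoff of this pattern is that generic code can be written once against `DbAccess<T>` and reused for any context that can produce a `Model<T>`, rather than hard-coding paths into the database tree.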


@@ -12,6 +12,7 @@ use crate::net::forward::AvailablePorts;
 use crate::net::keys::KeyStore;
 use crate::notifications::Notifications;
 use crate::prelude::*;
+use crate::sign::AnyVerifyingKey;
 use crate::ssh::SshKeys;
 use crate::util::serde::Pem;
@@ -33,6 +34,9 @@ impl Database {
             private: Private {
                 key_store: KeyStore::new(account)?,
                 password: account.password.clone(),
+                auth_pubkeys: [AnyVerifyingKey::Ed25519((&account.developer_key).into())]
+                    .into_iter()
+                    .collect(),
                 ssh_privkey: Pem(account.ssh_key.clone()),
                 ssh_pubkeys: SshKeys::new(),
                 available_ports: AvailablePorts::new(),
@@ -40,7 +44,7 @@ impl Database {
                 notifications: Notifications::new(),
                 cifs: CifsTargets::new(),
                 package_stores: BTreeMap::new(),
-                compat_s9pk_key: Pem(account.compat_s9pk_key.clone()),
+                developer_key: Pem(account.developer_key.clone()),
             }, // TODO
         })
     }


@@ -5,8 +5,8 @@ use chrono::{DateTime, Utc};
 use exver::VersionRange;
 use imbl_value::InternedString;
 use models::{ActionId, DataUrl, HealthCheckId, HostId, PackageId, ReplayId, ServiceInterfaceId};
-use patch_db::json_ptr::JsonPointer;
 use patch_db::HasModel;
+use patch_db::json_ptr::JsonPointer;
 use reqwest::Url;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
@@ -17,7 +17,7 @@ use crate::prelude::*;
 use crate::progress::FullProgress;
 use crate::s9pk::manifest::Manifest;
 use crate::status::MainStatus;
-use crate::util::serde::{is_partial_of, Pem};
+use crate::util::serde::{Pem, is_partial_of};
 
 #[derive(Debug, Default, Deserialize, Serialize, TS)]
 #[ts(export)]
@@ -268,7 +268,7 @@ impl Model<PackageState> {
                 return Err(Error::new(
                     eyre!("could not determine package state to get manifest"),
                     ErrorKind::Database,
-                ))
+                ));
             }
         })
     }
@@ -375,7 +375,6 @@ pub struct PackageDataEntry {
     pub last_backup: Option<DateTime<Utc>>,
     pub current_dependencies: CurrentDependencies,
     pub actions: BTreeMap<ActionId, ActionMetadata>,
-    #[ts(as = "BTreeMap::<String, TaskEntry>")]
     pub tasks: BTreeMap<ReplayId, TaskEntry>,
     pub service_interfaces: BTreeMap<ServiceInterfaceId, ServiceInterface>,
     pub hosts: Hosts,


@@ -1,4 +1,4 @@
-use std::collections::BTreeMap;
+use std::collections::{BTreeMap, HashSet};
 
 use models::PackageId;
 use patch_db::{HasModel, Value};
@@ -10,6 +10,7 @@ use crate::net::forward::AvailablePorts;
 use crate::net::keys::KeyStore;
 use crate::notifications::Notifications;
 use crate::prelude::*;
+use crate::sign::AnyVerifyingKey;
 use crate::ssh::SshKeys;
 use crate::util::serde::Pem;
@@ -19,8 +20,9 @@ use crate::util::serde::Pem;
 pub struct Private {
     pub key_store: KeyStore,
     pub password: String, // argon2 hash
-    #[serde(default = "generate_compat_key")]
-    pub compat_s9pk_key: Pem<ed25519_dalek::SigningKey>,
+    pub auth_pubkeys: HashSet<AnyVerifyingKey>,
+    #[serde(default = "generate_developer_key")]
+    pub developer_key: Pem<ed25519_dalek::SigningKey>,
     pub ssh_privkey: Pem<ssh_key::PrivateKey>,
     pub ssh_pubkeys: SshKeys,
     pub available_ports: AvailablePorts,
@@ -31,7 +33,7 @@ pub struct Private {
     pub package_stores: BTreeMap<PackageId, Value>,
 }
 
-pub fn generate_compat_key() -> Pem<ed25519_dalek::SigningKey> {
+pub fn generate_developer_key() -> Pem<ed25519_dalek::SigningKey> {
     Pem(ed25519_dalek::SigningKey::generate(
         &mut ssh_key::rand_core::OsRng::default(),
     ))


@@ -1,23 +1,27 @@
use std::collections::{BTreeMap, BTreeSet}; use std::collections::{BTreeMap, BTreeSet, VecDeque};
use std::net::{IpAddr, Ipv4Addr}; use std::net::{IpAddr, Ipv4Addr, SocketAddr};
use std::sync::Arc;
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use exver::{Version, VersionRange}; use exver::{Version, VersionRange};
use imbl::{OrdMap, OrdSet};
use imbl_value::InternedString; use imbl_value::InternedString;
use ipnet::IpNet; use ipnet::IpNet;
use isocountry::CountryCode; use isocountry::CountryCode;
use itertools::Itertools; use itertools::Itertools;
use models::PackageId; use models::{GatewayId, PackageId};
use openssl::hash::MessageDigest; use openssl::hash::MessageDigest;
use patch_db::{HasModel, Value}; use patch_db::{HasModel, Value};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use ts_rs::TS; use ts_rs::TS;
use crate::account::AccountInfo; use crate::account::AccountInfo;
use crate::db::DbAccessByKey;
use crate::db::model::Database;
use crate::db::model::package::AllPackageData; use crate::db::model::package::AllPackageData;
use crate::net::acme::AcmeProvider; use crate::net::acme::AcmeProvider;
use crate::net::host::binding::{AddSslOptions, BindInfo, BindOptions, NetInfo};
use crate::net::host::Host; use crate::net::host::Host;
use crate::net::host::binding::{AddSslOptions, BindInfo, BindOptions, NetInfo};
use crate::net::utils::ipv6_is_local; use crate::net::utils::ipv6_is_local;
use crate::net::vhost::AlpnInfo; use crate::net::vhost::AlpnInfo;
use crate::prelude::*; use crate::prelude::*;
@@ -71,26 +75,25 @@ impl Public {
net: NetInfo { net: NetInfo {
assigned_port: None, assigned_port: None,
assigned_ssl_port: Some(443), assigned_ssl_port: Some(443),
public: false, private_disabled: OrdSet::new(),
public_enabled: OrdSet::new(),
}, },
}, },
)] )]
.into_iter() .into_iter()
.collect(), .collect(),
onions: account onions: account.tor_keys.iter().map(|k| k.onion_address()).collect(),
.tor_keys public_domains: BTreeMap::new(),
.iter() private_domains: BTreeSet::new(),
.map(|k| k.public().get_onion_address())
.collect(),
domains: BTreeMap::new(),
hostname_info: BTreeMap::new(), hostname_info: BTreeMap::new(),
}, },
wifi: WifiInfo { wifi: WifiInfo {
enabled: true, enabled: true,
..Default::default() ..Default::default()
}, },
network_interfaces: BTreeMap::new(), gateways: OrdMap::new(),
acme: BTreeMap::new(), acme: BTreeMap::new(),
dns: Default::default(),
}, },
status_info: ServerStatus { status_info: ServerStatus {
backup_progress: None, backup_progress: None,
@@ -120,11 +123,20 @@ impl Public {
             kiosk,
         },
         package_data: AllPackageData::default(),
-        ui: serde_json::from_str(include_str!(concat!(
-            env!("CARGO_MANIFEST_DIR"),
-            "/../../web/patchdb-ui-seed.json"
-        )))
-        .with_kind(ErrorKind::Deserialization)?,
+        ui: {
+            #[cfg(feature = "startd")]
+            {
+                serde_json::from_str(include_str!(concat!(
+                    env!("CARGO_MANIFEST_DIR"),
+                    "/../../web/patchdb-ui-seed.json"
+                )))
+                .with_kind(ErrorKind::Deserialization)?
+            }
+            #[cfg(not(feature = "startd"))]
+            {
+                Value::Null
+            }
+        },
     })
 }
 }
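The cfg-gated `ui` initializer above relies on a handy Rust detail: when complementary `#[cfg]` attributes sit on block expressions inside an outer block, whichever block survives compilation becomes the tail expression. A standalone sketch (the feature name and values here are illustrative, not the real build wiring):

```rust
// Standalone sketch of the cfg-gated initializer pattern used for `ui` above.
// The "startd" feature name and the seed values are illustrative assumptions.
fn ui_seed() -> &'static str {
    #[cfg(feature = "startd")]
    {
        // In the real code this branch embeds patchdb-ui-seed.json via include_str!.
        "{}"
    }
    #[cfg(not(feature = "startd"))]
    {
        // Without the feature, a null placeholder stands in for the UI seed.
        "null"
    }
}

fn main() {
    // Compiled without the "startd" feature, the fallback branch is the one that runs.
    println!("{}", ui_seed());
}
```

Exactly one branch exists after cfg-stripping, so the function always has a single tail expression and type-checks under either feature configuration.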
@@ -186,11 +198,24 @@ pub struct ServerInfo {
 pub struct NetworkInfo {
     pub wifi: WifiInfo,
     pub host: Host,
-    #[ts(as = "BTreeMap::<String, NetworkInterfaceInfo>")]
+    #[ts(as = "BTreeMap::<GatewayId, NetworkInterfaceInfo>")]
     #[serde(default)]
-    pub network_interfaces: BTreeMap<InternedString, NetworkInterfaceInfo>,
+    pub gateways: OrdMap<GatewayId, NetworkInterfaceInfo>,
     #[serde(default)]
     pub acme: BTreeMap<AcmeProvider, AcmeSettings>,
+    #[serde(default)]
+    pub dns: DnsSettings,
+}
+
+#[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
+#[serde(rename_all = "camelCase")]
+#[model = "Model<Self>"]
+#[ts(export)]
+pub struct DnsSettings {
+    #[ts(type = "string[]")]
+    pub dhcp_servers: VecDeque<SocketAddr>,
+    #[ts(type = "string[] | null")]
+    pub static_servers: Option<VecDeque<SocketAddr>>,
 }

 #[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel, TS)]
@@ -198,13 +223,14 @@ pub struct NetworkInfo {
 #[model = "Model<Self>"]
 #[ts(export)]
 pub struct NetworkInterfaceInfo {
-    pub inbound: Option<bool>,
-    pub outbound: Option<bool>,
-    pub ip_info: Option<IpInfo>,
+    pub name: Option<InternedString>,
+    pub public: Option<bool>,
+    pub secure: Option<bool>,
+    pub ip_info: Option<Arc<IpInfo>>,
 }
 impl NetworkInterfaceInfo {
-    pub fn inbound(&self) -> bool {
-        self.inbound.unwrap_or_else(|| {
+    pub fn public(&self) -> bool {
+        self.public.unwrap_or_else(|| {
             !self.ip_info.as_ref().map_or(true, |ip_info| {
                 let ip4s = ip_info
                     .subnets
@@ -218,11 +244,9 @@ impl NetworkInterfaceInfo {
                     })
                     .collect::<BTreeSet<_>>();
                 if !ip4s.is_empty() {
-                    return ip4s.iter().all(|ip4| {
-                        ip4.is_loopback()
-                            || (ip4.is_private() && !ip4.octets().starts_with(&[10, 59])) // reserving 10.59 for public wireguard configurations
-                            || ip4.is_link_local()
-                    });
+                    return ip4s
+                        .iter()
+                        .all(|ip4| ip4.is_loopback() || ip4.is_private() || ip4.is_link_local());
                 }
                 ip_info.subnets.iter().all(|ipnet| {
                     if let IpAddr::V6(ip6) = ipnet.addr() {
@@ -234,6 +258,14 @@ impl NetworkInterfaceInfo {
                 })
             })
     }
+    pub fn secure(&self) -> bool {
+        self.secure.unwrap_or_else(|| {
+            self.ip_info.as_ref().map_or(false, |ip_info| {
+                ip_info.device_type == Some(NetworkInterfaceType::Wireguard)
+            }) && !self.public()
+        })
+    }
 }

 #[derive(Clone, Debug, Default, PartialEq, Eq, Deserialize, Serialize, TS, HasModel)]
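The default in `NetworkInterfaceInfo::public` above can be isolated: when the operator has not set the flag, an interface whose IPv4 addresses are all loopback, private, or link-local is assumed not to be publicly reachable. A simplified, standalone approximation (the real method also inspects IPv6 subnets):

```rust
use std::net::Ipv4Addr;

// Sketch of the IPv4 half of the default "is this gateway public?" heuristic above.
// Standalone approximation, not the exact method from the diff.
fn assume_non_public(ip4s: &[Ipv4Addr]) -> bool {
    !ip4s.is_empty()
        && ip4s
            .iter()
            .all(|ip4| ip4.is_loopback() || ip4.is_private() || ip4.is_link_local())
}

fn main() {
    // RFC 1918 address: assumed non-public.
    assert!(assume_non_public(&[Ipv4Addr::new(192, 168, 1, 10)]));
    // TEST-NET-3 address stands in for a globally routable one: assumed public.
    assert!(!assume_non_public(&[Ipv4Addr::new(203, 0, 113, 7)]));
    println!("ok");
}
```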
@@ -246,19 +278,25 @@ pub struct IpInfo {
     pub scope_id: u32,
     pub device_type: Option<NetworkInterfaceType>,
     #[ts(type = "string[]")]
-    pub subnets: BTreeSet<IpNet>,
+    pub subnets: OrdSet<IpNet>,
+    #[ts(type = "string[]")]
+    pub lan_ip: OrdSet<IpAddr>,
     pub wan_ip: Option<Ipv4Addr>,
     #[ts(type = "string[]")]
-    pub ntp_servers: BTreeSet<InternedString>,
+    pub ntp_servers: OrdSet<InternedString>,
+    #[ts(type = "string[]")]
+    pub dns_servers: OrdSet<IpAddr>,
 }

-#[derive(Clone, Copy, Debug, PartialEq, Eq, Deserialize, Serialize, TS)]
+#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Deserialize, Serialize, TS)]
 #[ts(export)]
 #[serde(rename_all = "kebab-case")]
 pub enum NetworkInterfaceType {
     Ethernet,
     Wireless,
+    Bridge,
     Wireguard,
+    Loopback,
 }

 #[derive(Debug, Deserialize, Serialize, HasModel, TS)]
@@ -268,6 +306,27 @@ pub enum NetworkInterfaceType {
 pub struct AcmeSettings {
     pub contact: Vec<String>,
 }
+
+impl DbAccessByKey<AcmeSettings> for Database {
+    type Key<'a> = &'a AcmeProvider;
+    fn access_by_key<'a>(
+        db: &'a Model<Self>,
+        key: Self::Key<'_>,
+    ) -> Option<&'a Model<AcmeSettings>> {
+        db.as_public()
+            .as_server_info()
+            .as_network()
+            .as_acme()
+            .as_idx(key)
+    }
+}
+
+#[derive(Debug, Deserialize, Serialize, HasModel, TS)]
+#[serde(rename_all = "camelCase")]
+#[model = "Model<Self>"]
+#[ts(export)]
+pub struct DomainSettings {
+    pub gateway: GatewayId,
+}

 #[derive(Debug, Default, Deserialize, Serialize, HasModel, TS)]
 #[model = "Model<Self>"]

View File

@@ -1,8 +1,10 @@
 use std::collections::{BTreeMap, BTreeSet};
 use std::marker::PhantomData;
 use std::str::FromStr;
+use std::sync::Arc;

 use chrono::{DateTime, Utc};
+use imbl::OrdMap;
 pub use imbl_value::Value;
 use patch_db::value::InternedString;
 pub use patch_db::{HasModel, MutateResult, PatchDb};
@@ -166,6 +168,21 @@ impl<T> Model<Option<T>> {
     }
 }

+impl<T> Model<Arc<T>> {
+    pub fn deref(self) -> Model<T> {
+        use patch_db::ModelExt;
+        self.transmute(|a| a)
+    }
+    pub fn as_deref(&self) -> &Model<T> {
+        use patch_db::ModelExt;
+        self.transmute_ref(|a| a)
+    }
+    pub fn as_deref_mut(&mut self) -> &mut Model<T> {
+        use patch_db::ModelExt;
+        self.transmute_mut(|a| a)
+    }
+}

 pub trait Map: DeserializeOwned + Serialize {
     type Key;
     type Value;
@@ -191,11 +208,23 @@ impl<A, B> Map for BTreeMap<JsonKey<A>, B>
 where
     A: serde::Serialize + serde::de::DeserializeOwned + Ord,
     B: serde::Serialize + serde::de::DeserializeOwned,
+{
+    type Key = JsonKey<A>;
+    type Value = B;
+    fn key_str(key: &Self::Key) -> Result<impl AsRef<str>, Error> {
+        serde_json::to_string(&key.0).with_kind(ErrorKind::Serialization)
+    }
+}
+
+impl<A, B> Map for OrdMap<A, B>
+where
+    A: serde::Serialize + serde::de::DeserializeOwned + Clone + Ord + AsRef<str>,
+    B: serde::Serialize + serde::de::DeserializeOwned + Clone,
 {
     type Key = A;
     type Value = B;
     fn key_str(key: &Self::Key) -> Result<impl AsRef<str>, Error> {
-        serde_json::to_string(key).with_kind(ErrorKind::Serialization)
+        Ok(key.as_ref())
     }
 }
@@ -203,13 +232,18 @@ impl<T: Map> Model<T>
 where
     T::Value: Serialize,
 {
-    pub fn insert(&mut self, key: &T::Key, value: &T::Value) -> Result<(), Error> {
+    pub fn insert_model(
+        &mut self,
+        key: &T::Key,
+        value: Model<T::Value>,
+    ) -> Result<Option<Model<T::Value>>, Error> {
+        use patch_db::ModelExt;
         use serde::ser::Error;
-        let v = patch_db::value::to_value(value)?;
+        let v = value.into_value();
         match &mut self.value {
             Value::Object(o) => {
-                o.insert(T::key_string(key)?, v);
-                Ok(())
+                let prev = o.insert(T::key_string(key)?, v);
+                Ok(prev.map(|v| Model::from_value(v)))
             }
             v => Err(patch_db::value::Error {
                 source: patch_db::value::ErrorSource::custom(format!("expected object found {v}")),
@@ -218,6 +252,13 @@ where
             .into()),
         }
     }
+    pub fn insert(
+        &mut self,
+        key: &T::Key,
+        value: &T::Value,
+    ) -> Result<Option<Model<T::Value>>, Error> {
+        self.insert_model(key, Model::new(value)?)
+    }
     pub fn upsert<F>(&mut self, key: &T::Key, value: F) -> Result<&mut Model<T::Value>, Error>
     where
         F: FnOnce() -> Result<T::Value, Error>,
@@ -244,22 +285,6 @@ where
             .into()),
         }
     }
-    pub fn insert_model(&mut self, key: &T::Key, value: Model<T::Value>) -> Result<(), Error> {
-        use patch_db::ModelExt;
-        use serde::ser::Error;
-        let v = value.into_value();
-        match &mut self.value {
-            Value::Object(o) => {
-                o.insert(T::key_string(key)?, v);
-                Ok(())
-            }
-            v => Err(patch_db::value::Error {
-                source: patch_db::value::ErrorSource::custom(format!("expected object found {v}")),
-                kind: patch_db::value::ErrorKind::Serialization,
-            }
-            .into()),
-        }
-    }
 }

 impl<T: Map> Model<T>
@@ -424,6 +449,12 @@ impl<T> std::ops::DerefMut for JsonKey<T> {
         &mut self.0
     }
 }

+impl<T: DeserializeOwned> FromStr for JsonKey<T> {
+    type Err = Error;
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
+        serde_json::from_str(s).with_kind(ErrorKind::Deserialization)
+    }
+}

 impl<T: Serialize> Serialize for JsonKey<T> {
     fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
     where
@@ -436,7 +467,7 @@ impl<T: Serialize> Serialize for JsonKey<T> {
     }
 }

 // { "foo": "bar" } -> "{ \"foo\": \"bar\" }"
-impl<'de, T: Serialize + DeserializeOwned> Deserialize<'de> for JsonKey<T> {
+impl<'de, T: DeserializeOwned> Deserialize<'de> for JsonKey<T> {
     fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
     where
         D: serde::Deserializer<'de>,
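The reworked `insert`/`insert_model` above now return the previous value for an overwritten key instead of discarding it, mirroring the standard map contract. A simplified illustration with a plain `BTreeMap` standing in for the patch_db object model:

```rust
use std::collections::BTreeMap;

// Simplified illustration of the new insert contract above: overwriting an
// existing key hands back the previous value rather than dropping it.
fn insert_kv(map: &mut BTreeMap<String, String>, key: &str, value: &str) -> Option<String> {
    map.insert(key.to_owned(), value.to_owned())
}

fn main() {
    let mut m = BTreeMap::new();
    // Fresh key: nothing to return.
    assert!(insert_kv(&mut m, "acme", "settings-v1").is_none());
    // Overwrite: the old value comes back to the caller.
    let prev = insert_kv(&mut m, "acme", "settings-v2");
    assert_eq!(prev.as_deref(), Some("settings-v1"));
    println!("{prev:?}");
}
```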

View File

@@ -1,13 +1,14 @@
 use std::collections::BTreeMap;
+use std::path::Path;

 use imbl_value::InternedString;
 use models::PackageId;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;

+use crate::Error;
 use crate::prelude::*;
 use crate::util::PathOrUrl;
-use crate::Error;

 #[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel, TS)]
 #[model = "Model<Self>"]
@@ -24,20 +25,62 @@ impl Map for Dependencies {
     }
 }

-#[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS)]
+#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
-#[ts(export)]
 pub struct DepInfo {
     pub description: Option<String>,
     pub optional: bool,
-    pub s9pk: Option<PathOrUrl>,
+    #[serde(flatten)]
+    pub metadata: Option<MetadataSrc>,
+}
+
+impl TS for DepInfo {
+    type WithoutGenerics = Self;
+    fn decl() -> String {
+        format!("type {} = {}", Self::name(), Self::inline())
+    }
+    fn decl_concrete() -> String {
+        Self::decl()
+    }
+    fn name() -> String {
+        "DepInfo".into()
+    }
+    fn inline() -> String {
+        "{ description: string | null, optional: boolean } & MetadataSrc".into()
+    }
+    fn inline_flattened() -> String {
+        Self::inline()
+    }
+    fn visit_dependencies(v: &mut impl ts_rs::TypeVisitor)
+    where
+        Self: 'static,
+    {
+        v.visit::<MetadataSrc>()
+    }
+    fn output_path() -> Option<&'static std::path::Path> {
+        Some(Path::new("DepInfo.ts"))
+    }
+}
+
+#[derive(Clone, Debug, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub enum MetadataSrc {
+    Metadata(Metadata),
+    S9pk(Option<PathOrUrl>), // backwards compatibility
+}
+
+#[derive(Clone, Debug, Deserialize, Serialize, TS)]
+#[serde(rename_all = "camelCase")]
+#[ts(export)]
+pub struct Metadata {
+    pub title: InternedString,
+    pub icon: PathOrUrl,
 }

 #[derive(Clone, Debug, Deserialize, Serialize, HasModel, TS)]
 #[serde(rename_all = "camelCase")]
 #[model = "Model<Self>"]
-#[ts(export)]
 pub struct DependencyMetadata {
     #[ts(type = "string")]
     pub title: InternedString,

View File

@@ -1,40 +1,57 @@
-use std::fs::File;
-use std::io::Write;
-use std::path::Path;
+use std::path::{Path, PathBuf};

-use ed25519::pkcs8::EncodePrivateKey;
 use ed25519::PublicKeyBytes;
+use ed25519::pkcs8::EncodePrivateKey;
 use ed25519_dalek::{SigningKey, VerifyingKey};
+use tokio::io::AsyncWriteExt;
 use tracing::instrument;

 use crate::context::CliContext;
+use crate::context::config::local_config_path;
 use crate::prelude::*;
+use crate::util::io::create_file_mod;
 use crate::util::serde::Pem;
+pub const OS_DEVELOPER_KEY_PATH: &str = "/run/startos/developer.key.pem";
+
+pub fn default_developer_key_path() -> PathBuf {
+    local_config_path()
+        .as_deref()
+        .unwrap_or_else(|| Path::new(crate::context::config::CONFIG_PATH))
+        .parent()
+        .unwrap_or(Path::new("/"))
+        .join("developer.key.pem")
+}
+
+pub async fn write_developer_key(
+    secret: &ed25519_dalek::SigningKey,
+    path: impl AsRef<Path>,
+) -> Result<(), Error> {
+    let keypair_bytes = ed25519::KeypairBytes {
+        secret_key: secret.to_bytes(),
+        public_key: Some(PublicKeyBytes(VerifyingKey::from(secret).to_bytes())),
+    };
+    let mut file = create_file_mod(path, 0o640).await?;
+    file.write_all(
+        keypair_bytes
+            .to_pkcs8_pem(base64ct::LineEnding::default())
+            .with_kind(crate::ErrorKind::Pem)?
+            .as_bytes(),
+    )
+    .await?;
+    file.sync_all().await?;
+    Ok(())
+}
+
 #[instrument(skip_all)]
-pub fn init(ctx: CliContext) -> Result<(), Error> {
-    if !ctx.developer_key_path.exists() {
-        let parent = ctx.developer_key_path.parent().unwrap_or(Path::new("/"));
-        if !parent.exists() {
-            std::fs::create_dir_all(parent)
-                .with_ctx(|_| (crate::ErrorKind::Filesystem, parent.display().to_string()))?;
-        }
+pub async fn init(ctx: CliContext) -> Result<(), Error> {
+    if tokio::fs::metadata(OS_DEVELOPER_KEY_PATH).await.is_ok() {
+        println!("Developer key already exists at {}", OS_DEVELOPER_KEY_PATH);
+    } else if tokio::fs::metadata(&ctx.developer_key_path).await.is_err() {
         tracing::info!("Generating new developer key...");
         let secret = SigningKey::generate(&mut ssh_key::rand_core::OsRng::default());
         tracing::info!("Writing key to {}", ctx.developer_key_path.display());
-        let keypair_bytes = ed25519::KeypairBytes {
-            secret_key: secret.to_bytes(),
-            public_key: Some(PublicKeyBytes(VerifyingKey::from(&secret).to_bytes())),
-        };
-        let mut dev_key_file = File::create(&ctx.developer_key_path)
-            .with_ctx(|_| (ErrorKind::Filesystem, ctx.developer_key_path.display()))?;
-        dev_key_file.write_all(
-            keypair_bytes
-                .to_pkcs8_pem(base64ct::LineEnding::default())
-                .with_kind(crate::ErrorKind::Pem)?
-                .as_bytes(),
-        )?;
-        dev_key_file.sync_all()?;
+        write_developer_key(&secret, &ctx.developer_key_path).await?;
         println!(
             "New developer key generated at {}",
             ctx.developer_key_path.display()

View File

@@ -1,9 +1,8 @@
-use std::path::Path;
 use std::sync::Arc;

 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{
-    from_fn, from_fn_async, CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler,
+    CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler, from_fn, from_fn_async,
 };

 use crate::context::{CliContext, DiagnosticContext, RpcContext};
@@ -12,7 +11,6 @@ use crate::init::SYSTEM_REBUILD_PATH;
 use crate::prelude::*;
 use crate::shutdown::Shutdown;
 use crate::util::io::delete_file;
-use crate::DATA_DIR;

 pub fn diagnostic<C: Context>() -> ParentHandler<C> {
     ParentHandler::new()
@@ -70,10 +68,7 @@ pub fn error(ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error> {
 pub fn restart(ctx: DiagnosticContext) -> Result<(), Error> {
     ctx.shutdown
         .send(Shutdown {
-            export_args: ctx
-                .disk_guid
-                .clone()
-                .map(|guid| (guid, Path::new(DATA_DIR).to_owned())),
+            disk_guid: ctx.disk_guid.clone(),
             restart: true,
         })
         .map_err(|_| eyre!("receiver dropped"))

View File

@@ -4,9 +4,9 @@ use std::path::Path;
 use tokio::process::Command;
 use tracing::instrument;

+use crate::Error;
 use crate::disk::fsck::RequiresReboot;
 use crate::util::Invoke;
-use crate::Error;

 #[instrument(skip_all)]
 pub async fn btrfs_check_readonly(logicalname: impl AsRef<Path>) -> Result<RequiresReboot, Error> {

View File

@@ -2,13 +2,13 @@ use std::ffi::OsStr;
 use std::path::Path;

 use color_eyre::eyre::eyre;
-use futures::future::BoxFuture;
 use futures::FutureExt;
+use futures::future::BoxFuture;
 use tokio::process::Command;
 use tracing::instrument;

-use crate::disk::fsck::RequiresReboot;
 use crate::Error;
+use crate::disk::fsck::RequiresReboot;

 #[instrument(skip_all)]
 pub async fn e2fsck_preen(

View File

@@ -3,10 +3,10 @@ use std::path::Path;
 use color_eyre::eyre::eyre;
 use tokio::process::Command;

+use crate::Error;
 use crate::disk::fsck::btrfs::{btrfs_check_readonly, btrfs_check_repair};
 use crate::disk::fsck::ext4::{e2fsck_aggressive, e2fsck_preen};
 use crate::util::Invoke;
-use crate::Error;

 pub mod btrfs;
 pub mod ext4;
@@ -45,7 +45,7 @@ impl RepairStrategy {
                 return Err(Error::new(
                     eyre!("Unknown filesystem {fs}"),
                     crate::ErrorKind::DiskManagement,
-                ))
+                ));
             }
         }
     }

View File

@@ -2,13 +2,13 @@ use std::path::{Path, PathBuf};
 use itertools::Itertools;
 use lazy_format::lazy_format;
-use rpc_toolkit::{from_fn_async, CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler};
+use rpc_toolkit::{CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};

+use crate::Error;
 use crate::context::{CliContext, RpcContext};
 use crate::disk::util::DiskInfo;
-use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
-use crate::Error;
+use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};

 pub mod fsck;
 pub mod main;
@@ -96,14 +96,13 @@ fn display_disk_info(params: WithIoFormat<Empty>, args: Vec<DiskInfo>) -> Result
                 "N/A"
             },
             part.capacity,
-            if let Some(used) = part
+            &if let Some(used) = part
                 .used
                 .map(|u| format!("{:.2} GiB", u as f64 / 1024.0 / 1024.0 / 1024.0))
-                .as_ref()
             {
                 used
             } else {
-                "N/A"
+                "N/A".to_owned()
             },
             &if part.start_os.is_empty() {
                 "N/A".to_owned()

View File

@@ -10,8 +10,8 @@ use tracing::instrument;
 use super::guard::{GenericMountGuard, TmpMountGuard};
 use crate::auth::check_password;
 use crate::backup::target::BackupInfo;
-use crate::disk::mount::filesystem::backupfs::BackupFS;
 use crate::disk::mount::filesystem::ReadWrite;
+use crate::disk::mount::filesystem::backupfs::BackupFS;
 use crate::disk::mount::guard::SubPath;
 use crate::disk::util::StartOsRecoveryInfo;
 use crate::util::crypto::{decrypt_slice, encrypt_slice};

View File

@@ -11,9 +11,9 @@ use tracing::instrument;
 use ts_rs::TS;

 use super::{FileSystem, MountType, ReadOnly};
+use crate::Error;
 use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
 use crate::util::Invoke;
-use crate::Error;

 async fn resolve_hostname(hostname: &str) -> Result<IpAddr, Error> {
     if let Ok(addr) = hostname.parse() {

View File

@@ -7,8 +7,8 @@ use serde::{Deserialize, Serialize};
 use sha2::Sha256;

 use super::{FileSystem, MountType};
-use crate::util::Invoke;
 use crate::Error;
+use crate::util::Invoke;

 pub async fn mount_httpdirfs(url: &Url, mountpoint: impl AsRef<Path>) -> Result<(), Error> {
     tokio::fs::create_dir_all(mountpoint.as_ref()).await?;

View File

@@ -80,23 +80,6 @@ impl<Fs: FileSystem> FileSystem for IdMapped<Fs> {
         }
         Ok(())
     }
-    async fn mount<P: AsRef<Path> + Send>(
-        &self,
-        mountpoint: P,
-        mount_type: MountType,
-    ) -> Result<(), Error> {
-        self.pre_mount(mountpoint.as_ref()).await?;
-        Command::new("mount.next")
-            .args(
-                default_mount_command(self, mountpoint, mount_type)
-                    .await?
-                    .get_args(),
-            )
-            .invoke(ErrorKind::Filesystem)
-            .await?;
-        Ok(())
-    }
     async fn source_hash(
         &self,
     ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {

View File

@@ -2,8 +2,8 @@ use std::ffi::OsStr;
 use std::fmt::{Display, Write};
 use std::path::Path;

-use digest::generic_array::GenericArray;
 use digest::OutputSizeUser;
+use digest::generic_array::GenericArray;
 use futures::Future;
 use sha2::Sha256;
 use tokio::process::Command;
@@ -106,6 +106,7 @@ pub trait FileSystem: Send + Sync {
     }
     fn source_hash(
         &self,
-    ) -> impl Future<Output = Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error>>
-           + Send;
+    ) -> impl Future<
+        Output = Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error>,
+    > + Send;
 }

View File

@@ -21,11 +21,8 @@ impl<P0: AsRef<Path>, P1: AsRef<Path>, P2: AsRef<Path>> OverlayFs<P0, P1, P2> {
         Self { lower, upper, work }
     }
 }
-impl<
-        P0: AsRef<Path> + Send + Sync,
-        P1: AsRef<Path> + Send + Sync,
-        P2: AsRef<Path> + Send + Sync,
-    > FileSystem for OverlayFs<P0, P1, P2>
+impl<P0: AsRef<Path> + Send + Sync, P1: AsRef<Path> + Send + Sync, P2: AsRef<Path> + Send + Sync>
+    FileSystem for OverlayFs<P0, P1, P2>
 {
     fn mount_type(&self) -> Option<impl AsRef<str>> {
         Some("overlay")

View File

@@ -10,8 +10,8 @@ use tracing::instrument;
 use super::filesystem::{FileSystem, MountType, ReadOnly, ReadWrite};
 use super::util::unmount;
-use crate::util::{Invoke, Never};
 use crate::Error;
+use crate::util::{Invoke, Never};

 pub const TMP_MOUNTPOINT: &'static str = "/media/startos/tmp";
@@ -74,7 +74,7 @@ impl MountGuard {
     }
     pub async fn unmount(mut self, delete_mountpoint: bool) -> Result<(), Error> {
         if self.mounted {
-            unmount(&self.mountpoint, false).await?;
+            unmount(&self.mountpoint, !cfg!(feature = "unstable")).await?;
             if delete_mountpoint {
                 match tokio::fs::remove_dir(&self.mountpoint).await {
                     Err(e) if e.raw_os_error() == Some(39) => Ok(()), // directory not empty

View File

@@ -2,8 +2,8 @@ use std::path::Path;
 use tracing::instrument;

-use crate::util::Invoke;
 use crate::Error;
+use crate::util::Invoke;

 pub async fn is_mountpoint(path: impl AsRef<Path>) -> Result<bool, Error> {
     let is_mountpoint = tokio::process::Command::new("mountpoint")
@@ -48,7 +48,6 @@ pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
 pub async fn unmount<P: AsRef<Path>>(mountpoint: P, lazy: bool) -> Result<(), Error> {
     tracing::debug!("Unmounting {}.", mountpoint.as_ref().display());
     let mut cmd = tokio::process::Command::new("umount");
-    cmd.arg("-R");
     if lazy {
         cmd.arg("-l");
     }

View File

@@ -14,14 +14,14 @@ use serde::{Deserialize, Serialize};
 use tokio::process::Command;
 use tracing::instrument;

-use super::mount::filesystem::block_dev::BlockDev;
 use super::mount::filesystem::ReadOnly;
+use super::mount::filesystem::block_dev::BlockDev;
 use super::mount::guard::TmpMountGuard;
-use crate::disk::mount::guard::GenericMountGuard;
 use crate::disk::OsPartitionInfo;
+use crate::disk::mount::guard::GenericMountGuard;
 use crate::hostname::Hostname;
-use crate::util::serde::IoFormat;
 use crate::util::Invoke;
+use crate::util::serde::IoFormat;
 use crate::{Error, ResultExt as _};

 #[derive(Clone, Copy, Debug, Deserialize, Serialize)]
@@ -280,6 +280,9 @@ pub async fn list(os: &OsPartitionInfo) -> Result<Vec<DiskInfo>, Error> {
         .try_fold(
             BTreeMap::<PathBuf, DiskIndex>::new(),
             |mut disks, dir_entry| async move {
+                if dir_entry.file_type().await?.is_dir() {
+                    return Ok(disks);
+                }
                 if let Some(disk_path) = dir_entry.path().file_name().and_then(|s| s.to_str()) {
                     let (disk_path, part_path) = if let Some(end) = PARTITION_REGEX.find(disk_path) {
                         (

View File

@@ -6,11 +6,11 @@ use serde::{Deserialize, Serialize};
 use tokio::io::BufReader;
 use tokio::process::Command;

+use crate::PLATFORM;
 use crate::disk::fsck::RequiresReboot;
 use crate::prelude::*;
-use crate::util::io::open_file;
 use crate::util::Invoke;
-use crate::PLATFORM;
+use crate::util::io::open_file;

 /// Part of the Firmware, look there for more about
 #[derive(Debug, Clone, Deserialize, Serialize)]

View File

@@ -1,6 +1,6 @@
 use imbl_value::InternedString;
 use lazy_format::lazy_format;
-use rand::{rng, Rng};
+use rand::{Rng, rng};
 use tokio::process::Command;
 use tracing::instrument;
@@ -35,8 +35,8 @@ impl Hostname {
 pub fn generate_hostname() -> Hostname {
     let mut rng = rng();
-    let adjective = &ADJECTIVES[rng.gen_range(0..ADJECTIVES.len())];
-    let noun = &NOUNS[rng.gen_range(0..NOUNS.len())];
+    let adjective = &ADJECTIVES[rng.random_range(0..ADJECTIVES.len())];
+    let noun = &NOUNS[rng.random_range(0..NOUNS.len())];
     Hostname(InternedString::from_display(&lazy_format!(
         "{adjective}-{noun}"
     )))

View File

@@ -1,19 +1,14 @@
-use std::fs::Permissions;
 use std::io::Cursor;
-use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};
-use std::os::unix::fs::PermissionsExt;
 use std::path::Path;
 use std::sync::Arc;
 use std::time::{Duration, SystemTime};

-use axum::extract::ws::{self};
-use color_eyre::eyre::eyre;
+use axum::extract::ws;
 use const_format::formatcp;
 use futures::{StreamExt, TryStreamExt};
 use itertools::Itertools;
 use models::ResultExt;
-use rand::random;
-use rpc_toolkit::{from_fn_async, Context, Empty, HandlerArgs, HandlerExt, ParentHandler};
+use rpc_toolkit::{Context, Empty, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
 use serde::{Deserialize, Serialize};
 use tokio::process::Command;
 use tracing::instrument;
@@ -21,15 +16,17 @@ use ts_rs::TS;
 use crate::account::AccountInfo;
 use crate::context::config::ServerConfig;
-use crate::context::{CliContext, InitContext};
-use crate::db::model::public::ServerStatus;
+use crate::context::{CliContext, InitContext, RpcContext};
 use crate::db::model::Database;
-use crate::disk::mount::util::unmount;
+use crate::db::model::public::ServerStatus;
+use crate::developer::OS_DEVELOPER_KEY_PATH;
 use crate::hostname::Hostname;
-use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
+use crate::middleware::auth::AuthContext;
+use crate::net::gateway::UpgradableListener;
 use crate::net::net_controller::{NetController, NetService};
+use crate::net::socks::DEFAULT_SOCKS_LISTEN;
 use crate::net::utils::find_wifi_iface;
-use crate::net::web_server::{UpgradableListener, WebServerAcceptorSetter};
+use crate::net::web_server::WebServerAcceptorSetter;
 use crate::prelude::*;
 use crate::progress::{
     FullProgress, FullProgressTracker, PhaseProgressTrackerHandle, PhasedProgressBar, ProgressUnits,
@@ -38,10 +35,10 @@ use crate::rpc_continuations::{Guid, RpcContinuation};
 use crate::s9pk::v2::pack::{CONTAINER_DATADIR, CONTAINER_TOOL};
 use crate::ssh::SSH_DIR;
 use crate::system::{get_mem_info, sync_kiosk};
-use crate::util::io::{create_file, open_file, IOHook};
+use crate::util::io::{IOHook, open_file};
 use crate::util::lshw::lshw;
 use crate::util::net::WebSocketExt;
-use crate::util::{cpupower, Invoke};
+use crate::util::{Invoke, cpupower};
 use crate::{Error, MAIN_DATA, PACKAGE_DATA};

 pub const SYSTEM_REBUILD_PATH: &str = "/media/startos/config/system-rebuild";
@@ -167,28 +164,7 @@ pub async fn init(
     }

     local_auth.start();
-    tokio::fs::create_dir_all("/run/startos")
-        .await
-        .with_ctx(|_| (crate::ErrorKind::Filesystem, "mkdir -p /run/startos"))?;
-    if tokio::fs::metadata(LOCAL_AUTH_COOKIE_PATH).await.is_err() {
-        tokio::fs::write(
-            LOCAL_AUTH_COOKIE_PATH,
-            base64::encode(random::<[u8; 32]>()).as_bytes(),
-        )
-        .await
-        .with_ctx(|_| {
-            (
-                crate::ErrorKind::Filesystem,
-                format!("write {}", LOCAL_AUTH_COOKIE_PATH),
-            )
-        })?;
-        tokio::fs::set_permissions(LOCAL_AUTH_COOKIE_PATH, Permissions::from_mode(0o046)).await?;
-        Command::new("chown")
-            .arg("root:startos")
-            .arg(LOCAL_AUTH_COOKIE_PATH)
-            .invoke(crate::ErrorKind::Filesystem)
-            .await?;
-    }
+    RpcContext::init_auth_cookie().await?;
     local_auth.complete();

     load_database.start();
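The hunk above folds the inline auth-cookie bootstrap (mkdir, random token, restricted permissions, chown) into a single `RpcContext::init_auth_cookie()` call. The real implementation is not shown in this diff; the sketch below only illustrates the key property the old inline code had and the helper must keep: an existing cookie is never overwritten. `make_token` is a hypothetical stand-in for the original base64-encoded 32 random bytes, and the permission/ownership steps are elided.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Hedged sketch of a consolidated cookie-init helper. Assumption: the real
// RpcContext::init_auth_cookie() behaves like the removed inline code, i.e.
// it creates the parent dir, writes a fresh token only if the file is absent,
// and otherwise leaves the existing cookie untouched.
fn init_auth_cookie(path: &Path, make_token: impl Fn() -> String) -> io::Result<String> {
    if let Some(dir) = path.parent() {
        fs::create_dir_all(dir)?; // equivalent of `mkdir -p /run/startos`
    }
    match fs::read_to_string(path) {
        Ok(existing) => Ok(existing), // already initialized: keep the cookie stable
        Err(_) => {
            let token = make_token();
            fs::write(path, &token)?;
            // the original also set_permissions()'d and chown'd root:startos here
            Ok(token)
        }
    }
}

fn main() -> io::Result<()> {
    let path = std::env::temp_dir().join("startos-auth-cookie-demo");
    let _ = fs::remove_file(&path);
    let first = init_auth_cookie(&path, || "token-a".to_string())?;
    // a second call must not rotate the existing cookie
    let second = init_auth_cookie(&path, || "token-b".to_string())?;
    assert_eq!(first, "token-a");
    assert_eq!(second, "token-a");
    fs::remove_file(&path)?;
    println!("cookie init is idempotent");
    Ok(())
}
```

The idempotency matters because `init` runs on every boot: rotating the cookie on each run would invalidate local CLI sessions that read it.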
@@ -199,6 +175,16 @@ pub async fn init(
     load_database.complete();

     load_ssh_keys.start();
+    crate::developer::write_developer_key(
+        &peek.as_private().as_developer_key().de()?.0,
+        OS_DEVELOPER_KEY_PATH,
+    )
+    .await?;
+    Command::new("chown")
+        .arg("root:startos")
+        .arg(OS_DEVELOPER_KEY_PATH)
+        .invoke(ErrorKind::Filesystem)
+        .await?;
     crate::ssh::sync_keys(
         &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
         &peek.as_private().as_ssh_privkey().de()?,
@@ -206,6 +192,13 @@ pub async fn init(
         SSH_DIR,
     )
     .await?;
+    crate::ssh::sync_keys(
+        &Hostname(peek.as_public().as_server_info().as_hostname().de()?),
+        &peek.as_private().as_ssh_privkey().de()?,
+        &Default::default(),
+        "/root/.ssh",
+    )
+    .await?;
     load_ssh_keys.complete();

     let account = AccountInfo::load(&peek)?;
@@ -214,17 +207,12 @@ pub async fn init(
     let net_ctrl = Arc::new(
         NetController::init(
             db.clone(),
-            cfg.tor_control
-                .unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
-            cfg.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
-                Ipv4Addr::new(127, 0, 0, 1),
-                9050,
-            ))),
             &account.hostname,
-            cfg.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN),
         )
         .await?,
     );
-    webserver.try_upgrade(|a| net_ctrl.net_iface.upgrade_listener(a))?;
+    webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
     let os_net_service = net_ctrl.os_bindings().await?;
     start_net.complete();
@@ -260,7 +248,8 @@ pub async fn init(
     Command::new("killall")
         .arg("journalctl")
        .invoke(crate::ErrorKind::Journald)
-        .await?;
+        .await
+        .log_err();
     mount_logs.complete();
     tokio::io::copy(
         &mut open_file("/run/startos/init.log").await?,
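This hunk downgrades a failed `killall journalctl` from a fatal `?` to a logged-and-ignored error via `.log_err()`, so init no longer aborts when there is no journalctl process to kill. The `log_err` helper itself is not shown in this diff; a hedged sketch of what such an extension on `Result` can look like:

```rust
// Assumption: `log_err` is an extension method that logs the error and
// discards it, returning the success value as an Option instead of
// propagating with `?`. The real StartOS helper may differ in detail.
trait LogErrExt {
    fn log_err(self) -> Option<()>;
}

impl<E: std::fmt::Display> LogErrExt for Result<(), E> {
    fn log_err(self) -> Option<()> {
        match self {
            Ok(v) => Some(v),
            Err(e) => {
                // real code would use tracing::error! rather than eprintln!
                eprintln!("non-fatal error: {e}");
                None
            }
        }
    }
}

fn main() {
    let ok: Result<(), String> = Ok(());
    let bad: Result<(), String> = Err("killall: no process found".into());
    assert_eq!(ok.log_err(), Some(()));
    assert_eq!(bad.log_err(), None); // error logged, caller keeps going
    println!("done");
}
```

The design choice is the point of the hunk: killing a stray `journalctl` is best-effort cleanup, so its failure should be observable in logs but must not fail boot.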
@@ -508,14 +497,7 @@ pub async fn init_progress(ctx: InitContext) -> Result<InitProgressRes, Error> {
         }
     );
-    if let Err(e) = ws
-        .close_result(res.map(|_| "complete").map_err(|e| {
-            tracing::error!("error in init progress websocket: {e}");
-            tracing::debug!("{e:?}");
-            e
-        }))
-        .await
-    {
+    if let Err(e) = ws.close_result(res.map(|_| "complete")).await {
         tracing::error!("error closing init progress websocket: {e}");
         tracing::debug!("{e:?}");
     }


@@ -4,18 +4,17 @@ use std::time::Duration;
 use axum::extract::ws;
 use clap::builder::ValueParserFactory;
-use clap::{value_parser, CommandFactory, FromArgMatches, Parser};
+use clap::{CommandFactory, FromArgMatches, Parser, value_parser};
 use color_eyre::eyre::eyre;
 use exver::VersionRange;
-use futures::{AsyncWriteExt, StreamExt};
+use futures::StreamExt;
-use imbl_value::{json, InternedString};
+use imbl_value::{InternedString, json};
 use itertools::Itertools;
 use models::{FromStrParser, VersionString};
-use reqwest::header::{HeaderMap, CONTENT_LENGTH};
 use reqwest::Url;
-use rpc_toolkit::yajrc::RpcError;
+use reqwest::header::{CONTENT_LENGTH, HeaderMap};
 use rpc_toolkit::HandlerArgs;
-use rustyline_async::ReadlineEvent;
+use rpc_toolkit::yajrc::RpcError;
 use serde::{Deserialize, Serialize};
 use tokio::sync::oneshot;
 use tokio_tungstenite::tungstenite::protocol::frame::coding::CloseCode;
@@ -31,9 +30,10 @@ use crate::registry::package::get::GetPackageResponse;
 use crate::rpc_continuations::{Guid, RpcContinuation};
 use crate::s9pk::manifest::PackageId;
 use crate::upload::upload;
+use crate::util::Never;
 use crate::util::io::open_file;
 use crate::util::net::WebSocketExt;
-use crate::util::Never;
+use crate::util::tui::choose;

 pub const PKG_ARCHIVE_DIR: &str = "package-data/archive";
 pub const PKG_PUBLIC_DIR: &str = "package-data/public";
@@ -175,7 +175,7 @@ pub async fn install(
 #[serde(rename_all = "camelCase")]
 pub struct SideloadParams {
     #[ts(skip)]
-    #[serde(rename = "__auth_session")]
+    #[serde(rename = "__Auth_session")]
     session: Option<InternedString>,
 }
@@ -253,7 +253,7 @@ pub async fn sideload(
         .await;
     tokio::spawn(async move {
         if let Err(e) = async {
-            let key = ctx.db.peek().await.into_private().into_compat_s9pk_key();
+            let key = ctx.db.peek().await.into_private().into_developer_key();
             ctx.services
                 .install(
@@ -483,45 +483,19 @@ pub async fn cli_install(
     let version = if packages.best.len() == 1 {
         packages.best.pop_first().map(|(k, _)| k).unwrap()
     } else {
-        println!("Multiple flavors of {id} found. Please select one of the following versions to install:");
-        let version;
-        loop {
-            let (mut read, mut output) = rustyline_async::Readline::new("> ".into())
-                .with_kind(ErrorKind::Filesystem)?;
-            for (idx, version) in packages.best.keys().enumerate() {
-                output
-                    .write_all(format!(" {}) {}\n", idx + 1, version).as_bytes())
-                    .await?;
-                read.add_history_entry(version.to_string());
-            }
-            if let ReadlineEvent::Line(line) = read.readline().await? {
-                let trimmed = line.trim();
-                match trimmed.parse() {
-                    Ok(v) => {
-                        if let Some((k, _)) = packages.best.remove_entry(&v) {
-                            version = k;
-                            break;
-                        }
-                    }
-                    Err(_) => match trimmed.parse::<usize>() {
-                        Ok(i) if (1..=packages.best.len()).contains(&i) => {
-                            version = packages.best.keys().nth(i - 1).unwrap().clone();
-                            break;
-                        }
-                        _ => (),
-                    },
-                }
-                eprintln!("invalid selection: {trimmed}");
-                println!("Please select one of the following versions to install:");
-            } else {
-                return Err(Error::new(
-                    eyre!("Could not determine precise version to install"),
-                    ErrorKind::InvalidRequest,
-                )
-                .into());
-            }
-        }
-        version
+        let versions = packages.best.keys().collect::<Vec<_>>();
+        let version = choose(
+            &format!(
+                concat!(
+                    "Multiple flavors of {id} found. ",
+                    "Please select one of the following versions to install:"
+                ),
+                id = id
+            ),
+            &versions,
+        )
+        .await?;
+        (*version).clone()
     };

     ctx.call_remote::<RpcContext>(
         &method.join("."),
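The removed readline loop accepted either a full version string or a 1-based index into the printed menu; the new `choose` helper from `crate::util::tui` presumably handles that interaction internally. The old selection rule can be sketched as a pure function (names here are illustrative, not from the codebase):

```rust
// Sketch of the selection rule the deleted loop implemented: an input is
// either an exact match on one of the options (e.g. a full version string)
// or a 1-based index into the menu as printed; anything else is rejected.
fn parse_selection<'a>(input: &str, options: &[&'a str]) -> Option<&'a str> {
    let trimmed = input.trim();
    // exact match on an option takes priority
    if let Some(found) = options.iter().copied().find(|&o| o == trimmed) {
        return Some(found);
    }
    // otherwise try a 1-based numeric index, matching the "1) ..." menu
    match trimmed.parse::<usize>() {
        Ok(i) if (1..=options.len()).contains(&i) => Some(options[i - 1]),
        _ => None,
    }
}

fn main() {
    let versions = ["1.0.0:0", "1.0.1:0"];
    assert_eq!(parse_selection("2", &versions), Some("1.0.1:0"));
    assert_eq!(parse_selection(" 1.0.0:0 ", &versions), Some("1.0.0:0"));
    assert_eq!(parse_selection("0", &versions), None); // menu is 1-based
    assert_eq!(parse_selection("nope", &versions), None);
    println!("ok");
}
```

Centralizing this in one `choose` helper means every interactive CLI prompt shares the same retry and cancellation behavior instead of each call site hand-rolling a loop.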


@@ -60,10 +60,12 @@ pub mod s9pk;
 pub mod service;
 pub mod setup;
 pub mod shutdown;
+pub mod sign;
 pub mod sound;
 pub mod ssh;
 pub mod status;
 pub mod system;
+pub mod tunnel;
 pub mod update;
 pub mod upload;
 pub mod util;
@@ -77,19 +79,18 @@ pub use error::{Error, ErrorKind, ResultExt};
 use imbl_value::Value;
 use rpc_toolkit::yajrc::RpcError;
 use rpc_toolkit::{
-    from_fn, from_fn_async, from_fn_blocking, CallRemoteHandler, Context, Empty, HandlerExt,
-    ParentHandler,
+    CallRemoteHandler, Context, Empty, HandlerExt, ParentHandler, from_fn, from_fn_async,
+    from_fn_async_local, from_fn_blocking,
 };
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;

-use crate::context::{
-    CliContext, DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext,
-};
+use crate::context::{CliContext, DiagnosticContext, InitContext, RpcContext};
 use crate::disk::fsck::RequiresReboot;
 use crate::registry::context::{RegistryContext, RegistryUrlParams};
 use crate::system::kiosk;
-use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
+use crate::tunnel::context::TunnelUrlParams;
+use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};

 #[derive(Deserialize, Serialize, Parser, TS)]
 #[serde(rename_all = "camelCase")]
@@ -137,6 +138,20 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
                 .with_about("Display the API that is currently serving")
                 .with_call_remote::<CliContext>(),
         )
+        .subcommand(
+            "state",
+            from_fn(|_: InitContext| Ok::<_, Error>(ApiState::Initializing))
+                .with_metadata("authenticated", Value::Bool(false))
+                .with_about("Display the API that is currently serving")
+                .with_call_remote::<CliContext>(),
+        )
+        .subcommand(
+            "state",
+            from_fn(|_: DiagnosticContext| Ok::<_, Error>(ApiState::Error))
+                .with_metadata("authenticated", Value::Bool(false))
+                .with_about("Display the API that is currently serving")
+                .with_call_remote::<CliContext>(),
+        )
         .subcommand(
             "server",
             server::<C>()
@@ -148,13 +163,12 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
         )
         .subcommand(
             "net",
-            net::net::<C>().with_about("Network commands related to tor and dhcp"),
+            net::net_api::<C>().with_about("Network commands related to tor and dhcp"),
         )
         .subcommand(
             "auth",
-            auth::auth::<C>().with_about(
-                "Commands related to Authentication i.e. login, logout, reset-password",
-            ),
+            auth::auth::<C, RpcContext>()
+                .with_about("Commands related to Authentication i.e. login, logout"),
         )
         .subcommand(
             "db",
@@ -190,6 +204,19 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
             )
             .no_cli(),
         )
+        .subcommand(
+            "registry",
+            registry::registry_api::<CliContext>().with_about("Commands related to the registry"),
+        )
+        .subcommand(
+            "tunnel",
+            CallRemoteHandler::<RpcContext, _, _, TunnelUrlParams>::new(tunnel::api::tunnel_api())
+                .no_cli(),
+        )
+        .subcommand(
+            "tunnel",
+            tunnel::api::tunnel_api::<CliContext>().with_about("Commands related to StartTunnel"),
+        )
         .subcommand(
             "s9pk",
             s9pk::rpc::s9pk().with_about("Commands for interacting with s9pk files"),
@@ -197,6 +224,29 @@ pub fn main_api<C: Context>() -> ParentHandler<C> {
         .subcommand(
             "util",
             util::rpc::util::<C>().with_about("Command for calculating the blake3 hash of a file"),
+        )
+        .subcommand(
+            "init-key",
+            from_fn_async(developer::init)
+                .no_display()
+                .with_about("Create developer key if it doesn't exist"),
+        )
+        .subcommand(
+            "pubkey",
+            from_fn_blocking(developer::pubkey)
+                .with_about("Get public key for developer private key"),
+        )
+        .subcommand(
+            "diagnostic",
+            diagnostic::diagnostic::<C>()
+                .with_about("Commands to display logs, restart the server, etc"),
+        )
+        .subcommand("init", init::init_api::<C>())
+        .subcommand("setup", setup::setup::<C>())
+        .subcommand(
+            "install",
+            os_install::install::<C>()
+                .with_about("Commands to list disk info, install StartOS, and reboot"),
         );

     if &*PLATFORM != "raspberrypi" {
         api = api.subcommand("kiosk", kiosk::<C>());
@@ -342,7 +392,7 @@ pub fn package<C: Context>() -> ParentHandler<C> {
         )
         .subcommand(
             "install",
-            from_fn_async(install::cli_install)
+            from_fn_async_local(install::cli_install)
                 .no_display()
                 .with_about("Install a package from a marketplace or via sideloading"),
         )
@@ -463,13 +513,6 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             backup::package_backup::<C>()
                 .with_about("Commands for restoring package(s) from backup"),
         )
-        .subcommand("connect", from_fn_async(service::connect_rpc).no_cli())
-        .subcommand(
-            "connect",
-            from_fn_async(service::connect_rpc_cli)
-                .no_display()
-                .with_about("Connect to a LXC container"),
-        )
         .subcommand(
             "attach",
             from_fn_async(service::attach)
@@ -483,127 +526,3 @@ pub fn package<C: Context>() -> ParentHandler<C> {
             net::host::host_api::<C>().with_about("Manage network hosts for a package"),
         )
 }
-
-pub fn diagnostic_api() -> ParentHandler<DiagnosticContext> {
-    ParentHandler::new()
-        .subcommand(
-            "git-info",
-            from_fn(|_: DiagnosticContext| version::git_info())
-                .with_metadata("authenticated", Value::Bool(false))
-                .with_about("Display the githash of StartOS CLI"),
-        )
-        .subcommand(
-            "echo",
-            from_fn(echo::<DiagnosticContext>)
-                .with_about("Echo a message")
-                .with_call_remote::<CliContext>(),
-        )
-        .subcommand(
-            "state",
-            from_fn(|_: DiagnosticContext| Ok::<_, Error>(ApiState::Error))
-                .with_metadata("authenticated", Value::Bool(false))
-                .with_about("Display the API that is currently serving")
-                .with_call_remote::<CliContext>(),
-        )
-        .subcommand(
-            "diagnostic",
-            diagnostic::diagnostic::<DiagnosticContext>()
-                .with_about("Diagnostic commands i.e. logs, restart, rebuild"),
-        )
-}
-
-pub fn init_api() -> ParentHandler<InitContext> {
-    ParentHandler::new()
-        .subcommand(
-            "git-info",
-            from_fn(|_: InitContext| version::git_info())
-                .with_metadata("authenticated", Value::Bool(false))
-                .with_about("Display the githash of StartOS CLI"),
-        )
-        .subcommand(
-            "echo",
-            from_fn(echo::<InitContext>)
-                .with_about("Echo a message")
-                .with_call_remote::<CliContext>(),
-        )
-        .subcommand(
-            "state",
-            from_fn(|_: InitContext| Ok::<_, Error>(ApiState::Initializing))
-                .with_metadata("authenticated", Value::Bool(false))
-                .with_about("Display the API that is currently serving")
-                .with_call_remote::<CliContext>(),
-        )
-        .subcommand(
-            "init",
-            init::init_api::<InitContext>()
-                .with_about("Commands to get logs or initialization progress"),
-        )
-}
-
-pub fn setup_api() -> ParentHandler<SetupContext> {
-    ParentHandler::new()
-        .subcommand(
-            "git-info",
-            from_fn(|_: SetupContext| version::git_info())
-                .with_metadata("authenticated", Value::Bool(false))
-                .with_about("Display the githash of StartOS CLI"),
-        )
-        .subcommand(
-            "echo",
-            from_fn(echo::<SetupContext>)
-                .with_about("Echo a message")
-                .with_call_remote::<CliContext>(),
-        )
-        .subcommand("setup", setup::setup::<SetupContext>())
-}
-
-pub fn install_api() -> ParentHandler<InstallContext> {
-    ParentHandler::new()
-        .subcommand(
-            "git-info",
-            from_fn(|_: InstallContext| version::git_info())
-                .with_metadata("authenticated", Value::Bool(false))
-                .with_about("Display the githash of StartOS CLI"),
-        )
-        .subcommand(
-            "echo",
-            from_fn(echo::<InstallContext>)
-                .with_about("Echo a message")
-                .with_call_remote::<CliContext>(),
-        )
-        .subcommand(
-            "install",
-            os_install::install::<InstallContext>()
-                .with_about("Commands to list disk info, install StartOS, and reboot"),
-        )
-}
-
-pub fn expanded_api() -> ParentHandler<CliContext> {
-    main_api()
-        .subcommand(
-            "init",
-            from_fn_blocking(developer::init)
-                .no_display()
-                .with_about("Create developer key if it doesn't exist"),
-        )
-        .subcommand(
-            "pubkey",
-            from_fn_blocking(developer::pubkey)
-                .with_about("Get public key for developer private key"),
-        )
-        .subcommand(
-            "diagnostic",
-            diagnostic::diagnostic::<CliContext>()
-                .with_about("Commands to display logs, restart the server, etc"),
-        )
-        .subcommand("setup", setup::setup::<CliContext>())
-        .subcommand(
-            "install",
-            os_install::install::<CliContext>()
-                .with_about("Commands to list disk info, install StartOS, and reboot"),
-        )
-        .subcommand(
-            "registry",
-            registry::registry_api::<CliContext>().with_about("Commands related to the registry"),
-        )
-}

Some files were not shown because too many files have changed in this diff.