Compare commits


372 Commits

Author SHA1 Message Date
Aiden McClelland
81a80baed8 ts bindings refactor 2025-11-13 15:19:59 -07:00
Aiden McClelland
60ee836d44 prevent gateways from getting stuck empty 2025-11-13 15:19:30 -07:00
Aiden McClelland
aeaad3c833 Merge branch 'next/major' of github.com:Start9Labs/start-os into feature/dev-qol 2025-11-10 14:29:20 -07:00
Matt Hill
ce97827c42 CA instead of leaf for StartTunnel (#3046)
* updated docs for CA instead of cert

* generate ca instead of self-signed in start-tunnel

* Fix formatting in START-TUNNEL.md installation instructions

* Fix formatting in START-TUNNEL.md

* fix infinite loop

* add success message to install

* hide loopback and bridge gateways

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-11-10 14:15:58 -07:00
Aiden McClelland
14fcb60670 wip: types 2025-11-10 09:45:30 -07:00
Aiden McClelland
f83df5682c bump sdk 2025-11-07 18:07:25 -07:00
Aiden McClelland
bfdab897ab misc fixes 2025-11-07 14:31:14 -07:00
Aiden McClelland
3efec07338 Include StartTunnel installation command
Added installation instructions for StartTunnel.
2025-11-07 20:00:31 +00:00
Aiden McClelland
29c97fcbb0 sdk fixes 2025-11-07 11:20:13 -07:00
Aiden McClelland
e7847d0e88 squashfs-wip 2025-11-07 03:13:54 -07:00
Aiden McClelland
68f401bfa3 Feature/start tunnel (#3037)
* fix live-build resolv.conf

* improved debuggability

* wip: start-tunnel

* fixes for trixie and tor

* non-free-firmware on trixie

* wip

* web server WIP

* wip: tls refactor

* FE patchdb, mocks, and most endpoints

* fix editing records and patch mocks

* refactor complete

* finish api

* build and formatter update

* minor change to viewing addresses and fix build

* fixes

* more providers

* endpoint for getting config

* fix tests

* api fixes

* wip: separate port forward controller into parts

* simplify iptables rules

* bump sdk

* misc fixes

* predict next subnet and ip, use wan ips, and form validation

* refactor: break big components apart and address todos (#3043)

* refactor: break big components apart and address todos

* starttunnel readme, fix pf mocks, fix adding tor domain in startos

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* better tui

* tui tweaks

* fix: address comments

* better regex for subnet

* fixes

* better validation

* handle rpc errors

* build fixes

* fix: address comments (#3044)

* fix: address comments

* fix unread notification mocks

* fix row click for notification

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix raspi build

* fix build

* fix build

* fix build

* fix build

* try to fix build

* fix tests

* fix tests

* fix rsync tests

* delete useless effectful test

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
2025-11-07 10:12:05 +00:00
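One bullet in the PR above mentions "predict next subnet and ip" for gateways. As a purely hypothetical sketch of that idea (the function name, the `10.0.y.0/24` layout, and the logic are assumptions for illustration, not StartOS code), predicting the next free /24 can be as simple as finding the first unused third octet:

```typescript
// Hypothetical illustration only: given the /24 subnets already in use,
// predict the next free one. Assumes a flat 10.0.y.0/24 numbering scheme.
function nextSubnet(used: string[]): string {
  // Collect the third octet of every used subnet, e.g. "10.0.3.0/24" -> 3.
  const taken = new Set(used.map(s => Number(s.split(".")[2])))
  // Walk forward until we find an unused third octet.
  for (let y = 0; y < 256; y++) {
    if (!taken.has(y)) return `10.0.${y}.0/24`
  }
  throw new Error("address space exhausted")
}
```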
Matt Hill
1ea525feaa make textarea rows configurable (#3042)
* make textarea rows configurable

* add comments

* better defaults
2025-10-31 11:46:49 -06:00
Alex Inkin
57c4a7527e fix: make CPU meter not go to 11 (#3038) 2025-10-30 16:13:39 -06:00
Aiden McClelland
5aa9c045e1 fix live-build resolv.conf (#3035)
* fix live-build resolv.conf

* improved debuggability
2025-09-24 22:44:25 -06:00
Matt Hill
6f1900f3bb limit adding gateway to StartTunnel, better copy around Tor SSL (#3033)
* limit adding gateway to StartTunnel, better copy around Tor SSL

* properly differentiate ssl

* exclude disconnected gateways

* better error handling

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-09-24 13:22:26 -06:00
Aiden McClelland
bc62de795e bugfixes for alpha.10 (#3032)
* bugfixes for alpha.10

* bump raspi kernel

* rpi kernel bump

* alpha.11
2025-09-23 22:42:17 +00:00
Alex Inkin
c62ca4b183 fix: make long dropdown options wrap (#3031) 2025-09-21 06:06:33 -06:00
Alex Inkin
876e5bc683 fix: fix overflowing interface table (#3027) 2025-09-20 07:10:58 -06:00
Alex Inkin
b99f3b73cd fix: make logs page take up all space (#3030) 2025-09-20 07:10:28 -06:00
Matt Hill
7eecf29449 fix dep error display, show starting if any health check starting, show disabled health check message, remove loader from service list, animated dots, better color (#3025)
* refactor addresses to not need gateways array

* fix dep error display, show starting if any health check starting, show disabled health check message, remove loader from service list, animated dots, better color

* fix: fix action results textfields

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-09-17 10:32:20 -06:00
Mariusz Kogen
1d331d7810 Fix file permissions for developer key and auth cookie (#3024)
* fix permissions

* include read for group
2025-09-16 09:09:33 -06:00
Aiden McClelland
68414678d8 sdk updates; beta.39 (#3022)
* sdk updates; beta.39

* beta.40
2025-09-11 15:47:48 -06:00
Aiden McClelland
2f6b9dac26 Bugfix/dns recursion (#3023)
* fix dns recursion and localhost

* additional fix
2025-09-11 15:47:38 -06:00
Aiden McClelland
d1812d875b fix dns recursion and localhost (#3021) 2025-09-11 12:35:12 -06:00
Aiden McClelland
723dea100f add more gateway info to hostnameInfo (#3019) 2025-09-10 12:16:35 -06:00
Matt Hill
c4419ed31f show correct gateway name when adding public domain 2025-09-10 09:57:39 -06:00
Matt Hill
754ab86e51 only show http for tor if protocol is http 2025-09-10 09:36:03 -06:00
Mariusz Kogen
04dab532cd Motd Redesign - Visual and Structural Upgrade (#3018)
New 040 motd
2025-09-10 06:36:27 +00:00
Matt Hill
add01ebc68 Gateways, domains, and new service interface (#3001)
* add support for inbound proxies

* backend changes

* fix file type

* proxy -> tunnel, implement backend apis

* wip start-tunneld

* add domains and gateways, remove routers, fix docs links

* don't show hidden actions

* show and test dns

* edit instead of change acme and change gateway

* refactor: domains page

* refactor: gateways page

* domains and acme refactor

* certificate authorities

* refactor public/private gateways

* fix fe types

* domains mostly finished

* refactor: add file control to form service

* add ip util to sdk

* domains api + migration

* start service interface page, WIP

* different options for clearnet domains

* refactor: styles for interfaces page

* minor

* better placeholder for no addresses

* start sorting addresses

* best address logic

* comments

* fix unnecessary export

* MVP of service interface page

* domains preferred

* fix: address comments

* only translations left

* wip: start-tunnel & fix build

* forms for adding domain, rework things based on new ideas

* fix: dns testing

* public domain, max width, descriptions for dns

* nix StartOS domains, implement public and private domains at interface scope

* restart tor instead of reset

* better icon for restart tor

* dns

* fix sort functions for public and private domains

* with todos

* update types

* clean up tech debt, bump dependencies

* revert to ts-rs v9

* fix all types

* fix dns form

* add missing translations

* it builds

* fix: comments (#3009)

* fix: comments

* undo default

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix: refactor legacy components (#3010)

* fix: comments

* fix: refactor legacy components

* remove default again

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* more translations

* wip

* fix deadlock

* could work

* simple renaming

* placeholder for empty service interfaces table

* honor hidden form values

* remove logs

* reason instead of description

* fix dns

* misc fixes

* implement toggling gateways for service interface

* fix showing dns records

* move status column in service list

* remove unnecessary truthy check

* refactor: refactor forms components and remove legacy Taiga UI package (#3012)

* handle wh file uploads

* wip: debugging tor

* socks5 proxy working

* refactor: fix multiple comments (#3013)

* refactor: fix multiple comments

* styling changes, add documentation to sidebar

* translations for dns page

* refactor: subtle colors

* rearrange service page

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix file_stream and remove non-terminating test

* clean up logs

* support for sccache

* fix gha sccache

* more marketplace translations

* install wizard clarity

* stub hostnameInfo in migration

* fix address info after setup, fix styling on SI page, new 040 release notes

* remove tor logs from os

* misc fixes

* reset tor still not functioning...

* update ts

* minor styling and wording

* chore: some fixes (#3015)

* fix gateway renames

* different handling for public domains

* styling fixes

* whole navbar should not be clickable on service show page

* timeout getState request

* remove links from changelog

* misc fixes from pairing

* use custom name for gateway in more places

* fix dns parsing

* closes #3003

* closes #2999

* chore: some fixes (#3017)

* small copy change

* revert hardcoded error for testing

* don't require port forward if gateway is public

* use old wan ip when not available

* fix .const hanging on undefined

* fix test

* fix doc test

* fix renames

* update deps

* allow specifying dependency metadata directly

* temporarily make dependencies not clickable in marketplace listings

* fix socks bind

* fix test

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: waterplea <alexander@inkin.ru>
2025-09-10 03:43:51 +00:00
Mariusz Kogen
1cc9a1a30b build(cli): harden build-cli.sh (zig check, env defaults, GIT_HASH) (#3016) 2025-09-09 14:18:52 -06:00
Dominion5254
92a1de7500 remove entire service package directory on hard uninstall (#3007)
* remove entire service package directory on hard uninstall

* fix package path
2025-08-12 15:46:01 -06:00
Alex Inkin
a6fedcff80 fix: extract correct manifest in updating state (#3004) 2025-08-01 22:51:38 -06:00
Alex Inkin
55eb999305 fix: update notifications design (#3000) 2025-07-29 22:17:59 -06:00
Aiden McClelland
377b7b12ce update/alpha.9 (#2988)
* import marketplace preview for sideload

* fix: improve state service (#2977)

* fix: fix sideload DI

* fix: update Angular

* fix: cleanup

* fix: fix version selection

* Bump node version to fix build for Angular

* misc fixes
- update node to v22
- fix chroot-and-upgrade access to prune-images
- don't self-migrate legacy packages
- #2985
- move dataVersion to volume folder
- remove "instructions.md" from s9pk
- add "docsUrl" to manifest

* version bump

* include flavor when clicking view listing from updates tab

* closes #2980

* fix: fix select button

* bring back ssh keys

* fix: drop 'portal' from all routes

* fix: implement longtap action to select table rows

* fix description for ssh page

* replace instructions with docsLink and refactor marketplace preview

* delete unused translations

* fix patchdb diffing algorithm

* continue refactor of marketplace lib show components

* Booting StartOS instead of Setting up your server on init

* misc fixes
- closes #2990
- closes #2987

* fix build

* docsUrl and clickable service headers

* don't cleanup after update until new service install succeeds

* update types

* misc fixes

* beta.35

* sdkversion, githash for sideload, correct logs for init, startos pubkey display

* bring back reboot button on install

* misc fixes

* beta.36

* better handling of setup and init for websocket errors

* reopen init and setup logs even on graceful closure

* better logging, misc fixes

* fix build

* don't let package stats hang

* don't show docsUrl in marketplace if no docsUrl

* re-add needs-config

* show error if init fails, shorten hover state on header icons

* fix operator precedence

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Mariusz Kogen <k0gen@pm.me>
2025-07-18 18:31:12 +00:00
gStart9
ba2906a42e Fallback DNS/NTP server privacy enhancements (#2992) 2025-07-16 21:31:52 +00:00
gStart9
ee27f14be0 Update/kiosk mode firefox settings (#2973)
* Privacy settings for firefox's kiosk mode

* Re-add extensions.update.enabled, false

* Don't enable letterboxing by default
2025-07-15 20:17:52 +00:00
Aiden McClelland
46c8be63a7 0.4.0-alpha.8 (#2975) 2025-07-08 12:28:21 -06:00
Matt Hill
7ba66c419a Misc frontend fixes (#2974)
* fix dependency input warning and extra comma

* clean up buttons during install in marketplace preview

* chore: grayscale and closing action-bar

* fix prerelease precedence

* fix duplicate url for addSsl on ssl proto

* no warning for soft uninstall

* fix: stop logs from repeating disconnected status and add 1 second delay between reconnection attempts

* fix stop on reactivation of critical task

* fix: fix disconnected toast

* fix: updates styles

* fix: updates styles

* misc fixes

* beta.33

* fix updates badge and initialization of marketplace preview controls

---------

Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-07-08 12:08:27 -06:00
Aiden McClelland
340775a593 Feature/more dynamic unions (#2972)
* with validators

* more dynamic unions

* fixes from v31

* better constructor for dynamic unions

* version bump

* fix build
2025-07-01 17:40:39 -06:00
Alex Inkin
35d2ec8a44 chore: fix font in Safari (#2970) 2025-06-25 09:55:50 -04:00
Alex Inkin
2983b9950f chore: fix issues from dev channel (#2968) 2025-06-25 09:30:28 -04:00
Aiden McClelland
dbf08a6cf8 stop service if critical task activated (#2966)
filter out union lists instead of erroring
2025-06-18 22:03:27 +00:00
Alex Inkin
28f31be36f Fix/fe 6 (#2965)
* fix backup reports modal

* chore: fix comments

* chore: fix controls status

* chore: fix stale marketplace data

* slightly better registry switching

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-06-18 10:40:39 -06:00
Aiden McClelland
3ec4db0225 addHealthCheck instead of additionalHealthChecks for Daemons (#2962)
* addHealthCheck on Daemons

* fix bug that prevents domains without protocols from being deleted

* fixes from testing

* version bump

* add sdk version to UI

* fix useEntrypoint

* fix dependency health check error display

* minor fixes

* beta.29

* fixes from testing

* beta.30

* set /etc/os-release (#2918)

* remove check-monitor from kiosk (#2059)

* add units for progress (#2693)

* use new progress type

* alpha.7

* fix up pwa stuff

* fix wormhole-squashfs and prune boot (#2964)

* don't exit on expected errors

* use bash

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-06-17 17:50:01 -06:00
Matt Hill
f5688e077a misc fixes (#2961)
* fix backup reports modal

* chore: fix comments

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-06-17 11:22:32 -06:00
Aiden McClelland
2464d255d5 improve daemons init system (#2960)
* repeatable command launch fn

* allow js fn for daemon exec

* improve daemon init system

* fixes from testing
2025-06-06 14:35:03 -06:00
Aiden McClelland
586d950b8c update cargo deps (#2959)
* update cargo deps

* readd device info header
2025-06-05 17:17:02 -06:00
Matt Hill
e7469388cc minor fixes (#2957)
* minor fixes

* more minor touchups

* minor fix

* fix release notes display
2025-06-05 17:02:54 -06:00
Dominion5254
ab6ca8e16a Bugfix/ssl proxy to ssl (#2956)
* fix registry rm command

* fix bind with addSsl on ssl proto

* fix bind with addSsl on ssl proto

* Add pre-release version migrations

* fix os build

* add mime to package deps

* update lockfile

* more ssl fixes

* add waitFor

* improve restart lockup

* beta.26

* fix dependency health check logic

* handle missing health check

* fix port forwards

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-06-04 19:41:21 -06:00
Alex Inkin
02413a4fac Update Angular (#2952)
* fix Tor logs actually fetching OS logs

* chore: update to Angular 18

* chore: update to Angular 19

* bump patchDB

* chore: update Angular

* chore: fix setup-wizard success page

* chore: fix

* chore: fix

* chore: fix

* chore: fix

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-05-30 10:34:24 -04:00
Dominion5254
05b8dd9ad8 fix registry rm command (#2955) 2025-05-27 19:00:29 -06:00
Dominion5254
29c9419a6e add nfs-common (#2954) 2025-05-27 16:11:34 -06:00
Aiden McClelland
90e61989a4 misc bugfixes for alpha.4 (#2953)
* fix lockup when stop during init

* Fix incorrect description for registry package remove command

* alpha.5

* beta.25

---------

Co-authored-by: Mariusz Kogen <k0gen@pm.me>
2025-05-23 11:23:29 -06:00
Matt Hill
b1f9f90fec Frontend fixes/improvements (#2950)
* fix Tor logs actually fetching OS logs

* chore: switch from `mime-types` to `mime` for browser environment support (#2951)

* change V2 s9pk title to Legacy

* show warning for domains when not public, disable launch too

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Mariusz Kogen <k0gen@pm.me>
2025-05-23 10:45:06 -06:00
Matt Hill
b40849f672 Fix/fe bugs 3 (#2943)
* fix typo in patch db seed

* show all registries in updates tab, fix required dependency display in marketplace, update browser tab title desc

* always show pointer for version select

* chore: fix comments

* support html in action desc and marketplace long desc, only show qr in action res if qr is true

* disable save if smtp creds not edited, show better smtp success message

* don't dismiss login spinner until patchDB returns

* feat: redesign of service dashboard and interface (#2946)

* feat: redesign of service dashboard and interface

* chore: comments

* re-add setup complete

* disable launch UI when not running, re-style things, rename things

* back to 1000

* fix clearnet docs link and require password retype in setup wiz

* faster hint display

* display dependency ID if title not available

* fix migration

* better init progress view

* fix setup success page by providing VERSION and notifications page fixes

* force uninstall from service error page, soft or hard

* handle error state better

* chore: fixes for install and setup wizards

* chore: fix issues (#2949)

* enable and disable kiosk mode

* minor fixes

* fix dependency mounts

* dismissable tasks

* provide replayId

* default if health check success message is null

* look for wifi interface too

* dash for null user agent in sessions

* add disk repair to diagnostic api

---------

Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-05-21 19:04:26 -06:00
Aiden McClelland
44560c8da8 Refactor/sdk init (#2947)
* fixes for main

* refactor package initialization

* fixes from testing

* more fixes

* beta.21

* do not use instanceof

* closes #2921

* beta22

* allow disabling kiosk

* migration

* fix /etc/shadow

* actionRequest -> task

* beta.23
2025-05-21 10:24:37 -06:00
Aiden McClelland
46fd01c264 0.4.0-alpha.4 (#2948) 2025-05-20 16:11:50 -06:00
nbxl21
100695c262 Add french translation (#2945)
* Add french translation

* Remove outdated instruction

* Fix missing instructions
2025-05-17 18:10:24 -06:00
H0mer
54b5a4ae55 Update pl.ts (#2944)
Collaboration with @k0gen
2025-05-14 08:07:56 -06:00
Aiden McClelland
ffb252962b hotfix migration 2025-05-12 06:38:48 -06:00
Aiden McClelland
ae31270e63 alpha3 (#2942) 2025-05-11 07:48:32 -06:00
Matt Hill
9b2b54d585 fix osUpdate check and address parser for Tor without protocol (#2941) 2025-05-10 12:18:20 -06:00
Aiden McClelland
e1ccc583a3 0.4.0-alpha.2 (#2940) 2025-05-09 16:34:29 -06:00
Aiden McClelland
7750e33f82 misc sdk changes (#2934)
* misc sdk changes

* delete the store ☠️

* port comments

* fix build

* fix removing

* fix tests

* beta.20

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-05-09 15:10:51 -06:00
Matt Hill
d2c4741f0b use fallback icon for missing dep 2025-05-09 15:07:49 -06:00
Matt Hill
c79c4f6bde remove http timeout for sideloading 2025-05-09 14:55:11 -06:00
Lucy
3849d0d1a9 Fix/controls (#2938)
* adjust copy to display package state

* use package id for service column in notifications table

* fixes

* less translations, fix notification item, remove unnecessary conditional

* tidy up

* forgot spanish

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-05-09 14:53:46 -06:00
Alex Inkin
8bd71ccd5e fix: welcome page on mobile (#2936) 2025-05-09 14:49:19 -06:00
Matt Hill
b731f7fb64 handle removing and backing up state, fix ackInstructions too (#2935) 2025-05-09 14:48:40 -06:00
Matt Hill
cd554f77f3 unbang the test bang 2025-05-09 11:06:53 -06:00
Matt Hill
8c977c51ca frontend fixes for alpha.2 (#2919)
* remove icon from toggles

* fix: comments

* fix: more comments

* always show public domains even if interface private, only show delete on domains

* fix: even more comments

* fix: last comments

* feat: empty state for dashboard

* rework welcome, delete update-toast, minor

* translation improvements

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2025-05-09 10:29:17 -06:00
Aiden McClelland
a3252f9671 allow mounting files directly (#2931)
* allow mounting files directly

* fixes from testing

* more fixes
2025-05-07 12:47:45 -06:00
Aiden McClelland
9bc945f76f upcast v0 action results to v1 (#2930) 2025-05-07 09:33:26 -06:00
Aiden McClelland
f6b4dfffb6 fix shell support in package attach (#2929)
* fix package attach

* final fixes

* apply changes to launch

* Update core/startos/src/service/effects/subcontainer/sync.rs
2025-05-07 09:19:38 -06:00
Aiden McClelland
68955c29cb add transformers to file helpers (#2922)
* fix undefined handling in INI

* beta.14

* Partial -> DeepPartial in action request

* boolean laziness kills

* beta.16

* misc fixes

* file transformers

* infer validator source argument

* simplify validator

* readd toml

* beta.17

* filter undefined instead of parse/stringify

* handle arrays of objects in filterUndefined
2025-05-06 11:04:11 -06:00
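The last two bullets above name a concrete technique: stripping `undefined` values recursively instead of round-tripping through `JSON.parse(JSON.stringify(...))`, including inside arrays of objects. A minimal sketch of that idea (an assumed implementation for illustration, not the SDK's actual `filterUndefined` helper) could look like:

```typescript
// Recursively drop `undefined` values from plain objects, descending into
// arrays so arrays of objects are handled too. Sketch only; the real SDK
// helper may differ.
function filterUndefined(value: unknown): unknown {
  if (Array.isArray(value)) {
    // Recurse into array elements rather than treating them as leaves.
    return value.map(filterUndefined)
  }
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value)
        .filter(([, v]) => v !== undefined)
        .map(([k, v]) => [k, filterUndefined(v)]),
    )
  }
  return value
}
```

Unlike the JSON round-trip, this preserves values JSON would mangle and avoids stringifying large structures.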
Matt Hill
97e4d036dc fix adding new registry 2025-05-03 20:35:13 -06:00
Matt Hill
0f49f54c29 don't show indeterminate progress when waiting for phase 2025-05-01 18:54:14 -06:00
Aiden McClelland
828e13adbb add support for "oneshot" daemons (#2917)
* add support for "oneshot" daemons

* add docs for oneshot

* add support for runAsInit in daemon.of

* beta.13
2025-05-01 16:00:35 -06:00
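The "oneshot" daemons commit above distinguishes daemons that run once to completion from long-running ones that are relaunched when they exit. A toy runner sketching that distinction (the types and function here are assumptions for illustration, not the startos SDK API) might be:

```typescript
// Hypothetical sketch: a oneshot daemon runs exactly once; a long-running
// daemon is relaunched each time it exits, capped by `maxRestarts` so the
// sketch terminates. Returns the number of runs.
type DaemonKind = "long-running" | "oneshot"

interface Daemon {
  kind: DaemonKind
  main: () => void
}

function runDaemon(d: Daemon, maxRestarts = 0): number {
  let runs = 0
  do {
    d.main()
    runs++
  } while (d.kind === "long-running" && runs <= maxRestarts)
  return runs
}
```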
Matt Hill
e6f0067728 rework installing page and add cancel install button (#2915)
* rework installing page and add cancel install button

* actually call cancel endpoint

* fix two bugs

* include translations in progress component

* cancellable installs

* fix: comments (#2916)

* fix: comments

* delete comments

* ensure trailing slash and no qp for new registry url

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* fix raspi

* bump sdk

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
2025-04-30 13:50:08 -06:00
Lucy
5c473eb9cc update marketplace url to reflect build version (#2914)
* update marketplace url to reflect build version

* adjust marketplace config

* use helper function to compare urls

* rework some registry stuff

* #2900, #2899, and other registry changes

* alpha.1

* trailing /

* add startosRegistry

* fix migration

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-04-29 14:12:21 -06:00
Aiden McClelland
2adf34fbaf misc fixes (#2892)
* use docker for build steps that require linux when not on linux

* use fuse for overlay

* quiet mountpoint

* node 22

* misc fixes

* make shasum more compliant

* optimize download-base-image.sh with cleaner url handling and checksum verification

* fix script

* fixes #2900

* bump node and npm versions in web readme

* Minor pl.ts fixes

* fixes in response to synapse issues

* beta.8

* update ts-matches

* beta.11

* pl.ts finetuning

---------

Co-authored-by: Mariusz Kogen <k0gen@pm.me>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-04-28 17:33:41 -06:00
Matt Hill
05dd760388 Fix links for docs (#2908)
* fix docs paths

* docsLink directive

* fix: bugs (#2909)

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
2025-04-24 14:14:08 -06:00
Mariusz Kogen
2cf4864078 Polish language refactor (#2887)
* Refactored Polish Translation

* even more language fixes and no comments
2025-04-23 10:47:43 -06:00
Lucy
df4c92672f Fix/os update version (#2890)
* dynamically set a registry to use for os updates

* fix os updates response type

* fix saving high score

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-04-23 07:17:46 -06:00
Alex Inkin
5b173315f9 fix: store language properly (#2891) 2025-04-23 07:17:18 -06:00
Aiden McClelland
c85ea7d8fa sdk beta.6 (#2885)
beta.6
2025-04-22 12:00:34 -06:00
Alex Inkin
113154702f fix: fix logs overflow (#2888) 2025-04-22 08:58:20 -06:00
Lucy
33ae46f76a add proxima nova font (#2883)
* add proxima nova font

* update to woff files

* clean up defaults
2025-04-21 11:30:38 -06:00
Matt Hill
27272680a2 Bugfix/040 UI (#2881)
* fix sideload and install flow

* move updates chevron inside update button

* update dictionaries to include language names

* fix: address todos (#2880)

* fix: address todos

* fix English translation

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>

* use existing translation, no need to duplicate

* fix: update dialog and other fixes (#2882)

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-04-21 10:57:12 -06:00
Matt Hill
b1621f6b34 Copy changes for 040 release (#2874)
* update 040 changelog

* remove post_up from 036-alpha6

* backend copy updates

* beta.4

* beta.5

* fix spelling

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-04-21 09:43:35 -06:00
Aiden McClelland
2c65033c0a Feature/sdk improvements (#2879)
* sdk improvements

* subcontainer fixes, disable wifi on migration if not in use, filterable interfaces
2025-04-18 14:11:13 -06:00
Matt Hill
dcfbaa9243 fix start bug in service dashboard 2025-04-17 21:29:08 -06:00
Matt Hill
accef65ede bug fixes (#2878)
* bug fixes, wip

* revert translation
2025-04-17 21:06:50 -06:00
Alex Inkin
50755d8ba3 Refactor i18n approach (#2875)
* Refactor i18n approach

* chore: move to shared

* chore: add default

* create DialogService and update LoadingService (#2876)

* complete translation infra for ui project, currently broken

* cleanup and more dictionaries

* chore: fix

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-04-17 09:00:59 -06:00
Aiden McClelland
47b6509f70 sdk improvements (#2877) 2025-04-16 12:53:10 -06:00
Aiden McClelland
89f3fdc05f reduce task leaking (#2868)
* reduce task leaking

* fix onLeaveContext
2025-04-16 11:00:46 -06:00
Matt Hill
03f8b73627 minor web cleanup chores 2025-04-13 13:03:51 -06:00
Matt Hill
2e6e9635c3 fix a few, more to go (#2869)
* fix a few, more to go

* chore: comments (#2871)

* chore: comments

* chore: typo

* chore: stricter typescript (#2872)

* chore: comments

* chore: stricter typescript

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>

* minor styling

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
2025-04-12 09:53:03 -06:00
Aiden McClelland
6a312e3fdd Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2025-04-11 13:03:55 -06:00
Aiden McClelland
0e8961efe3 Merge branch 'master' into next/major 2025-04-10 15:50:29 -06:00
Matt Hill
fc2be42418 sideload wip, websockets, styling, multiple todos (#2865)
* sideload wip, websockets, styling, multiple todos

* sideload

* misc backend updates

* chore: comments

* prep for license and instructions display

* comment for Matt

* s9pk updates and 040 sdk

* fix dependency error for actions

* 0.4.0-beta.1

* beta.2

---------

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2025-04-10 19:51:05 +00:00
Aiden McClelland
ab4336cfd7 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2025-04-07 14:00:42 -06:00
w3irdrobot
63a29d3a4a update github workflow actions to most recent versions (#2852) 2025-04-07 19:53:23 +00:00
Alex Inkin
31856d9895 chore: comments (#2863) 2025-04-06 07:18:01 -06:00
Alex Inkin
f51dcf23d6 feat: refactor metrics (#2861) 2025-03-31 14:22:54 -06:00
Alex Inkin
1883c9666e feat: refactor updates (#2860) 2025-03-29 12:50:23 -06:00
Matt Hill
4b4cf76641 remove ssh, deprecate wifi (#2859) 2025-03-28 14:38:49 -06:00
Alex Inkin
495bbecc01 feat: refactor logs (#2856)
* feat: refactor logs

* download root ca, minor translations, better interfaces buttons

* chore: comments

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-03-28 07:02:06 -06:00
Alex Inkin
e6af7e9885 feat: finalize desktop and mobile design of system routes (#2855)
* feat: finalize desktop and mobile design of system routes

* clean up messaging and mobile tabbar utilities

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-03-27 06:41:47 -06:00
w3irdrobot
182b8c2283 update time dependency in core (#2850) 2025-03-25 16:49:50 +00:00
Alex Inkin
5318cccc5f feat: add i18n infrastructure (#2854)
* feat: add i18n infrastructure

* store language selection to patchDB ui section

* feat: react to patchdb language change

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-03-25 05:30:35 -06:00
Alex Inkin
99739575d4 chore: refactor system settings routes (#2853)
* chore: refactor system settings routes

* switch mock to null to see backup page

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-03-23 11:03:41 -06:00
Aiden McClelland
6f9069a4fb Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2025-03-17 14:00:49 -06:00
Alex Inkin
a18ab7f1e9 chore: refactor interfaces (#2849)
* chore: refactor interfaces

* chore: fix uptime
2025-03-17 12:07:52 -06:00
Alex Inkin
be0371fb11 chore: refactor settings (#2846)
* small type changes and clear todos

* handle notifications and metrics

* wip

* fixes

* migration

* dedup all urls

* better handling of clearnet ips

* add rfkill dependency

* chore: refactor settings

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2025-03-10 13:09:08 -06:00
Matt Hill
fa3329abf2 remove duplicate dir with pages 2025-03-07 13:18:08 -07:00
Aiden McClelland
e830fade06 Update/040 types (#2845)
* small type changes and clear todos

* handle notifications and metrics

* wip

* fixes

* migration

* dedup all urls

* better handling of clearnet ips

* add rfkill dependency

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-03-06 20:36:19 -07:00
Alex Inkin
ac392dcb96 feat: more refactors (#2844) 2025-03-05 13:30:07 -07:00
Aiden McClelland
00a5fdf491 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2025-03-03 12:51:40 -07:00
Alex Inkin
7fff9579c0 feat: redesign service route (#2835)
* feat: redesign service route

* chore: more changes

* remove automated backups and fix interface addresses

* fix rpc methods and slightly better mocks

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2025-02-25 08:33:35 -07:00
Alex Inkin
1b006599cf feat: add service uptime and start style changes (#2831) 2025-02-14 11:32:30 -07:00
Matt Hill
ce2842d365 fix patch db types and comment out future domain and proxy features 2025-02-11 21:17:20 -07:00
Matt Hill
7d1096dbd8 compiles 2025-02-10 22:41:29 -07:00
Matt Hill
95722802dc Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2025-02-08 19:19:35 -07:00
Alex Inkin
95cad7bdd9 fix: properly handle values in unions (#2826) 2025-02-08 17:36:32 -07:00
Alex Inkin
b2b98643d8 feat: better form array validation (#2822) 2025-01-27 17:46:29 -07:00
Alex Inkin
bb8109f67d fix: fix resetting config and other minor issues (#2819) 2025-01-24 11:12:57 -07:00
Alex Inkin
e6f02bf8f7 feat: hover state for navigation (#2807)
* feat: hover state for navigation

* chore: fix
2025-01-06 11:56:16 -07:00
Alex Inkin
57e75e3614 feat: implement top navigation (#2805)
* feat: implement top navigation

* chore: fix order
2024-12-30 09:07:44 -07:00
Alex Inkin
89ab67e067 fix: finish porting minor changes to major (#2799) 2024-12-11 16:16:46 -07:00
Matt Hill
115c599fd8 remove welcome component 2024-12-02 17:00:40 -07:00
Matt Hill
3121c08ee8 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2024-12-02 16:59:03 -07:00
Matt Hill
a5bac39196 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2024-12-02 16:50:37 -07:00
Alex Inkin
9f640b24b3 fix: fix building UI project (#2794)
* fix: fix building UI project

* fix makefile

* inputspec instead of config

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2024-12-02 16:44:27 -07:00
Matt Hill
75e7556bfa Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2024-11-25 19:02:07 -07:00
Alex Inkin
beb3a9f60a feat: make favicon react to theme (#2787) 2024-11-12 19:29:41 -07:00
Lucy
dfda2f7d5d Update Marketplace (#2742)
* update abstract marketplace for usage accuracy and rename store to registry

* use new abstract functions

* fix(marketplace): get rid of `AbstractMarketplaceService`

* bump shared marketplace lib

* update marketplace to use query params for registry url; comment out updates page - will be refactored

* cleanup

* cleanup duplicate

* cleanup unused imports

* rework setting registry url when loading marketplace

* cleanup marketplace service

* fix background

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
Co-authored-by: waterplea <alexander@inkin.ru>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
2024-10-09 11:23:08 -06:00
Matt Hill
a77ebd3b55 Merge pull request #2747 from Start9Labs/chore/remove-mastodon-from-readme
Update README.md
2024-09-26 11:04:19 -06:00
Aiden McClelland
00114287e5 Update README.md 2024-09-26 17:00:52 +00:00
Matt Hill
a9569d0ed9 Merge pull request #2740 from Start9Labs/update/new-registry
Update/new registry
2024-09-20 09:52:16 -06:00
Lucy Cifferello
88d9388be2 remove package lock from root 2024-09-20 11:51:19 -04:00
Lucy Cifferello
93c72ecea5 adjust buttons on marketplace show page and bump marketplace lib 2024-09-20 11:42:12 -04:00
Lucy Cifferello
b5b0ac50bd update shared and marketplace libs for taiga dep 2024-09-20 11:27:40 -04:00
Lucy Cifferello
4d2afdb1a9 misc cleanup and bump marketplace lib 2024-09-19 13:37:34 -04:00
Lucy Cifferello
39a177bd70 misc copy 2024-09-19 13:19:24 -04:00
Lucy Cifferello
34fb6ac837 add bitcoind dep for proxy in mocks 2024-09-19 13:19:14 -04:00
Lucy Cifferello
f868a454d9 move registry component to shared marketplace lib 2024-09-19 13:18:16 -04:00
Lucy Cifferello
751ceab04e fix icons and flavor filtering 2024-09-12 17:48:57 -04:00
Matt Hill
b6c48d0f98 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2024-08-28 15:49:28 -06:00
Matt Hill
097d77f7b3 Merge pull request #2727 from Start9Labs/final-fixes
fix: final fixes
2024-08-28 15:38:44 -06:00
waterplea
7a0586684b fix: refresh edit target 2024-08-27 12:27:23 +04:00
waterplea
8f34d1c555 fix: final fixes 2024-08-27 12:17:05 +04:00
Matt Hill
5270a6781f Merge pull request #2719 from Start9Labs/flavor
fix: implement flavor across the app
2024-08-20 22:09:07 -06:00
waterplea
fa93e195cb fix: implement flavor across the app 2024-08-20 18:53:55 +04:00
Matt Hill
befa9eb16d Merge pull request #2714 from Start9Labs/sideload-and-backups
fix: implement back sideload and server selection in restoring
2024-08-18 07:52:02 -06:00
waterplea
a278c630bb fix: implement back sideload and server selection in restoring 2024-08-17 16:44:33 +04:00
Matt Hill
76eb0f1775 Merge pull request #2709 from Start9Labs/fix-build
fix: fix build after minor merged into major
2024-08-16 05:28:33 -06:00
waterplea
0abe08f243 chore: small changes 2024-08-16 14:39:54 +04:00
Matt Hill
015131f198 address comments and more 2024-08-15 08:05:37 -06:00
waterplea
a730543c76 fix: fix build after minor merged into major 2024-08-15 12:40:49 +04:00
Matt Hill
b43ad93c54 Merge pull request #2706 from Start9Labs/setup-wizard
fix: fix merge issues for setup-wizard project
2024-08-10 15:04:01 -06:00
waterplea
7850681ce1 chore: add back logs 2024-08-11 00:41:26 +04:00
Matt Hill
846189b15b address comments 2024-08-10 05:57:33 -06:00
waterplea
657aac0d68 fix: fix merge issues for setup-wizard project 2024-08-10 14:45:50 +04:00
Matt Hill
81932c8cff Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2024-08-08 10:52:49 -06:00
Matt Hill
20f6a5e797 Merge pull request #2687 from Start9Labs/taiga
chore: update taiga
2024-07-29 11:39:56 -06:00
waterplea
949f1c648a chore: update taiga 2024-07-29 18:18:00 +04:00
Matt Hill
d159dde2ca Merge pull request #2676 from Start9Labs/wifi
chore: improve wifi icons
2024-07-19 07:33:45 -06:00
waterplea
729a510c5b chore: improve wifi icons 2024-07-19 12:25:50 +05:00
Matt Hill
fffc7f4098 Merge pull request #2672 from Start9Labs/taiga-4
feat: update Taiga UI to 4 release candidate
2024-07-15 11:40:16 -06:00
waterplea
c7a2e7ada1 feat: update Taiga UI to 4 release candidate 2024-07-15 11:16:19 +05:00
Matt Hill
a2b1968d6e Merge pull request #2632 from Start9Labs/tables
feat: add mobile view for all the tables
2024-07-10 11:49:24 -06:00
waterplea
398eb13a7f chore: remove test 2024-07-10 16:01:57 +05:00
waterplea
956c8a8e03 chore: more comments 2024-07-10 10:35:21 +05:00
waterplea
6aba166c82 chore: address comments 2024-07-09 13:45:04 +05:00
waterplea
fd7c7ea6b7 chore: compact cards 2024-07-05 18:27:40 +05:00
waterplea
d85e621bb3 chore: address comments 2024-07-05 18:23:18 +05:00
Matt Hill
25801f374c Merge pull request #2638 from Start9Labs/update/marketplace-for-brochure
misc fixes and backwards compatibility with new registry types for brochure
2024-06-28 11:52:06 -06:00
Lucy Cifferello
8fd2d0b35c Merge remote-tracking branch 'origin' into update/marketplace-for-brochure 2024-06-26 16:29:15 -04:00
Lucy Cifferello
dd196c0e11 remove mismatched type for now 2024-06-25 18:00:56 -04:00
Lucy Cifferello
6e2cf8bb3f fix version sorting and icon display for marketplace 2024-06-24 17:49:57 -04:00
Lucy Cifferello
b8eb8a90a5 fix hero icon 2024-06-07 14:59:43 -04:00
Lucy Cifferello
bd4d89fc21 fixes and backwards compatibility with new registry types 2024-06-06 16:57:51 -04:00
waterplea
6234391229 feat: add mobile view for all the tables
Signed-off-by: waterplea <alexander@inkin.ru>
2024-06-03 15:10:49 +05:00
Matt Hill
206c185a3b Merge pull request #2631 from Start9Labs/sections
chore: add sections
2024-05-30 06:29:02 -06:00
waterplea
7689cbbe0d chore: add sections
Signed-off-by: waterplea <alexander@inkin.ru>
2024-05-30 12:04:38 +01:00
Matt Hill
b57a9351b3 Merge pull request #2629 from Start9Labs/navigation
refactor: change navigation
2024-05-28 21:18:44 -06:00
waterplea
f0ae9e21ae refactor: change navigation
Signed-off-by: waterplea <alexander@inkin.ru>
2024-05-28 13:20:25 +01:00
Matt Hill
9510c92288 Merge pull request #2626 from Start9Labs/comments
chore: address comments
2024-05-21 15:04:13 -06:00
waterplea
755f3f05d8 chore: input-file on mobile 2024-05-21 10:29:29 +01:00
waterplea
5d8114b475 chore: address comments 2024-05-20 21:59:46 +01:00
Matt Hill
85b39ecf99 Merge pull request #2621 from Start9Labs/paths
chore: types imports
2024-05-16 07:03:56 -06:00
waterplea
230838c22b chore: types imports 2024-05-16 12:28:59 +01:00
Matt Hill
a7bfcdcb01 Merge pull request #2618 from Start9Labs/service
refactor: change service page to the new design
2024-05-14 10:01:27 -06:00
Matt Hill
47ff630c55 fix dependency errors and navigation 2024-05-14 07:02:21 -06:00
waterplea
70dc53bda7 chore: add backups info 2024-05-14 11:34:14 +01:00
waterplea
7e1b433c17 refactor: change service page to the new design 2024-05-12 16:53:01 +01:00
Matt Hill
ec878defab Merge pull request #2604 from Start9Labs/metrics
feat: implement metrics on the dashboard
2024-04-26 09:07:45 -06:00
waterplea
1786b70e14 chore: add todo 2024-04-26 18:06:19 +03:00
waterplea
7f525fa7dc chore: fix comments 2024-04-26 13:17:25 +03:00
waterplea
8b89e03999 feat: implement metrics on the dashboard 2024-04-18 13:45:35 +07:00
Matt Hill
2693b9a42d Merge pull request #2598 from Start9Labs/fix/preview-loader
fix preview loader
2024-04-09 20:03:43 -06:00
Matt Hill
6b336b7b2f Merge pull request #2591 from Start9Labs/ionic
refactor: completely remove ionic
2024-04-09 14:28:24 -06:00
Lucy Cifferello
3c0e77241d fix preview loader 2024-04-09 13:29:54 -04:00
Lucy Cifferello
87461c7f72 fix gap space 2024-04-09 12:42:28 -04:00
Lucy Cifferello
a67f2b4976 add back variables 2024-04-09 12:42:11 -04:00
waterplea
8594781780 refactor: completely remove ionic 2024-04-09 12:45:47 +07:00
Matt Hill
b2c8907635 Merge pull request #2590 from Start9Labs/update/marketplace-shared
update marketplace/shared to sync with next/major and brochure
2024-04-08 11:46:02 -06:00
Lucy Cifferello
05f4df1a30 fix changing registry 2024-04-08 10:33:02 -04:00
Lucy Cifferello
35fe06a892 fix registry connect styles and remove view complete listing link 2024-04-05 16:56:06 -04:00
Lucy Cifferello
cd933ce6e4 fix sideload display and other misc style adjustments 2024-04-05 14:14:23 -04:00
Lucy Cifferello
0b93988450 feedback fixes 2024-04-05 09:26:32 -04:00
Lucy Cifferello
12a323f691 fix dependency icon and layout 2024-04-03 12:35:27 -04:00
Lucy Cifferello
9c4c211233 fix preview versions, icons, and misc styling 2024-04-03 12:34:46 -04:00
Lucy Cifferello
74ba68ff2c fix scroll areas and side menu 2024-04-03 12:32:57 -04:00
Lucy Cifferello
7273b37c16 cleanup dep item for new type 2024-04-01 18:10:16 -04:00
Lucy Cifferello
0d4ebffc0e marketplace release fixes and refactors 2024-04-01 18:09:28 -04:00
Matt Hill
352b2fb4e7 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2024-04-01 15:19:33 -06:00
Matt Hill
6e6ef57303 Merge pull request #2339 from Start9Labs/rebase/feat/domains
Most things FE 040
2024-04-01 15:12:02 -06:00
Matt Hill
b80e41503f Merge branch 'next/minor' of github.com:Start9Labs/start-os into rebase/feat/domains 2024-03-30 21:14:53 -06:00
Alex Inkin
7f28fc17ca feat: add mobile view for dashboard (#2581) 2024-03-30 08:48:36 -06:00
Matt Hill
70d4a0c022 Merge branch 'integration/new-container-runtime' of github.com:Start9Labs/start-os into rebase/feat/domains 2024-03-27 10:35:42 -06:00
Alex Inkin
8cfd994170 fix: address todos (#2578) 2024-03-27 10:27:51 -06:00
Matt Hill
641e829e3f better interfaces abstractions 2024-03-26 16:37:06 -06:00
Matt Hill
d202cb731d fix bugs 2024-03-25 13:31:30 -06:00
J H
4ab7300376 fix: Bringing in a building for the browser 2024-03-25 11:07:59 -06:00
Matt Hill
18cc5e0ee8 re-add ionic config 2024-03-25 10:28:49 -06:00
Matt Hill
af0cda5dbf update sdk imports 2024-03-24 13:27:13 -06:00
Matt Hill
a730a3719b Merge branch 'integration/new-container-runtime' of github.com:Start9Labs/start-os into rebase/feat/domains 2024-03-24 12:19:37 -06:00
Matt Hill
3b669193f6 refactor downstream for 036 changes (#2577)
refactor codebase for 036 changes
2024-03-24 12:12:55 -06:00
Matt Hill
22cd2e3337 Merge branch 'update/camelCase' of github.com:Start9Labs/start-os into rebase/feat/domains 2024-03-22 13:09:11 -06:00
Matt Hill
7e9d453a2c switch all fe to camelCase 2024-03-22 12:05:41 -06:00
Matt Hill
a4338b0d03 Merge branch 'integration/new-container-runtime' of github.com:Start9Labs/start-os into rebase/feat/domains 2024-03-22 10:10:55 -06:00
Lucy
2021431e2f Update/marketplace (#2575)
* make category link generic

* fix ai category display and svg icons

* fix markdown display and ansi module; cleanup

* convert tailwindcss to scss in marketplace menu component

* convert tailwindcss to scss in marketplace categories component

* convert tailwindcss to scss in marketplace item component

* update launch icon to taiga icon

* convert tailwindcss to scss in marketplace search component + cleanup

* convert tailwindcss to scss in marketplace release notes component + cleanup

* convert tailwindcss to scss in marketplace about component + cleanup

* convert tailwindcss to scss in marketplace additional component

* convert tailwindcss to scss in marketplace dependencies component + misc style fixes

* convert tailwindcss to scss in marketplace hero component + misc style fixes

* convert tailwindcss to scss in marketplace screenshots component

* convert tailwindcss to scss in portal marketplace components

* remove the rest of tailwindscss and fix reset styles

* bump shared and marketplace package versions

* misc style + build fixes

* sync package lock

* fix markdown + cleanup

* fix markdown margins and git hash size

* fix mobile zindex for hero and mobile changing categories routing link
2024-03-21 12:23:28 -06:00
Matt Hill
5e6a7e134f merge 036, everything broken 2024-03-20 13:32:57 -06:00
Alex Inkin
f4fadd366e feat: add new dashboard (#2574)
* feat: add new dashboard

* chore: comments

* fix duplicate

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2024-03-19 08:56:16 -06:00
Alex Inkin
a5b1b4e103 refactor: remove ionic from remaining places (#2565) 2024-02-27 13:15:32 -07:00
Alex Inkin
7b41b295b7 chore: refactor install and setup wizards (#2561)
* chore: refactor install and setup wizards

* chore: return tui-root
2024-02-22 06:58:01 -07:00
Matt Hill
69d5f521a5 remove tor http warnings 2024-02-16 14:12:10 -07:00
Alex Inkin
c0a55142b5 chore: refactor interfaces and remove UI routes (#2560) 2024-02-16 13:45:30 -07:00
Alex Inkin
513fb3428a feat: implement mobile header (#2559)
* feat: implement mobile header

* chore: remove remaining ties to old ui project

* chore: remove ionic from login page

* chore: address comments
2024-02-13 09:03:09 -07:00
Alex Inkin
9a0ae549f6 feat: refactor logs (#2555)
* feat: refactor logs

* chore: comments

* feat: add system logs

* feat: update shared logs
2024-02-05 19:26:00 -07:00
Lucy
4410d7f195 update angular in shared (#2556) 2024-01-31 11:25:25 -05:00
Alex Inkin
92aa70182d refactor: implement breadcrumbs (#2552) 2024-01-22 21:32:11 -05:00
Alex Inkin
90f5864f1e refactor: finalize new portal (#2543) 2023-12-22 16:22:16 -05:00
Alex Inkin
e47f126bd5 feat(portal): refactor marketplace for new portal (#2539)
* feat(portal): refactor marketplace for new portal

* fix background position

* chore: refactor sidebar

* chore: small fix

---------

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2023-12-18 16:43:59 -05:00
Alex Inkin
ea6f70e3c5 chore: migrate to Angular 17 (#2538)
* chore: migrate to Angular 17

* chore: update
2023-12-11 07:06:55 -07:00
Lucy
0469aab433 Feature/marketplace redesign (#2395)
* wip

* update marketplace categories styling

* update logo icons

* add sort pipe

* update search component styling

* clean up categories component

* cleanup and remove unnecessary sort pipe

* query packages in selected category

* fix search styling

* add reg icon and font, adjust category styles

* fix build from rebasing integration/refactors

* adjust marketplace types for icon with store data, plus formatting

* formatting

* update categories and search

* hover styling for categories

* category styling

* refactor for category as a behavior subject

* more category styling

* base functionality with new marketplace components

* styling cleanup

* misc style fixes and fix category selection from package page

* fixes from review feedback

* add and style additional details

* implement release notes modal

* fix menu when on service show page mobile to display change marketplace

* style and responsiveness fixes

* rename header to sidebar

* input icon config to sidebar

* add mime type pipe and type fn

* review feedback fixes

* skeleton text, more abstraction

* reorder categories, clean up a little

* audit sidebar, categories, store-icon, marketplace-sidebar, search

* finish code cleanup and fix few bugs

* misc fixes and cleanup

* fix broken styles and markdown

* bump shared marketplace version

* more cleanup

* sync package lock

* rename sidebar component to menu

* wip preview sidebar

* sync package lock

* breakout package show elements into components

* link to brochure in preview; custom taiga button styles

* move marketplace preview component into ui; open preview when viewing service in marketplace

* sync changes post file structure rename

* further cleanup

* create service for sidebar toggle and cleanup marketplace components

* bump shared marketplace version

* bump shared for new images needed for brochure marketplace

* cleanup

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2023-12-08 13:12:38 -07:00
Alex Inkin
ad13b5eb4e feat(portal): refactor settings (#2536)
* feat(portal): refactor settings

* chore: refactor

* small updates

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2023-12-08 12:19:33 -07:00
Alex Inkin
7324a4973f feat(portal): add notifications sidebar (#2516)
* feat(portal): add notifications sidebar

* chore: add service

* chore: simplify style

* chore: fix comments

* WIP, moving notifications to patch-db

* revamp notifications

* chore: small adjustments

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2023-12-08 09:12:03 -07:00
Matt Hill
8bc93d23b2 Merge branch 'next/major' of github.com:Start9Labs/start-os into rebase/feat/domains 2023-11-21 21:18:27 -07:00
Matt Hill
c708b685e1 fix type error 2023-11-20 15:30:53 -07:00
Aiden McClelland
cbde91744f Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-20 13:21:04 -07:00
Aiden McClelland
147e24204b Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-16 15:15:43 -07:00
Matt Hill
13c50e428f Merge branch 'next/major' of github.com:Start9Labs/start-os into rebase/feat/domains 2023-11-13 17:13:32 -07:00
Matt Hill
8403ccd3da fix ts errors 2023-11-13 16:58:55 -07:00
Aiden McClelland
e92bd61545 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-13 16:36:59 -07:00
Aiden McClelland
8215e0221a Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-13 16:28:28 -07:00
Aiden McClelland
4b44d6fb83 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-13 16:25:24 -07:00
Matt Hill
0ae3e83ce4 fix makefile 2023-11-13 16:21:14 -07:00
Matt Hill
f4b573379d Merge branch 'next/major' of github.com:Start9Labs/start-os into rebase/feat/domains 2023-11-13 16:14:31 -07:00
Matt Hill
862ca375ee rename frontend to web 2023-11-13 15:59:16 -07:00
Aiden McClelland
530de6741b Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-13 15:41:58 -07:00
Aiden McClelland
35c1ff9014 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-13 14:59:16 -07:00
Aiden McClelland
3f4caed922 Merge branch 'next/minor' of github.com:Start9Labs/start-os into next/major 2023-11-09 16:33:15 -07:00
Matt Hill
09303ab2fb make it run 2023-11-09 16:04:18 -07:00
Matt Hill
df1ac8e1e2 makefile too 2023-11-09 15:36:20 -07:00
Matt Hill
7a55c91349 rebased and compiling again 2023-11-09 15:35:47 -07:00
Alex Inkin
c491dfdd3a feat: use routes for service sections (#2502)
* feat: use routes for service sections

* chore: fix comment
2023-11-09 12:23:58 -07:00
Matt Hill
d9cc21f761 merge from master and fix typescript errors 2023-11-08 15:44:05 -07:00
Alex Inkin
06207145af refactor: refactor sideload page (#2475)
* refactor: refactor sideload page

* chore: improve ux

* chore: update

* chore: update
2023-11-06 12:05:05 -05:00
Alex Inkin
b195e3435f fix: fix discussed issues (#2467)
* fix: fix discussed issues

* chore: fix issues

* fix package lock

---------

Co-authored-by: Matt Hill <mattnine@protonmail.com>
2023-10-22 16:49:05 -06:00
Aiden McClelland
34b4577c0b Merge branch 'next' of github.com:Start9Labs/start-os into rebase/integration/refactors 2023-10-18 17:55:09 -06:00
Alex Inkin
8034e5bbcb refactor: refactor updates page to get rid of ionic (#2459) 2023-10-15 21:31:34 -06:00
Alex Inkin
df7a30bd14 refactor: refactor backups page to get rid of ionic (#2446) 2023-10-13 09:04:20 -06:00
Matt Hill
d9dfacaaf4 fe fixes after merge 2023-09-28 14:10:04 -06:00
Aiden McClelland
d43767b945 Merge branch 'next' of github.com:Start9Labs/start-os into rebase/integration/refactors 2023-09-28 13:27:41 -06:00
Alex Inkin
cb36754c46 refactor: refactor service page to get rid of ionic (#2421) 2023-09-26 12:37:46 -06:00
Alex Inkin
7e18aafe20 feat(portal): add scrolling to the desktop (#2410)
* feat(portal): add scrolling to the desktop

* chore: comments

* chore: fix
2023-09-13 12:52:25 -06:00
Matt Hill
f7b079b1b4 start menu lol (#2389)
* start menu lol

* add icons, abstract header menu

* chore: few tweaks

---------

Co-authored-by: waterplea <alexander@inkin.ru>
2023-08-14 07:39:07 -06:00
Aiden McClelland
72ffedead7 fix build 2023-08-08 12:03:54 -06:00
Matt Hill
cf3a501562 Merge branch 'rebase/integration/refactors' of github.com:Start9Labs/start-os into rebase/feat/domains 2023-08-08 10:46:07 -06:00
Matt Hill
7becdc3034 remove qr from deprecated component 2023-08-08 10:43:16 -06:00
Aiden McClelland
f0d599781d Merge branch 'rebase/integration/refactors' of github.com:Start9Labs/start-os into rebase/feat/domains 2023-08-08 10:17:28 -06:00
Aiden McClelland
3386105048 Merge branch 'next' of github.com:Start9Labs/start-os into rebase/integration/refactors 2023-08-08 10:08:59 -06:00
Lucy
3b8fb70db1 Fix/shared module build (#2385)
* update shared version so latest can be pulled into brochure marketplace project

* updated shared to work with new brochure marketplace
2023-08-08 09:52:00 -06:00
Matt Hill
c3ae146580 proxies (#2376)
* proxies

* OS outbound proxy. ugly, needs work

* abstract interface address management

* clearnet and outbound proxies for services

* clean up

* router tab

* smart launching of UIs

* update sdk types

* display outbound proxy on service show and rework menu
2023-08-07 15:14:03 -06:00
Alex Inkin
0d079f0d89 feat(portal): implement drag and drop add/remove (#2383) 2023-08-06 13:30:53 -06:00
Alex Inkin
9f5a90ee9c feat(portal): implement adding/removing to desktop (#2374)
* feat(portal): implement adding/removing to desktop, reordering desktop items, baseline for system utils

* chore: fix comments

---------

Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
2023-07-27 12:51:15 -06:00
Matt Hill
a5307fd8cc modernize reset password flow for TUI 2023-07-26 11:53:29 -06:00
Matt Hill
180589144a fix server show type errors 2023-07-26 11:39:20 -06:00
Matt Hill
d9c1867bd7 Merge branch 'rebase/integration/refactors' of github.com:Start9Labs/start-os into rebase/feat/domains 2023-07-26 11:01:56 -06:00
Matt Hill
da37d649ec Merge branch 'master' of github.com:Start9Labs/start-os into rebase/integration/refactors 2023-07-26 10:48:45 -06:00
Alex Inkin
4204b4af90 feat(portal): basis for drawer and cards (#2370) 2023-07-20 18:17:32 -06:00
Mariusz Kogen
941650f668 Fix motd 2023-07-16 18:39:26 +02:00
Alex Inkin
9c0c6c1bd6 feat: basis for portal (#2352) 2023-07-16 09:50:56 -06:00
Matt Hill
bd0ddafcd0 round out adding new domains 2023-07-14 12:53:26 -06:00
Matt Hill
19f5e92a74 drop ipv4 from specific domain config 2023-07-12 17:09:49 -06:00
BluJ
3202c38061 chore: Remove some of the things that are missing 2023-07-12 14:01:56 -06:00
Alex Inkin
e35a8c942b fix: libraries build (#2346) 2023-07-11 17:28:05 -04:00
Matt Hill
31811eb91e fix errors in shared and marketplace 2023-07-11 12:53:42 -06:00
Alex Inkin
b9316a4112 Update angular (#2343)
* chore: update to Angular 15

* chore: update to Angular 16

* chore: update Taiga UI
2023-07-10 13:35:53 -06:00
BluJ
b7abd878ac chore: Use send_modify instead of send 2023-07-10 10:36:16 -06:00
Matt Hill
38c2c47789 Feat/domains
update FE types and unify sideload page with marketplace show

begin popover for UI launch select

update node version for github workflows

fix type errors

eager load more components

fix mocks for types

recalculate updates badge on pkg uninstall

chore: break form-object file structure

files for config

finish file upload API and implement for config

chore: break down form-object by type, part 1

remove NEW from config

comment entire setTimeout for new

generic form options

chore: break down form-object by type, part 2

headers for enums and unions

implement select and multiselect for config

update union types and camel case for specs

implement textarea config value

inputspec and required instead of nullable

remove subtype from list spec

update start-sdk

bump start-sdk

feat: use Taiga UI for config modal (#2250)

* feat: use Taiga UI for config modal

* chore: finish remaining changes

* chore: address comments

* bump sdk version

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

update package lock

update to sdk 20 and fix types

chore: update Taiga UI and migrate some more forms (#2252)

update form to latest sdk

validate length for textarea too

chore: accommodate new changes to the specs (#2254)

* chore: accommodate new changes to the specs

* chore: fix error

* chore: fix error

feat: add input color (#2257)

* feat: add input color

* patterns will always be there

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

chore: properly type pattern error

update to latest sdk

Add sans-serif font fallback (#2263)

* Add sans-serif font fallback

* Update frontend readme start scripts

feat: add datetime spec support (#2264)

Wifi optional (#2249)

* begin work

* allow enable and disable wifi

* nice styling

* done except for popover not dismissing

* update wifi.ts

* address comments

Feat/automated backups (#2142)

* initial restructuring

* very cool

* new structure in place

* delete unnecessary T

* down the rabbit hole

* getting better

* dont like it

* nice

* very nice

* sessions select all

* nice

* backup runs

* fix targets and more

* small improvements

* mostly working

* address PR comments

* fix error

* delete issue with merge

* fix checkboxes and add API for deleting backup runs

* better styling for checkboxes

* small button in ssh page too

* complete multiple UI launcher

* fix actions

* present error toast too

* fix target forms

Add logs window to setup wizard loading screen (#2076)

* add logs window to setup wizard loading screen

* fix type error

* Update frontend/projects/setup-wizard/src/app/services/api/live-api.service.ts

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

---------

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

statically type server metrics and use websocket (#2124)

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

Feat/external-smtp (#1791)

* UI for EOS smtp, missing API layer

* implement api

* fix errors

* switch to external smtp creds

* fix things up

* fix types

* update types for new forms

* feat: add new form to emails and marketplace (#2268)

* import tuilet module

* feat: get rid of old form completely (#2270)

* move to builder spec and delete developer menu

* update sdk

* tiny

* getting better

* working

* done

* feat: add step to number config

* chore: small fixes

* update SDK and step for numbers

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>

latest sdk, fix build

update SDK for better disabled props

feat: implement `disabled`, `immutable` and `generate` (#2280)

* feat: implement `disabled`, `immutable` and `generate`

* chore: remove unnecessary code

* chore: add generate to textarea and implement immutable

* no generate for textarea

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

update lockfile

refactor: extract loading status to shared library (#2282)

* refactor: extract loading status to shared library

* chore: remove inline style

refactor: break routing down to apps level (#2285)

closes #2212 and closes #2214

Feat/credentials (#2290)

add credentials and remove properties

refactor: break ui up further down (#2292)

* refactor: break ui up further down

* permit loading even when authed

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

update patchdb for package compatibility fixes

fix file structure

WIP

finish rebase

mvp complete

port forwards mvp

looking good

cleaner system page

move experimental features

manual port overrides

better info headers for jobs pages

refactor: move diagnostic-ui app under ui route (#2306)

* refactor: move diagnostic-ui app under ui route

* chore: hide navigation

* chore: remove ionic from diagnostic

* fix navbar showing on login

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

chore: partially remove ionic modals and loaders (#2308)

* chore: partially remove ionic modals and loaders

* change to snake

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

better session data fetching

abstract store icon component to shared marketplace project (#2311)

* abstract store icon component to shared marketplace project

* better than using a pipe

* minor cleanup

* chore: fix missing node types in libraries

* typo

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: waterplea <alexander@inkin.ru>

refactor: continue to get rid of ionic infrastructure (#2325)

refactor: finish removing ionic entities: (#2333)

* refactor: finish removing ionic entities:

ToastController
ErrorToastService
ModalController
AlertController
LoadingController

* chore: rollback testing code

* chore: fix comments

* minor form change

* chore: fix comments

* update clearnet address parts

* move around patchDB

* chore: fix comments

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

fixup after rebase
2023-07-07 12:59:31 -06:00
Aiden McClelland
c03778ec8b fixup after rebase 2023-07-07 09:53:03 -06:00
BluJ
29b0850a94 fix: Add in the sub repos 2023-07-06 15:16:17 -06:00
Matt Hill
712fde46eb fix file structure 2023-07-06 15:12:02 -06:00
Lucy Cifferello
c2e79ca5a7 update patchdb for package compatibility fixes 2023-07-06 15:12:02 -06:00
Alex Inkin
c3a52b3989 refactor: break ui up further down (#2292)
* refactor: break ui up further down

* permit loading even when authed

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2023-07-06 15:12:02 -06:00
Matt Hill
7213d82f1b Feat/credentials (#2290)
add credentials and remove properties
2023-07-06 15:12:02 -06:00
Matt Hill
5bcad69cf7 closes #2212 and closes #2214 2023-07-06 15:12:02 -06:00
Alex Inkin
c9a487fa4d refactor: break routing down to apps level (#2285) 2023-07-06 15:11:13 -06:00
Alex Inkin
3804a46f3b refactor: extract loading status to shared library (#2282)
* refactor: extract loading status to shared library

* chore: remove inline style
2023-07-06 15:11:13 -06:00
Matt Hill
52c0bb5302 update lockfile 2023-07-06 15:10:56 -06:00
Alex Inkin
8aa19e6420 feat: implement disabled, immutable and generate (#2280)
* feat: implement `disabled`, `immutable` and `generate`

* chore: remove unnecessary code

* chore: add generate to textarea and implement immutable

* no generate for textarea

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2023-07-06 15:10:56 -06:00
Matt Hill
4d1c7a3884 update SDK for better disabled props 2023-07-06 15:10:56 -06:00
Matt Hill
25f2c057b7 latest sdk, fix build 2023-07-06 15:10:56 -06:00
Matt Hill
010be05920 Feat/external-smtp (#1791)
* UI for EOS smtp, missing API layer

* implement api

* fix errors

* switch to external smtp creds

* fix things up

* fix types

* update types for new forms

* feat: add new form to emails and marketplace (#2268)

* import tuilet module

* feat: get rid of old form completely (#2270)

* move to builder spec and delete developer menu

* update sdk

* tiny

* getting better

* working

* done

* feat: add step to number config

* chore: small fixes

* update SDK and step for numbers

---------

Co-authored-by: Alex Inkin <alexander@inkin.ru>
2023-07-06 15:10:56 -06:00
Matt Hill
4c465850a2 lockfile 2023-07-06 15:10:56 -06:00
Aiden McClelland
8313dfaeb9 statically type server metrics and use websocket (#2124)
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2023-07-06 15:10:56 -06:00
Matt Hill
873f2b2814 Add logs window to setup wizard loading screen (#2076)
* add logs window to setup wizard loading screen

* fix type error

* Update frontend/projects/setup-wizard/src/app/services/api/live-api.service.ts

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

---------

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
2023-07-06 15:10:56 -06:00
Matt Hill
e53c90f8f0 Feat/automated backups (#2142)
* initial restructuring

* very cool

* new structure in place

* delete unnecessary T

* down the rabbit hole

* getting better

* dont like it

* nice

* very nice

* sessions select all

* nice

* backup runs

* fix targets and more

* small improvements

* mostly working

* address PR comments

* fix error

* delete issue with merge

* fix checkboxes and add API for deleting backup runs

* better styling for checkboxes

* small button in ssh page too

* complete multiple UI launcher

* fix actions

* present error toast too

* fix target forms
2023-07-06 15:10:56 -06:00
Matt Hill
9499ea8ca9 Wifi optional (#2249)
* begin work

* allow enable and disable wifi

* nice styling

* done except for popover not dismissing

* update wifi.ts

* address comments
2023-07-06 15:10:43 -06:00
Alex Inkin
f6c09109ba feat: add datetime spec support (#2264) 2023-07-06 15:10:43 -06:00
Benjamin B
273b5768c4 Add sans-serif font fallback (#2263)
* Add sans-serif font fallback

* Update frontend readme start scripts
2023-07-06 15:10:43 -06:00
Matt Hill
ee13cf7dd9 update to latest sdk 2023-07-06 15:10:43 -06:00
waterplea
fecbae761e chore: properly type pattern error 2023-07-06 15:10:43 -06:00
Alex Inkin
e0ee89bdd9 feat: add input color (#2257)
* feat: add input color

* patterns will always be there

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2023-07-06 15:10:43 -06:00
Alex Inkin
833c1f22a3 chore: accommodate new changes to the specs (#2254)
* chore: accommodate new changes to the specs

* chore: fix error

* chore: fix error
2023-07-06 15:10:43 -06:00
Matt Hill
6fed6c8d30 validate length for textarea too 2023-07-06 15:10:43 -06:00
Matt Hill
94cdaf5314 update form to latest sdk 2023-07-06 15:10:43 -06:00
Alex Inkin
f83ae27352 chore: update Taiga UI and migrate some more forms (#2252) 2023-07-06 15:10:43 -06:00
Matt Hill
6badf047c3 update to sdk 20 and fix types 2023-07-06 15:10:43 -06:00
Matt Hill
47de9ad15f update package lock 2023-07-06 15:10:43 -06:00
Alex Inkin
09b91cc663 feat: use Taiga UI for config modal (#2250)
* feat: use Taiga UI for config modal

* chore: finish remaining changes

* chore: address comments

* bump sdk version

---------

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2023-07-06 15:10:43 -06:00
Matt Hill
ded16549f7 bump start-sdk 2023-07-06 15:10:43 -06:00
Matt Hill
c89e47577b update start-sdk 2023-07-06 15:10:43 -06:00
Matt Hill
bb50beb7ab remove subtype from list spec 2023-07-06 15:10:43 -06:00
Matt Hill
e4cd4d64d7 inputspec and required instead of nullable 2023-07-06 15:10:43 -06:00
Matt Hill
5675fc51a0 implement textarea config value 2023-07-06 15:10:43 -06:00
Matt Hill
c7438c4aff update union types and camel case for specs 2023-07-06 15:10:43 -06:00
Matt Hill
4a6a3da36c implement select and multiselect for config 2023-07-06 15:10:43 -06:00
Matt Hill
a657c332b1 headers for enums and unions 2023-07-06 15:10:43 -06:00
waterplea
cc9cd3fc14 chore: break down form-object by type, part 2 2023-07-06 15:10:43 -06:00
Matt Hill
234258a077 generic form options 2023-07-06 15:10:43 -06:00
Matt Hill
13cda80ee6 comment entire setTimeout for new 2023-07-06 15:10:43 -06:00
Matt Hill
f6e142baf5 remove NEW from config 2023-07-06 15:10:43 -06:00
waterplea
ddf1f9bcd5 chore: break down form-object by type, part 1 2023-07-06 15:10:43 -06:00
Matt Hill
aa950669f6 finish file upload API and implement for config 2023-07-06 15:10:43 -06:00
Matt Hill
dacd5d3e6b files for config 2023-07-06 15:09:48 -06:00
waterplea
e76ccba2f7 chore: break form-object file structure 2023-07-06 15:09:48 -06:00
Matt Hill
3933819d53 recalculate updates bad on pkg uninstall 2023-07-06 15:09:48 -06:00
Matt Hill
99019c2b1f fix mocks for types 2023-07-06 15:09:48 -06:00
Matt Hill
4bf5eb398b eager load more components 2023-07-06 15:09:48 -06:00
Matt Hill
dbfbac62c0 fix type errors 2023-07-06 15:09:48 -06:00
Lucy Cifferello
7685293da4 update node version for github workflows 2023-07-06 15:09:48 -06:00
Matt Hill
ee9c328606 begin popover for UI launch select 2023-07-06 15:09:48 -06:00
Matt Hill
cb7790ccba update FE types and unify sideload page with marketplace show 2023-07-06 15:09:48 -06:00
Matt Hill
6556fcc531 fix types 2023-07-06 15:08:30 -06:00
Aiden McClelland
178391e7b2 integration/refactors
wip: Refactoring the service

-> Made new skeleton
-> Added service manager
-> Manager Refactored
-> Cleanup
-> Add gid struct
-> remove synchronizer
-> Added backup into manager
-> Fix the configure signal not send
-> Fixes around backup and sync

wip: Moved over the config into the service manager

js effect for subscribing to config

js effect for subscribing to config

fix errors

chore: Fix some things in the manager for clippy

add interfaces from manifest automatically

make OsApi manager-based

wip: Starting down the bind for the effects

todo: complete a ip todo

chore: Fix the result type on something

todo: Address returning

chore: JS with callbacks

chore: Add in the chown and permissions

chore: Add in the binds and unbinds in

feat: Add in the ability to get configs

makefile changes

add start/stop/restart to effects

config hooks

fix: add a default always to the get status

chore: Only do updates when the thing is installed.

use nistp256 to satisfy firefox

use ed25519 if available

chore: Make the thing buildable for testing

chore: Add in the debugging

fix ip signing

chore: Remove the bluj tracing

fix SQL error

chore: Fix the build

update prettytable to fix segfault

Chore: Make these fn's instead of always ran.

chore: Fix the testing

fix: The stopping/ restarting service

fix: Fix the restarting.

remove current-dependents, derive instead

remove pointers from current-dependencies

remove pointers and system pointers from FE

v0.3.4

remove health checks from manifest

remove "restarting" bool on "starting" status

remove restarting attr

update makefile

fix

add efi support

fix efi

add redirect if connecting to https over http

clean up

lan port forwarding

add `make update` and `make update-overlay`

fix migration

more protections

fix: Fix a lint

chore: remove the limit on the long-running

fix: Starting sometimes.

fix: Make it so the stop of the main works

fix: Bind local and tor with package.

wip: envs

closes #2152, closes #2155, closes #2157

fix TS error

import config types from sdk

update package.json
2023-07-06 15:08:30 -06:00
Aiden McClelland
18922a1c6d fixup after rebase 2023-07-06 15:08:30 -06:00
BluJ
5e9e26fa67 fix: Fix a lint
chore: remove the limit on the long-running

fix: Starting sometimes.

fix: Make it so the stop of the main works

fix: Bind local and tor with package.

wip: envs

fix TS error

import config types from sdk

update package.json
2023-07-06 15:08:30 -06:00
Aiden McClelland
f5430f9151 lan port forwarding 2023-07-06 15:08:30 -06:00
Aiden McClelland
4dfdf2f92f v0.3.4
remove health checks from manifest

remove "restarting" bool on "starting" status

remove restarting attr
2023-07-06 15:08:30 -06:00
Matt Hill
e4d283cc99 remove current-dependents, derive instead
remove pointers from current-dependencies

remove pointers and system pointers from FE
2023-07-06 15:08:30 -06:00
Aiden McClelland
8ee64d22b3 add start/stop/restart to effects
fix: add a default always to the get status

chore: Only do updates when the thing is installed.

chore: Make the thing buildable for testing

chore: Add in the debugging

chore: Remove the bluj tracing

chore: Fix the build

Chore: Make these fn's instead of always ran.

chore: Fix the testing

fix: The stopping/ restarting service

fix: Fix the restarting.
2023-07-06 15:08:30 -06:00
Aiden McClelland
10e3e80042 makefile changes 2023-07-06 15:08:30 -06:00
BluJ
f77a208e2c feat: Add in the ability to get configs
config hooks
2023-07-06 15:07:53 -06:00
BluJ
9366dbb96e wip: Starting down the bind for the effects
todo: complete a ip todo

chore: Fix the result type on something

todo: Address returning

chore: JS with callbacks

chore: Add in the chown and permissions

chore: Add in the binds and unbinds in
2023-07-06 15:07:53 -06:00
Aiden McClelland
550b17552b make OsApi manager-based 2023-07-06 15:07:53 -06:00
Aiden McClelland
bec307d0e9 js effect for subscribing to config
fix errors

chore: Fix some things in the manager for clippy
2023-07-06 15:07:53 -06:00
BluJ
93c751f6eb wip: Refactoring the service
-> Made new skeleton
-> Added service manager
-> Manager Refactored
-> Cleanup
-> Add gid struct
-> remove synchronizer
-> Added backup into manager
-> Fix the configure signal not send
-> Fixes around backup and sync

wip: Moved over the config into the service manager
2023-07-06 15:07:53 -06:00
1697 changed files with 84501 additions and 65274 deletions


@@ -28,6 +28,7 @@ on:
- aarch64
- aarch64-nonfree
- raspberrypi
- riscv64
deploy:
type: choice
description: Deploy
@@ -45,7 +46,7 @@ on:
- next/*
env:
NODEJS_VERSION: "20.16.0"
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: '${{ fromJson(format(''["{0}", ""]'', github.event.inputs.environment || ''dev''))[github.event.inputs.environment == ''NONE''] }}'
jobs:
@@ -62,6 +63,7 @@ jobs:
"aarch64": ["aarch64"],
"aarch64-nonfree": ["aarch64"],
"raspberrypi": ["aarch64"],
"riscv64": ["riscv64"],
"ALL": ["x86_64", "aarch64"]
}')[github.event.inputs.platform || 'ALL']
}}
@@ -93,8 +95,24 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Configure sccache
uses: actions/github-script@v7
with:
script: |
core.exportVariable('ACTIONS_RESULTS_URL', process.env.ACTIONS_RESULTS_URL || '');
core.exportVariable('ACTIONS_RUNTIME_TOKEN', process.env.ACTIONS_RUNTIME_TOKEN || '');
- name: Use Beta Toolchain
run: rustup default beta
- name: Setup Cross
run: cargo install cross --git https://github.com/cross-rs/cross
- name: Make
run: make ARCH=${{ matrix.arch }} compiled-${{ matrix.arch }}.tar
env:
SCCACHE_GHA_ENABLED: on
SCCACHE_GHA_VERSION: 0
- uses: actions/upload-artifact@v4
with:
@@ -129,6 +147,7 @@ jobs:
"aarch64": "buildjet-8vcpu-ubuntu-2204-arm",
"aarch64-nonfree": "buildjet-8vcpu-ubuntu-2204-arm",
"raspberrypi": "buildjet-8vcpu-ubuntu-2204-arm",
"riscv64": "buildjet-8vcpu-ubuntu-2204",
}')[matrix.platform]
)
)[github.event.inputs.runner == 'fast']
@@ -142,6 +161,7 @@ jobs:
"aarch64": "aarch64",
"aarch64-nonfree": "aarch64",
"raspberrypi": "aarch64",
"riscv64": "riscv64",
}')[matrix.platform]
}}
steps:


@@ -11,7 +11,7 @@ on:
- next/*
env:
NODEJS_VERSION: "20.16.0"
NODEJS_VERSION: "24.11.0"
ENVIRONMENT: dev-unstable
jobs:
@@ -27,5 +27,11 @@ jobs:
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Use Beta Toolchain
run: rustup default beta
- name: Setup Cross
run: cargo install cross --git https://github.com/cross-rs/cross
- name: Build And Run Tests
run: make test

.gitignore vendored

@@ -1,8 +1,5 @@
.DS_Store
.idea
system-images/binfmt/binfmt.tar
system-images/compat/compat.tar
system-images/util/util.tar
/*.img
/*.img.gz
/*.img.xz


@@ -1,7 +1,6 @@
# Contributing to StartOS
This guide is for contributing to StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://docs.start9.com/latest/developer-docs/). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).
This guide is for contributing to StartOS. If you are interested in packaging a service for StartOS, visit the [service packaging guide](https://docs.start9.com/latest/packaging-guide/). If you are interested in promoting, providing technical support, creating tutorials, or helping in other ways, please visit the [Start9 website](https://start9.com/contribute).
## Collaboration
@@ -13,64 +12,77 @@ This guide is for contributing to the StartOS. If you are interested in packagin
```bash
/
├── assets/
├── container-runtime/
├── core/
├── build/
├── debian/
├── web/
├── image-recipe/
├── patch-db
└── system-images/
└── sdk/
```
#### assets
screenshots for the StartOS README
#### container-runtime
A NodeJS program that dynamically loads maintainer scripts and communicates with the OS to manage packages
#### core
An API, daemon (startd), CLI (start-cli), and SDK (start-sdk) that together provide the core functionality of StartOS.
An API, daemon (startd), and CLI (start-cli) that together provide the core functionality of StartOS.
#### build
Auxiliary files and scripts to include in deployed StartOS images
#### debian
Maintainer scripts for the StartOS Debian package
#### web
Web UIs served under various conditions and used to interact with StartOS APIs.
#### image-recipe
Scripts for building StartOS images
#### patch-db (submodule)
A diff-based data store used to synchronize data between the web interfaces and the server.
#### system-images
Docker images that assist with creating backups.
#### sdk
A TypeScript SDK for building StartOS packages
## Environment Setup
#### Clone the StartOS repository
```sh
git clone https://github.com/Start9Labs/start-os.git
git clone https://github.com/Start9Labs/start-os.git --recurse-submodules
cd start-os
```
#### Load the PatchDB submodule
```sh
git submodule update --init --recursive
```
#### Continue to your project of interest for additional instructions:
- [`core`](core/README.md)
- [`web-interfaces`](web-interfaces/README.md)
- [`build`](build/README.md)
- [`patch-db`](https://github.com/Start9Labs/patch-db)
## Building
This project uses [GNU Make](https://www.gnu.org/software/make/) to build its components. To build any specific component, run `make <TARGET>`, replacing `<TARGET>` with the name of the target you'd like to build.
### Requirements
- [GNU Make](https://www.gnu.org/software/make/)
- [Docker](https://docs.docker.com/get-docker/)
- [NodeJS v18.15.0](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
- [NodeJS v20.16.0](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm)
- [sed](https://www.gnu.org/software/sed/)
- [grep](https://www.gnu.org/software/grep/)
- [awk](https://www.gnu.org/software/gawk/)
@@ -79,41 +91,43 @@ This project uses [GNU Make](https://www.gnu.org/software/make/) to build its co
- [brotli](https://github.com/google/brotli)
### Environment variables
- `PLATFORM`: which platform you would like to build for. Must be one of `x86_64`, `x86_64-nonfree`, `aarch64`, `aarch64-nonfree`, `raspberrypi`
- NOTE: `nonfree` images are for including `nonfree` firmware packages in the built ISO
- NOTE: `nonfree` images are for including `nonfree` firmware packages in the built ISO
- `ENVIRONMENT`: a hyphen separated set of feature flags to enable
- `dev`: enables password ssh (INSECURE!) and does not compress frontends
- `unstable`: enables assertions that will cause errors on unexpected inconsistencies that are undesirable in production use either for performance or reliability reasons
- `docker`: use `docker` instead of `podman`
- `dev`: enables password ssh (INSECURE!) and does not compress frontends
- `unstable`: enables assertions that will cause errors on unexpected inconsistencies that are undesirable in production use either for performance or reliability reasons
- `docker`: use `docker` instead of `podman`
- `GIT_BRANCH_AS_HASH`: set to `1` to use the current git branch name as the git hash so that the project does not need to be rebuilt on each commit
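The Makefile consumes this hyphen-separated flag list with a bash regex of the form `(^|-)flag($|-)`, which matches a flag only as a whole hyphen-delimited token. A minimal sketch of that check (the helper name and example values are illustrative, not part of the build):

```shell
# Test whether a feature flag is present in a hyphen-separated ENVIRONMENT
# string such as "dev-unstable". The (^|-)flag($|-) anchoring prevents
# matching a flag as a substring of another flag (e.g. "devunstable").
has_flag() {
  [[ "$1" =~ (^|-)$2($|-) ]]
}
```

So `has_flag dev-unstable unstable` succeeds, while `has_flag devunstable unstable` does not.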
### Useful Make Targets
- `iso`: Create a full `.iso` image
- Only possible from Debian
- Not available for `PLATFORM=raspberrypi`
- Additional Requirements:
- [debspawn](https://github.com/lkhq/debspawn)
- Only possible from Debian
- Not available for `PLATFORM=raspberrypi`
- Additional Requirements:
- [debspawn](https://github.com/lkhq/debspawn)
- `img`: Create a full `.img` image
- Only possible from Debian
- Only available for `PLATFORM=raspberrypi`
- Additional Requirements:
- [debspawn](https://github.com/lkhq/debspawn)
- Only possible from Debian
- Only available for `PLATFORM=raspberrypi`
- Additional Requirements:
- [debspawn](https://github.com/lkhq/debspawn)
- `format`: Run automatic code formatting for the project
- Additional Requirements:
- [rust](https://rustup.rs/)
- Additional Requirements:
- [rust](https://rustup.rs/)
- `test`: Run automated tests for the project
- Additional Requirements:
- [rust](https://rustup.rs/)
- Additional Requirements:
- [rust](https://rustup.rs/)
- `update`: Deploy the current working project to a device over ssh as if through an over-the-air update
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- `reflash`: Deploy the current working project to a device over ssh as if using a live `iso` image to reflash it
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- `update-overlay`: Deploy the current working project to a device over ssh to the in-memory overlay without restarting it
- WARNING: changes will be reverted after the device is rebooted
- WARNING: changes to `init` will not take effect as the device is already initialized
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- WARNING: changes will be reverted after the device is rebooted
- WARNING: changes to `init` will not take effect as the device is already initialized
- Requires an argument `REMOTE` which is the ssh address of the device, i.e. `start9@192.168.122.2`
- `wormhole`: Deploy the `startbox` to a device using [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
- When the build is complete, it will emit a command to paste into the shell of the device to upgrade it
- Additional Requirements:
- [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
- `clean`: Delete all compiled artifacts
- When the build is complete, it will emit a command to paste into the shell of the device to upgrade it
- Additional Requirements:
- [magic-wormhole](https://github.com/magic-wormhole/magic-wormhole)
- `clean`: Delete all compiled artifacts
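Several of the deploy targets above (`update`, `reflash`, `update-overlay`) abort when `REMOTE` is not given. A minimal sketch of that guard as plain shell — the function name is illustrative, the error message mirrors the Makefile's:

```shell
# Require a REMOTE argument (an ssh address such as start9@192.168.122.2)
# before running any ssh-based deploy step; fail loudly otherwise.
require_remote() {
  if [ -z "$1" ]; then
    echo "Must specify REMOTE" >&2
    return 1
  fi
  echo "deploying to $1"
}
```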


@@ -25,15 +25,15 @@ docker buildx create --use
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # proceed with default installation
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash
source ~/.bashrc
nvm install 20
nvm use 20
nvm alias default 20 # this prevents your machine from reverting back to another version
nvm install 24
nvm use 24
nvm alias default 24 # this prevents your machine from reverting back to another version
```
## Cloning the repository
```sh
git clone --recursive https://github.com/Start9Labs/start-os.git --branch next/minor
git clone --recursive https://github.com/Start9Labs/start-os.git --branch next/major
cd start-os
```

Makefile

@@ -1,31 +1,45 @@
ls-files = $(shell git ls-files --cached --others --exclude-standard $1)
PROFILE = release
PLATFORM_FILE := $(shell ./check-platform.sh)
ENVIRONMENT_FILE := $(shell ./check-environment.sh)
GIT_HASH_FILE := $(shell ./check-git-hash.sh)
VERSION_FILE := $(shell ./check-version.sh)
BASENAME := $(shell ./basename.sh)
BASENAME := $(shell PROJECT=startos ./basename.sh)
PLATFORM := $(shell if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)
ARCH := $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo aarch64; else echo $(PLATFORM) | sed 's/-nonfree$$//g'; fi)
RUST_ARCH := $(shell if [ "$(ARCH)" = "riscv64" ]; then echo riscv64gc; else echo $(ARCH); fi)
REGISTRY_BASENAME := $(shell PROJECT=start-registry PLATFORM=$(ARCH) ./basename.sh)
TUNNEL_BASENAME := $(shell PROJECT=start-tunnel PLATFORM=$(ARCH) ./basename.sh)
IMAGE_TYPE=$(shell if [ "$(PLATFORM)" = raspberrypi ]; then echo img; else echo iso; fi)
WEB_UIS := web/dist/raw/ui/index.html web/dist/raw/setup-wizard/index.html web/dist/raw/install-wizard/index.html
COMPRESSED_WEB_UIS := web/dist/static/ui/index.html web/dist/static/setup-wizard/index.html web/dist/static/install-wizard/index.html
FIRMWARE_ROMS := ./firmware/$(PLATFORM) $(shell jq --raw-output '.[] | select(.platform[] | contains("$(PLATFORM)")) | "./firmware/$(PLATFORM)/" + .id + ".rom.gz"' build/lib/firmware.json)
BUILD_SRC := $(shell git ls-files build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
DEBIAN_SRC := $(shell git ls-files debian/)
IMAGE_RECIPE_SRC := $(shell git ls-files image-recipe/)
BUILD_SRC := $(call ls-files, build) build/lib/depends build/lib/conflicts $(FIRMWARE_ROMS)
IMAGE_RECIPE_SRC := $(call ls-files, image-recipe/)
STARTD_SRC := core/startos/startd.service $(BUILD_SRC)
COMPAT_SRC := $(shell git ls-files system-images/compat/)
UTILS_SRC := $(shell git ls-files system-images/utils/)
BINFMT_SRC := $(shell git ls-files system-images/binfmt/)
CORE_SRC := $(shell git ls-files core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
WEB_SHARED_SRC := $(shell git ls-files web/projects/shared) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
WEB_UI_SRC := $(shell git ls-files web/projects/ui)
WEB_SETUP_WIZARD_SRC := $(shell git ls-files web/projects/setup-wizard)
WEB_INSTALL_WIZARD_SRC := $(shell git ls-files web/projects/install-wizard)
CORE_SRC := $(call ls-files, core) $(shell git ls-files --recurse-submodules patch-db) $(GIT_HASH_FILE)
WEB_SHARED_SRC := $(call ls-files, web/projects/shared) $(call ls-files, web/projects/marketplace) $(shell ls -p web/ | grep -v / | sed 's/^/web\//g') web/node_modules/.package-lock.json web/config.json patch-db/client/dist/index.js sdk/baseDist/package.json web/patchdb-ui-seed.json sdk/dist/package.json
WEB_UI_SRC := $(call ls-files, web/projects/ui)
WEB_SETUP_WIZARD_SRC := $(call ls-files, web/projects/setup-wizard)
WEB_INSTALL_WIZARD_SRC := $(call ls-files, web/projects/install-wizard)
WEB_START_TUNNEL_SRC := $(call ls-files, web/projects/start-tunnel)
PATCH_DB_CLIENT_SRC := $(shell git ls-files --recurse-submodules patch-db/client)
GZIP_BIN := $(shell which pigz || which gzip)
TAR_BIN := $(shell which gtar || which tar)
COMPILED_TARGETS := core/target/$(ARCH)-unknown-linux-musl/release/startbox core/target/$(ARCH)-unknown-linux-musl/release/containerbox system-images/compat/docker-images/$(ARCH).tar system-images/utils/docker-images/$(ARCH).tar system-images/binfmt/docker-images/$(ARCH).tar container-runtime/rootfs.$(ARCH).squashfs
ALL_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs $(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; fi) $(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then echo cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console; fi') $(PLATFORM_FILE)
COMPILED_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox container-runtime/rootfs.$(ARCH).squashfs
STARTOS_TARGETS := $(STARTD_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(VERSION_FILE) $(COMPILED_TARGETS) cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs $(PLATFORM_FILE) \
$(shell if [ "$(PLATFORM)" = "raspberrypi" ]; then \
echo cargo-deps/aarch64-unknown-linux-musl/release/pi-beep; \
fi) \
$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]; then \
echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph; \
fi') \
$(shell /bin/bash -c 'if [[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]; then \
echo cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console; \
fi')
REGISTRY_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox core/startos/start-registryd.service
TUNNEL_TARGETS := core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
REBUILD_TYPES = 1
ifeq ($(REMOTE),)
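The new `ARCH`/`RUST_ARCH` variables above encode a small mapping: `raspberrypi` builds target `aarch64`, `-nonfree` suffixes are stripped, and `riscv64` becomes the `riscv64gc` Rust target architecture. A shell sketch of that derivation (function names are illustrative):

```shell
# Derive the build architecture from the platform name, as the Makefile does:
# raspberrypi is an aarch64 build; otherwise strip any -nonfree suffix.
platform_to_arch() {
  if [ "$1" = raspberrypi ]; then
    echo aarch64
  else
    echo "$1" | sed 's/-nonfree$//'
  fi
}

# Map the build architecture onto the Rust target-triple architecture;
# only riscv64 differs (Rust uses riscv64gc-unknown-linux-musl).
arch_to_rust_arch() {
  if [ "$1" = riscv64 ]; then echo riscv64gc; else echo "$1"; fi
}
```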
@@ -49,21 +63,16 @@ endif
.DELETE_ON_ERROR:
.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs sudo wormhole wormhole-deb test test-core test-sdk test-container-runtime registry
.PHONY: all metadata install clean format cli uis ui reflash deb $(IMAGE_TYPE) squashfs wormhole wormhole-deb test test-core test-sdk test-container-runtime registry install-registry tunnel install-tunnel
all: $(ALL_TARGETS)
all: $(STARTOS_TARGETS)
touch:
touch $(ALL_TARGETS)
touch $(STARTOS_TARGETS)
metadata: $(VERSION_FILE) $(PLATFORM_FILE) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE)
sudo:
sudo true
clean:
rm -f system-images/**/*.tar
rm -rf system-images/compat/target
rm -rf core/target
rm -rf core/startos/bindings
rm -rf web/.angular
@@ -98,25 +107,60 @@ test: | test-core test-sdk test-container-runtime
test-core: $(CORE_SRC) $(ENVIRONMENT_FILE)
./core/run-tests.sh
test-sdk: $(shell git ls-files sdk) sdk/base/lib/osBindings/index.ts
test-sdk: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
cd sdk && make test
test-container-runtime: container-runtime/node_modules/.package-lock.json $(shell git ls-files container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
test-container-runtime: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
cd container-runtime && npm test
cli:
cd core && ./install-cli.sh
./core/install-cli.sh
registry:
cd core && ./build-registrybox.sh
registry: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox
install-registry: $(REGISTRY_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox,$(DESTDIR)/usr/bin/start-registrybox)
$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registryd)
$(call ln,/usr/bin/start-registrybox,$(DESTDIR)/usr/bin/start-registry)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,core/startos/start-registryd.service,$(DESTDIR)/lib/systemd/system/start-registryd.service)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/registrybox: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-registrybox.sh
tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox
install-tunnel: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox core/startos/start-tunneld.service
$(call mkdir,$(DESTDIR)/usr/bin)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox,$(DESTDIR)/usr/bin/start-tunnelbox)
$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunneld)
$(call ln,/usr/bin/start-tunnelbox,$(DESTDIR)/usr/bin/start-tunnel)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
$(call cp,core/startos/start-tunneld.service,$(DESTDIR)/lib/systemd/system/start-tunneld.service)
$(call mkdir,$(DESTDIR)/usr/lib/startos/scripts)
$(call cp,build/lib/scripts/forward-port,$(DESTDIR)/usr/lib/startos/scripts/forward-port)
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/tunnelbox: $(CORE_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) web/dist/static/start-tunnel/index.html
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-tunnelbox.sh
deb: results/$(BASENAME).deb
debian/control: build/lib/depends build/lib/conflicts
./debuild/control.sh
results/$(BASENAME).deb: dpkg-build.sh $(call ls-files,debian/startos) $(STARTOS_TARGETS)
PLATFORM=$(PLATFORM) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
results/$(BASENAME).deb: dpkg-build.sh $(DEBIAN_SRC) $(ALL_TARGETS)
PLATFORM=$(PLATFORM) ./dpkg-build.sh
registry-deb: results/$(REGISTRY_BASENAME).deb
results/$(REGISTRY_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-registry) $(REGISTRY_TARGETS)
PROJECT=start-registry PLATFORM=$(ARCH) REQUIRES=debian ./build/os-compat/run-compat.sh ./dpkg-build.sh
tunnel-deb: results/$(TUNNEL_BASENAME).deb
results/$(TUNNEL_BASENAME).deb: dpkg-build.sh $(call ls-files,debian/start-tunnel) $(TUNNEL_TARGETS)
PROJECT=start-tunnel PLATFORM=$(ARCH) REQUIRES=debian DEPENDS=wireguard-tools,iptables ./build/os-compat/run-compat.sh ./dpkg-build.sh
$(IMAGE_TYPE): results/$(BASENAME).$(IMAGE_TYPE)
@@ -126,16 +170,20 @@ results/$(BASENAME).$(IMAGE_TYPE) results/$(BASENAME).squashfs: $(IMAGE_RECIPE_S
./image-recipe/run-local-build.sh "results/$(BASENAME).deb"
# For creating os images. DO NOT USE
install: $(ALL_TARGETS)
install: $(STARTOS_TARGETS)
$(call mkdir,$(DESTDIR)/usr/bin)
$(call mkdir,$(DESTDIR)/usr/sbin)
$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,$(DESTDIR)/usr/bin/startbox)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,$(DESTDIR)/usr/bin/startbox)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/startd)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-cli)
$(call ln,/usr/bin/startbox,$(DESTDIR)/usr/bin/start-sdk)
if [ "$(PLATFORM)" = "raspberrypi" ]; then $(call cp,cargo-deps/aarch64-unknown-linux-musl/release/pi-beep,$(DESTDIR)/usr/bin/pi-beep); fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then $(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); fi
$(call cp,cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)unstable($$|-) ]]'; then \
$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph,$(DESTDIR)/usr/bin/flamegraph); \
fi
if /bin/bash -c '[[ "${ENVIRONMENT}" =~ (^|-)console($$|-) ]]'; then \
$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console,$(DESTDIR)/usr/bin/tokio-console); \
fi
$(call cp,cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs,$(DESTDIR)/usr/bin/startos-backup-fs)
$(call ln,/usr/bin/startos-backup-fs,$(DESTDIR)/usr/sbin/mount.backup-fs)
$(call mkdir,$(DESTDIR)/lib/systemd/system)
@@ -152,13 +200,9 @@ install: $(ALL_TARGETS)
$(call cp,GIT_HASH.txt,$(DESTDIR)/usr/lib/startos/GIT_HASH.txt)
$(call cp,VERSION.txt,$(DESTDIR)/usr/lib/startos/VERSION.txt)
$(call mkdir,$(DESTDIR)/usr/lib/startos/system-images)
$(call cp,system-images/compat/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/compat.tar)
$(call cp,system-images/utils/docker-images/$(ARCH).tar,$(DESTDIR)/usr/lib/startos/system-images/utils.tar)
$(call cp,firmware/$(PLATFORM),$(DESTDIR)/usr/lib/startos/firmware)
update-overlay: $(ALL_TARGETS)
update-overlay: $(STARTOS_TARGETS)
@echo "\033[33m!!! THIS WILL ONLY REFLASH YOUR DEVICE IN MEMORY !!!\033[0m"
@echo "\033[33mALL CHANGES WILL BE REVERTED IF YOU RESTART THE DEVICE\033[0m"
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
@@ -167,10 +211,10 @@ update-overlay: $(ALL_TARGETS)
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) PLATFORM=$(PLATFORM)
$(call ssh,"sudo systemctl start startd")
wormhole: core/target/$(ARCH)-unknown-linux-musl/release/startbox
wormhole: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
@echo "Paste the following command into the shell of your StartOS server:"
@echo
@wormhole send core/target/$(ARCH)-unknown-linux-musl/release/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
@wormhole send core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo /usr/lib/startos/scripts/chroot-and-upgrade \"cd /usr/bin && rm startbox && wormhole receive --accept-file %s && chmod +x startbox\"\n", $$3 }'
wormhole-deb: results/$(BASENAME).deb
@echo "Paste the following command into the shell of your StartOS server:"
@@ -182,18 +226,18 @@ wormhole-squashfs: results/$(BASENAME).squashfs
$(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}'))
@echo "Paste the following command into the shell of your StartOS server:"
@echo
@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && cd /media/startos/images && wormhole receive --accept-file %s && mv $(BASENAME).squashfs $(SQFS_SUM).rootfs && ln -rsf ./$(SQFS_SUM).rootfs ../config/current.rootfs && sync && reboot'"'"'\n", $$3 }'
@wormhole send results/$(BASENAME).squashfs 2>&1 | awk -Winteractive '/wormhole receive/ { printf "sudo sh -c '"'"'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE) && /usr/lib/startos/scripts/prune-boot && cd /media/startos/images && wormhole receive --accept-file %s && CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img ./$(BASENAME).squashfs'"'"'\n", $$3 }'
update: $(ALL_TARGETS)
update: $(STARTOS_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) DESTDIR=/media/startos/next PLATFORM=$(PLATFORM)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync "apt-get install -y $(shell cat ./build/lib/depends)"')
update-startbox: core/target/$(ARCH)-unknown-linux-musl/release/startbox # only update binary (faster than full update)
update-startbox: core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox # only update binary (faster than full update)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(call cp,core/target/$(ARCH)-unknown-linux-musl/release/startbox,/media/startos/next/usr/bin/startbox)
$(call cp,core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox,/media/startos/next/usr/bin/startbox)
$(call ssh,'sudo /media/startos/next/usr/lib/startos/scripts/chroot-and-upgrade --no-sync true')
update-deb: results/$(BASENAME).deb # better than update, but only available from debian
@@ -208,11 +252,11 @@ update-squashfs: results/$(BASENAME).squashfs
$(eval SQFS_SUM := $(shell b3sum results/$(BASENAME).squashfs))
$(eval SQFS_SIZE := $(shell du -s --bytes results/$(BASENAME).squashfs | awk '{print $$1}'))
$(call ssh,'/usr/lib/startos/scripts/prune-images $(SQFS_SIZE)')
$(call cp,results/$(BASENAME).squashfs,/media/startos/images/$(SQFS_SUM).rootfs)
$(call ssh,'sudo ln -rsf /media/startos/images/$(SQFS_SUM).rootfs /media/startos/config/current.rootfs')
$(call ssh,'sudo reboot')
$(call ssh,'/usr/lib/startos/scripts/prune-boot')
$(call cp,results/$(BASENAME).squashfs,/media/startos/images/next.rootfs)
$(call ssh,'sudo CHECKSUM=$(SQFS_SUM) /usr/lib/startos/scripts/use-img /media/startos/images/next.rootfs')
emulate-reflash: $(ALL_TARGETS)
emulate-reflash: $(STARTOS_TARGETS)
@if [ -z "$(REMOTE)" ]; then >&2 echo "Must specify REMOTE" && false; fi
$(call ssh,'sudo /usr/lib/startos/scripts/chroot-and-upgrade --create')
$(MAKE) install REMOTE=$(REMOTE) SSHPASS=$(SSHPASS) DESTDIR=/media/startos/next PLATFORM=$(PLATFORM)
@@ -222,10 +266,14 @@ emulate-reflash: $(ALL_TARGETS)
upload-ota: results/$(BASENAME).squashfs
TARGET=$(TARGET) KEY=$(KEY) ./upload-ota.sh
container-runtime/debian.$(ARCH).squashfs:
container-runtime/debian.$(ARCH).squashfs: ./container-runtime/download-base-image.sh
ARCH=$(ARCH) ./container-runtime/download-base-image.sh
container-runtime/node_modules/.package-lock.json: container-runtime/package.json container-runtime/package-lock.json sdk/dist/package.json
container-runtime/package-lock.json: sdk/dist/package.json
npm --prefix container-runtime i
touch container-runtime/package-lock.json
container-runtime/node_modules/.package-lock.json: container-runtime/package-lock.json
npm --prefix container-runtime ci
touch container-runtime/node_modules/.package-lock.json
@@ -234,53 +282,45 @@ sdk/base/lib/osBindings/index.ts: $(shell if [ "$(REBUILD_TYPES)" -ne 0 ]; then
rsync -ac --delete core/startos/bindings/ sdk/base/lib/osBindings/
touch sdk/base/lib/osBindings/index.ts
core/startos/bindings/index.ts: $(shell git ls-files core) $(ENVIRONMENT_FILE)
rm -rf core/startos/bindings
core/startos/bindings/index.ts: $(call ls-files, core) $(ENVIRONMENT_FILE)
./core/build-ts.sh
ls core/startos/bindings/*.ts | sed 's/core\/startos\/bindings\/\([^.]*\)\.ts/export { \1 } from ".\/\1";/g' | grep -v '"./index"' | tee core/startos/bindings/index.ts
npm --prefix sdk exec -- prettier --config ./sdk/base/package.json -w ./core/startos/bindings/*.ts
touch core/startos/bindings/index.ts
sdk/dist/package.json sdk/baseDist/package.json: $(shell git ls-files sdk) sdk/base/lib/osBindings/index.ts
sdk/dist/package.json sdk/baseDist/package.json: $(call ls-files, sdk) sdk/base/lib/osBindings/index.ts
(cd sdk && make bundle)
touch sdk/dist/package.json
touch sdk/baseDist/package.json
# TODO: make container-runtime its own makefile?
container-runtime/dist/index.js: container-runtime/node_modules/.package-lock.json $(shell git ls-files container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
container-runtime/dist/index.js: container-runtime/node_modules/.package-lock.json $(call ls-files, container-runtime/src) container-runtime/package.json container-runtime/tsconfig.json
npm --prefix container-runtime run build
container-runtime/dist/node_modules/.package-lock.json container-runtime/dist/package.json container-runtime/dist/package-lock.json: container-runtime/package.json container-runtime/package-lock.json sdk/dist/package.json container-runtime/install-dist-deps.sh
./container-runtime/install-dist-deps.sh
touch container-runtime/dist/node_modules/.package-lock.json
container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(ARCH)-unknown-linux-musl/release/containerbox | sudo
ARCH=$(ARCH) ./container-runtime/update-image.sh
container-runtime/rootfs.$(ARCH).squashfs: container-runtime/debian.$(ARCH).squashfs container-runtime/container-runtime.service container-runtime/update-image.sh container-runtime/deb-install.sh container-runtime/dist/index.js container-runtime/dist/node_modules/.package-lock.json core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox
ARCH=$(ARCH) REQUIRES=linux ./build/os-compat/run-compat.sh ./container-runtime/update-image.sh
build/lib/depends build/lib/conflicts: build/dpkg-deps/*
build/dpkg-deps/generate.sh
build/lib/depends build/lib/conflicts: $(ENVIRONMENT_FILE) $(PLATFORM_FILE) $(shell ls build/dpkg-deps/*)
PLATFORM=$(PLATFORM) ARCH=$(ARCH) build/dpkg-deps/generate.sh
$(FIRMWARE_ROMS): build/lib/firmware.json download-firmware.sh $(PLATFORM_FILE)
./download-firmware.sh $(PLATFORM)
system-images/compat/docker-images/$(ARCH).tar: $(COMPAT_SRC)
cd system-images/compat && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
ARCH=$(ARCH) PROFILE=$(PROFILE) ./core/build-startbox.sh
touch core/target/$(RUST_ARCH)-unknown-linux-musl/$(PROFILE)/startbox
system-images/utils/docker-images/$(ARCH).tar: $(UTILS_SRC)
cd system-images/utils && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
system-images/binfmt/docker-images/$(ARCH).tar: $(BINFMT_SRC)
cd system-images/binfmt && make docker-images/$(ARCH).tar && touch docker-images/$(ARCH).tar
core/target/$(ARCH)-unknown-linux-musl/release/startbox: $(CORE_SRC) $(COMPRESSED_WEB_UIS) web/patchdb-ui-seed.json $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-startbox.sh
touch core/target/$(ARCH)-unknown-linux-musl/release/startbox
core/target/$(ARCH)-unknown-linux-musl/release/containerbox: $(CORE_SRC) $(ENVIRONMENT_FILE)
core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox: $(CORE_SRC) $(ENVIRONMENT_FILE)
ARCH=$(ARCH) ./core/build-containerbox.sh
touch core/target/$(ARCH)-unknown-linux-musl/release/containerbox
touch core/target/$(RUST_ARCH)-unknown-linux-musl/release/containerbox
web/node_modules/.package-lock.json: web/package.json sdk/baseDist/package.json
web/package-lock.json: web/package.json sdk/baseDist/package.json
npm --prefix web i
touch web/package-lock.json
web/node_modules/.package-lock.json: web/package-lock.json
npm --prefix web ci
touch web/node_modules/.package-lock.json
@@ -298,11 +338,15 @@ web/dist/raw/setup-wizard/index.html: $(WEB_SETUP_WIZARD_SRC) $(WEB_SHARED_SRC)
touch web/dist/raw/setup-wizard/index.html
web/dist/raw/install-wizard/index.html: $(WEB_INSTALL_WIZARD_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
npm --prefix web run build:install-wiz
npm --prefix web run build:install
touch web/dist/raw/install-wizard/index.html
$(COMPRESSED_WEB_UIS): $(WEB_UIS) $(ENVIRONMENT_FILE)
./compress-uis.sh
web/dist/raw/start-tunnel/index.html: $(WEB_START_TUNNEL_SRC) $(WEB_SHARED_SRC) web/.angular/.updated
npm --prefix web run build:tunnel
touch web/dist/raw/start-tunnel/index.html
web/dist/static/%/index.html: web/dist/raw/%/index.html
./compress-uis.sh $*
web/config.json: $(GIT_HASH_FILE) web/config-sample.json
jq '.useMocks = false' web/config-sample.json | jq '.gitHash = "$(shell cat GIT_HASH.txt)"' > web/config.json
@@ -329,8 +373,11 @@ ui: web/dist/raw/ui
cargo-deps/aarch64-unknown-linux-musl/release/pi-beep:
ARCH=aarch64 ./build-cargo-dep.sh pi-beep
cargo-deps/$(ARCH)-unknown-linux-musl/release/tokio-console:
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/tokio-console:
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh tokio-console
cargo-deps/$(ARCH)-unknown-linux-musl/release/startos-backup-fs:
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/startos-backup-fs:
ARCH=$(ARCH) PREINSTALL="apk add fuse3 fuse3-dev fuse3-static musl-dev pkgconfig" ./build-cargo-dep.sh --git https://github.com/Start9Labs/start-fs.git startos-backup-fs
cargo-deps/$(RUST_ARCH)-unknown-linux-musl/release/flamegraph:
ARCH=$(ARCH) PREINSTALL="apk add musl-dev pkgconfig" ./build-cargo-dep.sh flamegraph


@@ -13,9 +13,6 @@
<a href="https://twitter.com/start9labs">
<img alt="X (formerly Twitter) Follow" src="https://img.shields.io/twitter/follow/start9labs">
</a>
<a href="https://mastodon.start9labs.com">
<img src="https://img.shields.io/mastodon/follow/000000001?domain=https%3A%2F%2Fmastodon.start9labs.com&label=Follow&style=social">
</a>
<a href="https://matrix.to/#/#community:matrix.start9labs.com">
<img alt="Static Badge" src="https://img.shields.io/badge/community-matrix-yellow?logo=matrix">
</a>

START-TUNNEL.md Normal file

@@ -0,0 +1,63 @@
# StartTunnel
A self-hosted WireGuard VPN optimized for creating VLANs and reverse tunneling to personal servers.
You can think of StartTunnel as a "virtual router in the cloud".
Use it for private remote access to self-hosted services running on a personal server, or to expose those services to the public Internet without revealing the host server's IP address.
## Features
- **Create Subnets**: Each subnet creates a private, virtual local area network (VLAN), similar to the LAN created by a home router.
- **Add Devices**: When you add a device (server, phone, laptop) to a subnet, it receives a LAN IP address on that subnet as well as a unique WireGuard config that must be copied, downloaded, or scanned into the device.
- **Forward Ports**: Forwarding a port creates a "reverse tunnel", exposing a specific port on a specific device to the public Internet.
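The sequential subnet numbering described above can be sketched in a few lines of shell. This is an illustration only, not StartTunnel's allocation code — the `10.64.0.0/16` base block and the "next free third octet" strategy are assumptions made for the example:

```shell
# Hypothetical: predict the next free /24 given the subnets already in use.
# StartTunnel's real base block and strategy may differ.
used="10.64.0.0/24 10.64.1.0/24"
next=0
for s in $used; do
  n=${s#10.64.}   # strip the fixed prefix -> "0.0/24"
  n=${n%%.*}      # keep the third octet  -> "0"
  [ "$n" -ge "$next" ] && next=$((n + 1))
done
echo "next subnet: 10.64.$next.0/24"   # prints: next subnet: 10.64.2.0/24
```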
## Installation
1. Rent a low-cost VPS. For most use cases, the cheapest option should be enough.
- It must have a dedicated public IP address.
- For compute (CPU), memory (RAM), and storage (disk), choose the minimum spec.
- For transfer (bandwidth), it depends on (1) your use case and (2) your home Internet's _upload_ speed. Even if you intend to serve large files or stream content from your server, there is no reason to pay for speeds that exceed your home Internet's upload speed.
1. Provision the VPS with the latest version of Debian.
1. Access the VPS via SSH.
1. Install StartTunnel:
```sh
TMP_DIR=$(mktemp -d) && (cd $TMP_DIR && wget https://github.com/Start9Labs/start-os/releases/download/v0.4.0-alpha.12/start-tunnel-0.4.0-alpha.12-68f401b_$(uname -m).deb && apt-get install -y ./start-tunnel-0.4.0-alpha.12-68f401b_$(uname -m).deb) && rm -rf $TMP_DIR && systemctl start start-tunneld && echo "Installation Succeeded"
```
5. [Initialize the web interface](#web-interface) (recommended)
## CLI
By default, StartTunnel is managed via the `start-tunnel` command line interface, which is self-documented.
```sh
start-tunnel --help
```
## Web Interface
If you choose to enable the web interface (recommended in most cases), StartTunnel can be accessed as a website from the browser, or programmatically via API.
1. Run `start-tunnel web init` to initialize the web interface.
1. When prompted, select the IP address at which to host the web interface. In many cases, there will be only one IP address.
1. When prompted, enter the port at which to host the web interface. The default is 8443, and we recommend using it. If you change the default, choose an uncommon port to avoid conflicts.
1. Select whether to autogenerate a self-signed certificate or provide your own certificate and key. If you choose to autogenerate, you will be asked to list all IP addresses and domains for which to sign the certificate. For example, if you intend to access your StartTunnel web UI at a domain, include the domain in the list.
1. You will receive a success message with three pieces of information:
- <https://IP:port>: the URL where you can reach your personal web interface.
- Password: an autogenerated password for your interface. If you lose or forget it, you can reset it using the CLI.
- Root Certificate Authority: the Root CA of your StartTunnel instance. If you have not already, trust it in your browser or system keychain.
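It is worth inspecting a root CA before adding it to a trust store. The commands below are a generic `openssl` illustration — they generate a throwaway CA rather than using StartTunnel's, and the file names and subject are made up:

```shell
# Create a throwaway root CA so we have something to inspect
# (StartTunnel generates its own; this is only an illustration).
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:P-256 -nodes \
  -keyout demo-ca.key -out demo-ca.crt -days 30 \
  -subj "/CN=Demo Root CA"
# Check the subject and expiry before trusting the certificate
openssl x509 -in demo-ca.crt -noout -subject -enddate
```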


@@ -1,5 +1,7 @@
#!/bin/bash
PROJECT=${PROJECT:-"startos"}
cd "$(dirname "${BASH_SOURCE[0]}")"
PLATFORM="$(if [ -f ./PLATFORM.txt ]; then cat ./PLATFORM.txt; else echo unknown; fi)"
@@ -16,4 +18,4 @@ if [ -n "$STARTOS_ENV" ]; then
VERSION_FULL="$VERSION_FULL~${STARTOS_ENV}"
fi
echo -n "startos-${VERSION_FULL}_${PLATFORM}"
echo -n "${PROJECT}-${VERSION_FULL}_${PLATFORM}"


@@ -9,7 +9,6 @@ ca-certificates
cifs-utils
cryptsetup
curl
dnsutils
dmidecode
dnsutils
dosfstools
@@ -19,6 +18,8 @@ exfatprogs
flashrom
fuse3
grub-common
grub-efi
grub2-common
htop
httpdirfs
iotop
@@ -36,13 +37,14 @@ man-db
ncdu
net-tools
network-manager
nfs-common
nvme-cli
nyx
openssh-server
podman
postgresql
psmisc
qemu-guest-agent
rfkill
rsync
samba-common-bin
smartmontools


@@ -5,11 +5,15 @@ set -e
cd "$(dirname "${BASH_SOURCE[0]}")"
IFS="-" read -ra FEATURES <<< "$ENVIRONMENT"
FEATURES+=("${ARCH}")
if [ "$ARCH" != "$PLATFORM" ]; then
FEATURES+=("${PLATFORM}")
fi
feature_file_checker='
/^#/ { next }
/^\+ [a-z0-9]+$/ { next }
/^- [a-z0-9]+$/ { next }
/^\+ [a-z0-9.-]+$/ { next }
/^- [a-z0-9.-]+$/ { next }
{ exit 1 }
'
@@ -30,8 +34,8 @@ for type in conflicts depends; do
for feature in ${FEATURES[@]}; do
file="$feature.$type"
if [ -f $file ]; then
if grep "^- $pkg$" $file; then
SKIP=1
if grep "^- $pkg$" $file > /dev/null; then
SKIP=yes
fi
fi
done


@@ -0,0 +1,12 @@
- grub-common
- grub-efi
- grub2-common
+ parted
+ raspberrypi-net-mods
+ raspberrypi-sys-mods
+ raspi-config
+ raspi-firmware
+ raspi-utils
+ rpi-eeprom
+ rpi-update
+ rpi.gpio-common


@@ -1,2 +1,3 @@
+ gdb
+ heaptrack
+ heaptrack
+ linux-perf


@@ -0,0 +1 @@
+ grub-pc-bin


@@ -1,34 +1,123 @@
#!/bin/sh
printf "\n"
printf "Welcome to\n"
cat << "ASCII"
███████
█ █ █
█ █ █ █
█ █ █ █
█ █ █ █
█ █ █ █
█ █
███████
parse_essential_db_info() {
DB_DUMP="/tmp/startos_db.json"
_____ __ ___ __ __
(_ | /\ |__) | / \(_
__) | / \| \ | \__/__)
ASCII
printf " v$(cat /usr/lib/startos/VERSION.txt)\n\n"
printf " %s (%s %s)\n" "$(uname -o)" "$(uname -r)" "$(uname -m)"
printf " Git Hash: $(cat /usr/lib/startos/GIT_HASH.txt)"
if [ -n "$(cat /usr/lib/startos/ENVIRONMENT.txt)" ]; then
printf " ~ $(cat /usr/lib/startos/ENVIRONMENT.txt)\n"
else
printf "\n"
if command -v start-cli >/dev/null 2>&1; then
start-cli db dump > "$DB_DUMP" 2>/dev/null || return 1
else
return 1
fi
if command -v jq >/dev/null 2>&1 && [ -f "$DB_DUMP" ]; then
HOSTNAME=$(jq -r '.value.serverInfo.hostname // "unknown"' "$DB_DUMP" 2>/dev/null)
VERSION=$(jq -r '.value.serverInfo.version // "unknown"' "$DB_DUMP" 2>/dev/null)
RAM_BYTES=$(jq -r '.value.serverInfo.ram // 0' "$DB_DUMP" 2>/dev/null)
WAN_IP=$(jq -r '.value.serverInfo.network.gateways[].ipInfo.wanIp // "unknown"' "$DB_DUMP" 2>/dev/null | head -1)
NTP_SYNCED=$(jq -r '.value.serverInfo.ntpSynced // false' "$DB_DUMP" 2>/dev/null)
if [ "$RAM_BYTES" != "0" ] && [ "$RAM_BYTES" != "null" ]; then
RAM_GB=$(echo "scale=1; $RAM_BYTES / 1073741824" | bc 2>/dev/null || echo "unknown")
else
RAM_GB="unknown"
fi
RUNNING_SERVICES=$(jq -r '[.value.packageData[] | select(.status.main == "running")] | length' "$DB_DUMP" 2>/dev/null)
TOTAL_SERVICES=$(jq -r '.value.packageData | length' "$DB_DUMP" 2>/dev/null)
rm -f "$DB_DUMP"
return 0
else
rm -f "$DB_DUMP" 2>/dev/null
return 1
fi
}
DB_INFO_AVAILABLE=0
if parse_essential_db_info; then
DB_INFO_AVAILABLE=1
fi
printf "\n"
printf " * Documentation: https://docs.start9.com\n"
printf " * Management: https://%s.local\n" "$(hostname)"
printf " * Support: https://start9.com/contact\n"
printf " * Source Code: https://github.com/Start9Labs/start-os\n"
printf " * License: MIT\n"
printf "\n"
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$VERSION" != "unknown" ]; then
version_display="v$VERSION"
else
version_display="v$(cat /usr/lib/startos/VERSION.txt 2>/dev/null || echo 'unknown')"
fi
printf "\n\033[1;37m ▄▄▀▀▀▀▀▄▄\033[0m\n"
printf "\033[1;37m ▄▀ ▄ ▀▄ ▄▄▄▄▄ ▄▄▄▄▄▄▄ ▄ ▄▄▄▄▄ ▄▄▄▄▄▄▄ \033[1;31m▄██████▄ ▄██████\033[0m\n"
printf "\033[1;37m █ █ █ █ █ █ █ █ █ ▀▄ █ \033[1;31m██ ██ ██ \033[0m\n"
printf "\033[1;37m█ █ █ █ ▀▄▄▄▄ █ █ █ █ ▄▄▄▀ █ \033[1;31m██ ██ ▀█████▄\033[0m\n"
printf "\033[1;37m█ █ █ █ █ █ █ █ █ ▀▄ █ \033[1;31m██ ██ ██\033[0m\n"
printf "\033[1;37m █ █ █ █ ▄▄▄▄▄▀ █ █ █ █ ▀▄ █ \033[1;31m▀██████▀ ██████▀\033[0m\n"
printf "\033[1;37m █ █\033[0m\n"
printf "\033[1;37m ▀▀▄▄▄▀▀ $version_display\033[0m\n\n"
uptime_str=$(uptime | awk -F'up ' '{print $2}' | awk -F',' '{print $1}' | sed 's/^ *//')
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$RAM_GB" != "unknown" ]; then
memory_used=$(free -m | awk 'NR==2{printf "%.0fMB", $3}')
memory_display="$memory_used / ${RAM_GB}GB"
else
memory_display=$(free -m | awk 'NR==2{printf "%.0fMB / %.0fMB", $3, $2}')
fi
root_usage=$(df -h / | awk 'NR==2{printf "%s (%s free)", $5, $4}')
if [ -d "/media/startos/data/package-data" ]; then
data_usage=$(df -h /media/startos/data/package-data | awk 'NR==2{printf "%s (%s free)", $5, $4}')
else
data_usage="N/A"
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ]; then
services_text="$RUNNING_SERVICES/$TOTAL_SERVICES running"
else
services_text="Unknown"
fi
local_ip=$(ip route get 1.1.1.1 2>/dev/null | awk '{for(i=1;i<=NF;i++) if($i=="src") print $(i+1)}' | head -1)
if [ -z "$local_ip" ]; then local_ip="N/A"; fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$WAN_IP" != "unknown" ]; then
wan_ip="$WAN_IP"
else
wan_ip="N/A"
fi
printf " \033[1;37m┌─ SYSTEM STATUS ───────────────────────────────────────────────────┐\033[0m\n"
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Uptime:" "$uptime_str" "Memory:" "$memory_display"
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Root:" "$root_usage" "Data:" "$data_usage"
if [ "$DB_INFO_AVAILABLE" -eq 1 ]; then
if [ "$RUNNING_SERVICES" -eq "$TOTAL_SERVICES" ] && [ "$TOTAL_SERVICES" -gt 0 ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;32m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
elif [ "$RUNNING_SERVICES" -gt 0 ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
else
printf " \033[1;37m│\033[0m %-8s \033[0;31m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
fi
else
printf " \033[1;37m│\033[0m %-8s \033[0;37m%-22s\033[0m %-8s \033[0;33m%-23s\033[0m \033[1;37m│\033[0m\n" "Services:" "$services_text" "WAN:" "$wan_ip"
fi
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$NTP_SYNCED" = "true" ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;32m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Synced"
elif [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$NTP_SYNCED" = "false" ]; then
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;31m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Not Synced"
else
printf " \033[1;37m│\033[0m %-8s \033[0;33m%-22s\033[0m %-8s \033[0;37m%-23s\033[0m \033[1;37m│\033[0m\n" "Local:" "$local_ip" "NTP:" "Unknown"
fi
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m"
if [ "$DB_INFO_AVAILABLE" -eq 1 ] && [ "$HOSTNAME" != "unknown" ]; then
web_url="https://$HOSTNAME.local"
else
web_url="https://$(hostname).local"
fi
printf "\n \033[1;37m┌──────────────────────────────────────────────────── QUICK ACCESS ─┐\033[0m\n"
printf " \033[1;37m│\033[0m Web Interface: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "$web_url"
printf " \033[1;37m│\033[0m Documentation: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://staging.docs.start9.com"
printf " \033[1;37m│\033[0m Support: \033[0;36m%-50s\033[0m \033[1;37m│\033[0m\n" "https://start9.com/contact"
printf " \033[1;37m└───────────────────────────────────────────────────────────────────┘\033[0m\n\n"


@@ -1,8 +0,0 @@
#!/bin/sh
if cat /sys/class/drm/*/status | grep -qw connected; then
exit 0
else
exit 1
fi


@@ -1,6 +1,6 @@
#!/bin/bash
SOURCE_DIR="$(dirname "${BASH_SOURCE[0]}")"
SOURCE_DIR="$(dirname $(realpath "${BASH_SOURCE[0]}"))"
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'


@@ -27,7 +27,6 @@ user_pref("browser.crashReports.unsubmittedCheck.autoSubmit2", false);
user_pref("browser.newtabpage.activity-stream.feeds.asrouterfeed", false);
user_pref("browser.newtabpage.activity-stream.feeds.topsites", false);
user_pref("browser.newtabpage.activity-stream.showSponsoredTopSites", false);
user_pref("browser.onboarding.enabled", false);
user_pref("browser.ping-centre.telemetry", false);
user_pref("browser.pocket.enabled", false);
user_pref("browser.safebrowsing.blockedURIs.enabled", false);
@@ -43,7 +42,7 @@ user_pref("browser.startup.homepage_override.mstone", "ignore");
user_pref("browser.theme.content-theme", 0);
user_pref("browser.theme.toolbar-theme", 0);
user_pref("browser.urlbar.groupLabels.enabled", false);
user_pref("browser.urlbar.suggest.searches" false);
user_pref("browser.urlbar.suggest.searches", false);
user_pref("datareporting.policy.firstRunURL", "");
user_pref("datareporting.healthreport.service.enabled", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
@@ -52,10 +51,9 @@ user_pref("dom.securecontext.allowlist_onions", true);
user_pref("dom.securecontext.whitelist_onions", true);
user_pref("experiments.enabled", false);
user_pref("experiments.activeExperiment", false);
user_pref("experiments.supported", false);
user_pref("extensions.activeThemeID", "firefox-compact-dark@mozilla.org");
user_pref("extensions.blocklist.enabled", false);
user_pref("extensions.getAddons.cache.enabled", false);
user_pref("extensions.htmlaboutaddons.recommendations.enabled", false);
user_pref("extensions.pocket.enabled", false);
user_pref("extensions.update.enabled", false);
user_pref("extensions.shield-recipe-client.enabled", false);
@@ -66,9 +64,15 @@ user_pref("messaging-system.rsexperimentloader.enabled", false);
user_pref("network.allow-experiments", false);
user_pref("network.captive-portal-service.enabled", false);
user_pref("network.connectivity-service.enabled", false);
user_pref("network.proxy.autoconfig_url", "file:///usr/lib/startos/proxy.pac");
user_pref("network.proxy.socks", "10.0.3.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_version", 5);
user_pref("network.proxy.socks_remote_dns", true);
user_pref("network.proxy.type", 2);
user_pref("network.proxy.type", 1);
user_pref("privacy.resistFingerprinting", true);
//Enable letterboxing if we want the window size sent to the server to snap to common resolutions:
//user_pref("privacy.resistFingerprinting.letterboxing", true);
user_pref("privacy.trackingprotection.enabled", true);
user_pref("signon.rememberSignons", false);
user_pref("toolkit.telemetry.archive.enabled", false);
user_pref("toolkit.telemetry.bhrPing.enabled", false);
@@ -81,6 +85,17 @@ user_pref("toolkit.telemetry.shutdownPingSender.enabled", false);
user_pref("toolkit.telemetry.unified", false);
user_pref("toolkit.telemetry.updatePing.enabled", false);
user_pref("toolkit.telemetry.cachedClientID", "");
//Blocking automatic Mozilla CDN server requests
user_pref("extensions.getAddons.showPane", false);
user_pref("extensions.getAddons.cache.enabled", false);
//user_pref("services.settings.server", ""); // Remote settings server (HSTS preload updates and Certificate Revocation Lists are fetched)
user_pref("browser.aboutHomeSnippets.updateUrl", "");
user_pref("browser.newtabpage.activity-stream.feeds.snippets", false);
user_pref("browser.newtabpage.activity-stream.feeds.section.topstories", false);
user_pref("browser.newtabpage.activity-stream.feeds.system.topstories", false);
user_pref("browser.newtabpage.activity-stream.feeds.discoverystreamfeed", false);
user_pref("browser.safebrowsing.provider.mozilla.updateURL", "");
user_pref("browser.safebrowsing.provider.mozilla.gethashURL", "");
EOF
ln -sf /usr/lib/$(uname -m)-linux-gnu/pkcs11/p11-kit-trust.so /usr/lib/firefox-esr/libnssckbi.so
@@ -91,15 +106,6 @@ cat > /home/kiosk/kiosk.sh << 'EOF'
while ! curl "http://localhost" > /dev/null; do
sleep 1
done
while ! /usr/lib/startos/scripts/check-monitor; do
sleep 15
done
(
while /usr/lib/startos/scripts/check-monitor; do
sleep 15
done
killall firefox-esr
) &
matchbox-window-manager -use_titlebar no &
cp -r /home/kiosk/fx-profile /home/kiosk/fx-profile-tmp
firefox-esr http://localhost --profile /home/kiosk/fx-profile-tmp

build/lib/scripts/forward-port Executable file

@@ -0,0 +1,38 @@
#!/bin/bash
if [ -z "$sip" ] || [ -z "$dip" ] || [ -z "$sport" ] || [ -z "$dport" ]; then
>&2 echo 'missing required env var'
exit 1
fi
# Helper function to check if a rule exists
nat_rule_exists() {
iptables -t nat -C "$@" 2>/dev/null
}
# Helper function to add or delete a rule idempotently
# Usage: apply_nat_rule [add|del] <iptables args...>
apply_nat_rule() {
local action="$1"
shift
if [ "$action" = "add" ]; then
# Only add if rule doesn't exist
if ! nat_rule_exists "$@"; then
iptables -t nat -A "$@"
fi
elif [ "$action" = "del" ]; then
if nat_rule_exists "$@"; then
iptables -t nat -D "$@"
fi
fi
}
if [ "$UNDO" = 1 ]; then
action="del"
else
action="add"
fi
apply_nat_rule "$action" PREROUTING -p tcp -d "$sip" --dport "$sport" -j DNAT --to-destination "$dip:$dport"
apply_nat_rule "$action" OUTPUT -p tcp -d "$sip" --dport "$sport" -j DNAT --to-destination "$dip:$dport"

build/lib/scripts/prune-boot Executable file

@@ -0,0 +1,35 @@
#!/bin/bash
set -e
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
# Get the current kernel version
current_kernel=$(uname -r)
echo "Current kernel: $current_kernel"
echo "Searching for old kernel files in /boot..."
# Extract base kernel version (without possible suffixes)
current_base=$(echo "$current_kernel" | sed 's/-.*//')
cd /boot || { echo "/boot directory not found!"; exit 1; }
for file in vmlinuz-* initrd.img-* System.map-* config-*; do
# Extract version from filename
version=$(echo "$file" | sed -E 's/^[^0-9]*([0-9][^ ]*).*/\1/')
# Skip if file matches current kernel version
if [[ "$file" == *"$current_kernel"* ]]; then
continue
fi
# Compare versions, delete if less than current
if dpkg --compare-versions "$version" lt "$current_kernel"; then
echo "Deleting $file (version $version is older than $current_kernel)"
rm -f "$file"
fi
done
echo "Old kernel files deleted."


@@ -83,6 +83,7 @@ local_mount_root()
if [ -d "$image" ]; then
mount -r --bind $image /lower
elif [ -f "$image" ]; then
modprobe loop
modprobe squashfs
mount -r $image /lower
else

build/lib/scripts/use-img Executable file

@@ -0,0 +1,61 @@
#!/bin/bash
set -e
if [ "$UID" -ne 0 ]; then
>&2 echo 'Must be run as root'
exit 1
fi
if [ -z "$1" ]; then
>&2 echo "usage: $0 <SQUASHFS>"
exit 1
fi
VERSION=$(unsquashfs -cat $1 /usr/lib/startos/VERSION.txt)
GIT_HASH=$(unsquashfs -cat $1 /usr/lib/startos/GIT_HASH.txt)
B3SUM=$(b3sum $1 | head -c 32)
if [ -n "$CHECKSUM" ] && [ "$CHECKSUM" != "$B3SUM" ]; then
>&2 echo "CHECKSUM MISMATCH"
exit 2
fi
mv $1 /media/startos/images/${B3SUM}.rootfs
ln -rsf /media/startos/images/${B3SUM}.rootfs /media/startos/config/current.rootfs
unsquashfs -n -f -d / /media/startos/images/${B3SUM}.rootfs boot
umount -R /media/startos/next 2> /dev/null || true
umount -R /media/startos/lower 2> /dev/null || true
umount -R /media/startos/upper 2> /dev/null || true
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
mkdir /media/startos/upper
mount -t tmpfs tmpfs /media/startos/upper
mkdir -p /media/startos/lower /media/startos/upper/data /media/startos/upper/work /media/startos/next
mount /media/startos/images/${B3SUM}.rootfs /media/startos/lower
mount -t overlay \
-olowerdir=/media/startos/lower,upperdir=/media/startos/upper/data,workdir=/media/startos/upper/work \
overlay /media/startos/next
mkdir -p /media/startos/next/media/startos/root
mount --bind /media/startos/root /media/startos/next/media/startos/root
mkdir -p /media/startos/next/dev
mkdir -p /media/startos/next/sys
mkdir -p /media/startos/next/proc
mkdir -p /media/startos/next/boot
mount --bind /dev /media/startos/next/dev
mount --bind /sys /media/startos/next/sys
mount --bind /proc /media/startos/next/proc
mount --bind /boot /media/startos/next/boot
chroot /media/startos/next update-grub2
umount -R /media/startos/next
umount -R /media/startos/upper
umount -R /media/startos/lower
rm -rf /media/startos/lower /media/startos/upper /media/startos/next
sync
reboot


@@ -0,0 +1,56 @@
FROM debian:bookworm
RUN apt-get update && \
apt-get install -y \
ca-certificates \
curl \
gpg \
build-essential \
sed \
grep \
gawk \
jq \
gzip \
brotli \
qemu-user-static \
binfmt-support \
squashfs-tools \
git \
debspawn \
rsync \
b3sum \
fuse-overlayfs \
sudo \
systemd \
systemd-container \
systemd-sysv \
dbus \
dbus-user-session
RUN systemctl mask \
systemd-firstboot.service \
systemd-udevd.service \
getty@tty1.service \
console-getty.service
RUN git config --global --add safe.directory /root/start-os
RUN mkdir -p /etc/debspawn && \
echo "AllowUnsafePermissions=true" > /etc/debspawn/global.toml
# ~ is not expanded in Dockerfile ENV; use an absolute path
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION=22
RUN mkdir -p $NVM_DIR && \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/master/install.sh | bash && \
. $NVM_DIR/nvm.sh && \
nvm install $NODE_VERSION && \
nvm use $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
ln -s $(which node) /usr/bin/node && \
ln -s $(which npm) /usr/bin/npm
RUN mkdir -p /root/start-os
WORKDIR /root/start-os
COPY docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT [ "/docker-entrypoint.sh" ]


@@ -0,0 +1,3 @@
#!/bin/bash
exec /lib/systemd/systemd --unit=multi-user.target --show-status=false --log-target=journal

build/os-compat/run-compat.sh Executable file

@@ -0,0 +1,27 @@
#!/bin/bash
if [ "$FORCE_COMPAT" = 1 ] || ( [ "$REQUIRES" = "linux" ] && [ "$(uname -s)" != "Linux" ] ) || ( [ "$REQUIRES" = "debian" ] && ! which dpkg > /dev/null ); then
project_pwd="$(cd "$(dirname "${BASH_SOURCE[0]}")"/../.. && pwd)/"
pwd="$(pwd)/"
if ! [[ "$pwd" = "$project_pwd"* ]]; then
>&2 echo "Must be run from start-os project dir"
exit 1
fi
rel_pwd="${pwd#"$project_pwd"}"
SYSTEMD_TTY="-P"
USE_TTY=
if tty -s; then
USE_TTY="-it"
SYSTEMD_TTY="-t"
fi
docker run -d --rm --name os-compat --privileged --security-opt apparmor=unconfined -v "${project_pwd}:/root/start-os" -v /lib/modules:/lib/modules:ro start9/build-env
while ! docker exec os-compat systemctl is-active --quiet multi-user.target 2> /dev/null; do sleep .5; done
docker exec -eARCH -eENVIRONMENT -ePLATFORM -eGIT_BRANCH_AS_HASH -ePROJECT -eDEPENDS -eCONFLICTS $USE_TTY -w "/root/start-os${rel_pwd}" os-compat "$@"
code=$?
docker stop os-compat
exit $code
else
exec "$@"
fi
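The wrapper above maps the caller's working directory into the container by stripping the project root from the current path. A minimal sketch of the `${var#prefix}` parameter expansion it relies on (the paths here are illustrative):

```shell
#!/bin/bash
project_pwd="/root/start-os/"
cur_pwd="/root/start-os/web/dist/"
# ${cur_pwd#"$project_pwd"} removes project_pwd from the front, if present
rel_pwd="${cur_pwd#"$project_pwd"}"
echo "$rel_pwd"   # web/dist/
```

Because the expansion is a no-op when the prefix does not match, the earlier `[[ "$pwd" = "$project_pwd"* ]]` guard is what actually enforces running from inside the project tree.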


@@ -7,6 +7,7 @@ else
fi
if ! [ -f ./GIT_HASH.txt ] || [ "$(cat ./GIT_HASH.txt)" != "$GIT_HASH" ]; then
>&2 echo Git hash changed from "$([ -f ./GIT_HASH.txt ] && cat ./GIT_HASH.txt)" to "$GIT_HASH"
echo -n "$GIT_HASH" > ./GIT_HASH.txt
fi


@@ -4,13 +4,17 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
rm -rf web/dist/static
STATIC_DIR=web/dist/static/$1
RAW_DIR=web/dist/raw/$1
mkdir -p $STATIC_DIR
rm -rf $STATIC_DIR
if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 gzip -kf
find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br' | xargs -n 1 -P 0 brotli -kf
for file in $(find web/dist/raw -type f -not -name '*.gz' -and -not -name '*.br'); do
for file in $(find $RAW_DIR -type f -not -name '*.gz' -and -not -name '*.br'); do
raw_size=$(du $file | awk '{print $1 * 512}')
gz_size=$(du $file.gz | awk '{print $1 * 512}')
br_size=$(du $file.br | awk '{print $1 * 512}')
@@ -23,4 +27,5 @@ if ! [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
done
fi
cp -r web/dist/raw web/dist/static
cp -r $RAW_DIR $STATIC_DIR
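The precompression step generates `.gz` (and `.br`) siblings of each asset without discarding the originals: `-k` keeps the input file and `-f` overwrites stale compressed output. A sketch using gzip only (brotli takes the same flags but is not installed everywhere):

```shell
#!/bin/bash
tmp=$(mktemp -d)
echo 'console.log("hi")' > "$tmp/app.js"
# -k keeps app.js alongside app.js.gz; -f replaces an old app.js.gz
gzip -kf "$tmp/app.js"
ls "$tmp"   # lists both app.js and app.js.gz
```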


@@ -0,0 +1,6 @@
[Unit]
Description=StartOS Container Runtime Failure Handler
[Service]
Type=oneshot
ExecStart=/usr/bin/start-container rebuild


@@ -1,11 +1,11 @@
[Unit]
Description=StartOS Container Runtime
OnFailure=container-runtime-failure.service
[Service]
Type=simple
ExecStart=/usr/bin/node --experimental-detect-module --unhandled-rejections=warn /usr/lib/startos/init/index.js
Restart=always
RestartSec=3
ExecStart=/usr/bin/node --experimental-detect-module --trace-warnings --unhandled-rejections=warn /usr/lib/startos/init/index.js
Restart=no
[Install]
WantedBy=multi-user.target


@@ -6,13 +6,9 @@ mkdir -p /run/systemd/resolve
echo "nameserver 8.8.8.8" > /run/systemd/resolve/stub-resolv.conf
apt-get update
apt-get install -y curl rsync qemu-user-static
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 20
ln -s $(which node) /usr/bin/node
apt-get install -y curl rsync qemu-user-static nodejs
sed -i '/\(^\|#\)DNSStubListener=/c\DNSStubListener=no' /etc/systemd/resolved.conf
sed -i '/\(^\|#\)Storage=/c\Storage=persistent' /etc/systemd/journald.conf
sed -i '/\(^\|#\)Compress=/c\Compress=yes' /etc/systemd/journald.conf
sed -i '/\(^\|#\)SystemMaxUse=/c\SystemMaxUse=1G' /etc/systemd/journald.conf
@@ -20,4 +16,7 @@ sed -i '/\(^\|#\)ForwardToSyslog=/c\ForwardToSyslog=no' /etc/systemd/journald.co
systemctl enable container-runtime.service
rm -rf /run/systemd
rm -rf /run/systemd
rm -f /etc/resolv.conf
echo "nameserver 10.0.3.1" > /etc/resolv.conf


@@ -1,11 +1,9 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
DISTRO=debian
VERSION=bookworm
VERSION=trixie
ARCH=${ARCH:-$(uname -m)}
FLAVOR=default
@@ -16,8 +14,9 @@ elif [ "$_ARCH" = "aarch64" ]; then
_ARCH=arm64
fi
URL="https://images.linuxcontainers.org/$(curl -fsSL https://images.linuxcontainers.org/meta/1.0/index-system | grep "^$DISTRO;$VERSION;$_ARCH;$FLAVOR;" | head -n1 | sed 's/^.*;//g')/rootfs.squashfs"
BASE_URL="https://images.linuxcontainers.org$(curl -fsSL https://images.linuxcontainers.org/meta/1.0/index-system | grep "^$DISTRO;$VERSION;$_ARCH;$FLAVOR;" | head -n1 | sed 's/^.*;//g')"
OUTPUT_FILE="debian.${ARCH}.squashfs"
echo "Downloading $URL to debian.${ARCH}.squashfs"
curl -fsSL "$URL" > debian.${ARCH}.squashfs
echo "Downloading ${BASE_URL}/rootfs.squashfs to $OUTPUT_FILE"
curl -fsSL "${BASE_URL}/rootfs.squashfs" > "$OUTPUT_FILE"
curl -fsSL "$BASE_URL/SHA256SUMS" | grep 'rootfs\.squashfs' | awk '{print $1" '"$OUTPUT_FILE"'"}' | shasum -a 256 -c

File diff suppressed because it is too large


@@ -26,8 +26,9 @@
"isomorphic-fetch": "^3.0.0",
"jsonpath": "^1.1.1",
"lodash.merge": "^4.6.2",
"mime": "^4.0.7",
"node-fetch": "^3.1.0",
"ts-matches": "^5.5.1",
"ts-matches": "^6.3.2",
"tslib": "^2.5.3",
"typescript": "^5.1.3",
"yaml": "^2.3.1"


@@ -1,4 +1,9 @@
import { types as T, utils } from "@start9labs/start-sdk"
import {
ExtendedVersion,
types as T,
utils,
VersionRange,
} from "@start9labs/start-sdk"
import * as net from "net"
import { object, string, number, literals, some, unknown } from "ts-matches"
import { Effects } from "../Models/Effects"
@@ -6,23 +11,19 @@ import { Effects } from "../Models/Effects"
import { CallbackHolder } from "../Models/CallbackHolder"
import { asError } from "@start9labs/start-sdk/base/lib/util"
const matchRpcError = object({
error: object(
{
code: number,
message: string,
data: some(
string,
object(
{
details: string,
debug: string,
},
["debug"],
),
),
},
["data"],
),
error: object({
code: number,
message: string,
data: some(
string,
object({
details: string,
debug: string.nullable().optional(),
}),
)
.nullable()
.optional(),
}),
})
const testRpcError = matchRpcError.test
const testRpcResult = object({
@@ -34,13 +35,13 @@ const SOCKET_PATH = "/media/startos/rpc/host.sock"
let hostSystemId = 0
export type EffectContext = {
procedureId: string | null
eventId: string | null
callbacks?: CallbackHolder
constRetry?: () => void
}
const rpcRoundFor =
(procedureId: string | null) =>
(eventId: string | null) =>
<K extends T.EffectMethod | "clearCallbacks">(
method: K,
params: Record<string, unknown>,
@@ -51,7 +52,7 @@ const rpcRoundFor =
JSON.stringify({
id,
method,
params: { ...params, procedureId: procedureId || undefined },
params: { ...params, eventId: eventId ?? undefined },
}) + "\n",
)
})
@@ -102,9 +103,21 @@ const rpcRoundFor =
}
export function makeEffects(context: EffectContext): Effects {
const rpcRound = rpcRoundFor(context.procedureId)
const rpcRound = rpcRoundFor(context.eventId)
const self: Effects = {
eventId: context.eventId,
child: (name) =>
makeEffects({ ...context, callbacks: context.callbacks?.child(name) }),
constRetry: context.constRetry,
isInContext: !!context.callbacks,
onLeaveContext:
context.callbacks?.onLeaveContext?.bind(context.callbacks) ||
(() => {
console.warn(
"no context for this effects object",
new Error().stack?.replace(/^Error/, ""),
)
}),
clearCallbacks(...[options]: Parameters<T.Effects["clearCallbacks"]>) {
return rpcRound("clear-callbacks", {
...options,
@@ -126,22 +139,20 @@ export function makeEffects(context: EffectContext): Effects {
...options,
}) as ReturnType<T.Effects["action"]["getInput"]>
},
request(...[options]: Parameters<T.Effects["action"]["request"]>) {
return rpcRound("action.request", {
createTask(...[options]: Parameters<T.Effects["action"]["createTask"]>) {
return rpcRound("action.create-task", {
...options,
}) as ReturnType<T.Effects["action"]["request"]>
}) as ReturnType<T.Effects["action"]["createTask"]>
},
run(...[options]: Parameters<T.Effects["action"]["run"]>) {
return rpcRound("action.run", {
...options,
}) as ReturnType<T.Effects["action"]["run"]>
},
clearRequests(
...[options]: Parameters<T.Effects["action"]["clearRequests"]>
) {
return rpcRound("action.clear-requests", {
clearTasks(...[options]: Parameters<T.Effects["action"]["clearTasks"]>) {
return rpcRound("action.clear-tasks", {
...options,
}) as ReturnType<T.Effects["action"]["clearRequests"]>
}) as ReturnType<T.Effects["action"]["clearTasks"]>
},
},
bind(...[options]: Parameters<T.Effects["bind"]>) {
@@ -186,13 +197,6 @@ export function makeEffects(context: EffectContext): Effects {
T.Effects["exportServiceInterface"]
>
}) as Effects["exportServiceInterface"],
exposeForDependents(
...[options]: Parameters<T.Effects["exposeForDependents"]>
) {
return rpcRound("expose-for-dependents", options) as ReturnType<
T.Effects["exposeForDependents"]
>
},
getContainerIp(...[options]: Parameters<T.Effects["getContainerIp"]>) {
return rpcRound("get-container-ip", options) as ReturnType<
T.Effects["getContainerIp"]
@@ -254,6 +258,7 @@ export function makeEffects(context: EffectContext): Effects {
return rpcRound("mount", options) as ReturnType<T.Effects["mount"]>
},
restart(...[]: Parameters<T.Effects["restart"]>) {
console.log("Restarting service...")
return rpcRound("restart", {}) as ReturnType<T.Effects["restart"]>
},
setDependencies(
@@ -293,15 +298,6 @@ export function makeEffects(context: EffectContext): Effects {
shutdown(...[]: Parameters<T.Effects["shutdown"]>) {
return rpcRound("shutdown", {}) as ReturnType<T.Effects["shutdown"]>
},
store: {
get: async (options: any) =>
rpcRound("store.get", {
...options,
callback: context.callbacks?.addCallback(options.callback) || null,
}) as any,
set: async (options: any) =>
rpcRound("store.set", options) as ReturnType<T.Effects["store"]["set"]>,
} as T.Effects["store"],
getDataVersion() {
return rpcRound("get-data-version", {}) as ReturnType<
T.Effects["getDataVersion"]
@@ -313,5 +309,15 @@ export function makeEffects(context: EffectContext): Effects {
>
},
}
if (context.callbacks?.onLeaveContext)
self.onLeaveContext(() => {
self.isInContext = false
self.onLeaveContext = () => {
console.warn(
"this effects object is already out of context",
new Error().stack?.replace(/^Error/, ""),
)
}
})
return self
}


@@ -12,9 +12,15 @@ import {
any,
shape,
anyOf,
literals,
} from "ts-matches"
import { types as T, utils } from "@start9labs/start-sdk"
import {
ExtendedVersion,
types as T,
utils,
VersionRange,
} from "@start9labs/start-sdk"
import * as fs from "fs"
import { CallbackHolder } from "../Models/CallbackHolder"
@@ -26,20 +32,16 @@ type MaybePromise<T> = T | Promise<T>
export const matchRpcResult = anyOf(
object({ result: any }),
object({
error: object(
{
code: number,
message: string,
data: object(
{
details: string,
debug: any,
},
["details", "debug"],
),
},
["data"],
),
error: object({
code: number,
message: string,
data: object({
details: string.optional(),
debug: any.optional(),
})
.nullable()
.optional(),
}),
}),
)
@@ -54,38 +56,26 @@ const isResult = object({ result: any }).test
const idType = some(string, number, literal(null))
type IdType = null | string | number | undefined
const runType = object(
{
id: idType,
method: literal("execute"),
params: object(
{
id: string,
procedure: string,
input: any,
timeout: number,
},
["timeout"],
),
},
["id"],
)
const sandboxRunType = object(
{
id: idType,
method: literal("sandbox"),
params: object(
{
id: string,
procedure: string,
input: any,
timeout: number,
},
["timeout"],
),
},
["id"],
)
const runType = object({
id: idType.optional(),
method: literal("execute"),
params: object({
id: string,
procedure: string,
input: any,
timeout: number.nullable().optional(),
}),
})
const sandboxRunType = object({
id: idType.optional(),
method: literal("sandbox"),
params: object({
id: string,
procedure: string,
input: any,
timeout: number.nullable().optional(),
}),
})
const callbackType = object({
method: literal("callback"),
params: object({
@@ -93,44 +83,37 @@ const callbackType = object({
args: array,
}),
})
const initType = object(
{
id: idType,
method: literal("init"),
},
["id"],
)
const startType = object(
{
id: idType,
method: literal("start"),
},
["id"],
)
const stopType = object(
{
id: idType,
method: literal("stop"),
},
["id"],
)
const exitType = object(
{
id: idType,
method: literal("exit"),
},
["id"],
)
const evalType = object(
{
id: idType,
method: literal("eval"),
params: object({
script: string,
}),
},
["id"],
)
const initType = object({
id: idType.optional(),
method: literal("init"),
params: object({
id: string,
kind: literals("install", "update", "restore").nullable(),
}),
})
const startType = object({
id: idType.optional(),
method: literal("start"),
})
const stopType = object({
id: idType.optional(),
method: literal("stop"),
})
const exitType = object({
id: idType.optional(),
method: literal("exit"),
params: object({
id: string,
target: string.nullable(),
}),
})
const evalType = object({
id: idType.optional(),
method: literal("eval"),
params: object({
script: string,
}),
})
const jsonParse = (x: string) => JSON.parse(x)
@@ -171,6 +154,8 @@ export class RpcListener {
if (!fs.existsSync(SOCKET_PARENT)) {
fs.mkdirSync(SOCKET_PARENT, { recursive: true })
}
if (fs.existsSync(SOCKET_PATH)) fs.rmSync(SOCKET_PATH, { force: true })
this.unixSocketServer.listen(SOCKET_PATH)
this.unixSocketServer.on("connection", (s) => {
@@ -238,21 +223,6 @@ export class RpcListener {
return this._system
}
private callbackHolders: Map<string, CallbackHolder> = new Map()
private removeCallbackHolderFor(procedure: string) {
const prev = this.callbackHolders.get(procedure)
if (prev) {
this.callbackHolders.delete(procedure)
this.callbacks?.removeChild(prev)
}
}
private callbackHolderFor(procedure: string): CallbackHolder {
this.removeCallbackHolderFor(procedure)
const callbackHolder = this.callbacks!.child()
this.callbackHolders.set(procedure, callbackHolder)
return callbackHolder
}
callCallback(callback: number, args: any[]): void {
if (this.callbacks) {
this.callbacks
@@ -272,11 +242,11 @@ export class RpcListener {
.when(runType, async ({ id, params }) => {
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
const { input, timeout, id: procedureId } = params
const { input, timeout, id: eventId } = params
const result = this.getResult(
procedure,
system,
procedureId,
eventId,
timeout,
input,
)
@@ -286,11 +256,11 @@ export class RpcListener {
.when(sandboxRunType, async ({ id, params }) => {
const system = this.system
const procedure = jsonPath.unsafeCast(params.procedure)
const { input, timeout, id: procedureId } = params
const { input, timeout, id: eventId } = params
const result = this.getResult(
procedure,
system,
procedureId,
eventId,
timeout,
input,
)
@@ -302,9 +272,10 @@ export class RpcListener {
return null
})
.when(startType, async ({ id }) => {
const callbacks = this.callbackHolderFor("main")
const callbacks =
this.callbacks?.getChild("main") || this.callbacks?.child("main")
const effects = makeEffects({
procedureId: null,
eventId: null,
callbacks,
})
return handleRpc(
@@ -313,21 +284,35 @@ export class RpcListener {
)
})
.when(stopType, async ({ id }) => {
this.removeCallbackHolderFor("main")
this.callbacks?.removeChild("main")
return handleRpc(
id,
this.system.stop().then((result) => ({ result })),
)
})
.when(exitType, async ({ id }) => {
.when(exitType, async ({ id, params }) => {
return handleRpc(
id,
(async () => {
if (this._system) await this._system.exit()
if (this._system) {
let target = null
if (params.target)
try {
target = ExtendedVersion.parse(params.target)
} catch (_) {
target = VersionRange.parse(params.target).normalize()
}
await this._system.exit(
makeEffects({
eventId: params.id,
}),
target,
)
}
})().then((result) => ({ result })),
)
})
.when(initType, async ({ id }) => {
.when(initType, async ({ id, params }) => {
return handleRpc(
id,
(async () => {
@@ -335,16 +320,19 @@ export class RpcListener {
const system = await this.getDependencies.system()
this.callbacks = new CallbackHolder(
makeEffects({
procedureId: null,
eventId: params.id,
}),
)
const callbacks = this.callbackHolderFor("containerInit")
await system.containerInit(
const callbacks = this.callbacks.child("init")
console.error("Initializing...")
await system.init(
makeEffects({
procedureId: null,
eventId: params.id,
callbacks,
}),
params.kind,
)
console.error("Initialization complete.")
this._system = system
}
})().then((result) => ({ result })),
@@ -377,7 +365,7 @@ export class RpcListener {
)
})
.when(
shape({ id: idType, method: string }, ["id"]),
shape({ id: idType.optional(), method: string }),
({ id, method }) => ({
jsonrpc,
id,
@@ -411,8 +399,8 @@ export class RpcListener {
private getResult(
procedure: typeof jsonPath._TYPE,
system: System,
procedureId: string,
timeout: number | undefined,
eventId: string,
timeout: number | null | undefined,
input: any,
) {
const ensureResultTypeShape = (
@@ -420,9 +408,9 @@ export class RpcListener {
): { result: any } => {
return { result }
}
const callbacks = this.callbackHolderFor(procedure)
const callbacks = this.callbacks?.child(procedure)
const effects = makeEffects({
procedureId,
eventId,
callbacks,
})
@@ -430,16 +418,6 @@ export class RpcListener {
switch (procedure) {
case "/backup/create":
return system.createBackup(effects, timeout || null)
case "/backup/restore":
return system.restoreBackup(effects, timeout || null)
case "/packageInit":
return system.packageInit(effects, timeout || null)
case "/packageUninit":
return system.packageUninit(
effects,
string.optional().unsafeCast(input),
timeout || null,
)
default:
const procedures = unNestPath(procedure)
switch (true) {
@@ -461,14 +439,10 @@ export class RpcListener {
})().then(ensureResultTypeShape, (error) =>
matches(error)
.when(
object(
{
error: string,
code: number,
},
["code"],
{ code: 0 },
),
object({
error: string,
code: number.defaultTo(0),
}),
(error) => ({
error: {
code: error.code,


@@ -7,13 +7,22 @@ import { Volume } from "./matchVolume"
import {
CommandOptions,
ExecOptions,
ExecSpawnable,
SubContainerOwned,
} from "@start9labs/start-sdk/package/lib/util/SubContainer"
import { Mounts } from "@start9labs/start-sdk/package/lib/mainFn/Mounts"
import { Manifest } from "@start9labs/start-sdk/base/lib/osBindings"
import { BackupEffects } from "@start9labs/start-sdk/package/lib/backup/Backups"
import { Drop } from "@start9labs/start-sdk/package/lib/util"
import { SDKManifest } from "@start9labs/start-sdk/base/lib/types"
export const exec = promisify(cp.exec)
export const execFile = promisify(cp.execFile)
export class DockerProcedureContainer {
private constructor(private readonly subcontainer: ExecSpawnable) {}
export class DockerProcedureContainer extends Drop {
private constructor(
private readonly subcontainer: SubContainer<SDKManifest>,
) {
super()
}
static async of(
effects: T.Effects,
@@ -21,7 +30,7 @@ export class DockerProcedureContainer {
data: DockerProcedure,
volumes: { [id: VolumeId]: Volume },
name: string,
options: { subcontainer?: ExecSpawnable } = {},
options: { subcontainer?: SubContainer<SDKManifest> } = {},
) {
const subcontainer =
options?.subcontainer ??
@@ -41,9 +50,10 @@ export class DockerProcedureContainer {
volumes: { [id: VolumeId]: Volume },
name: string,
) {
const subcontainer = await SubContainer.of(
effects,
const subcontainer = await SubContainerOwned.of(
effects as BackupEffects,
{ imageId: data.image },
null,
name,
)
@@ -57,13 +67,19 @@ export class DockerProcedureContainer {
const volumeMount = volumes[mount]
if (volumeMount.type === "data") {
await subcontainer.mount(
{ type: "volume", id: mount, subpath: null, readonly: false },
mounts[mount],
Mounts.of().mountVolume({
volumeId: mount,
subpath: null,
mountpoint: mounts[mount],
readonly: false,
}),
)
} else if (volumeMount.type === "assets") {
await subcontainer.mount(
{ type: "assets", subpath: mount },
mounts[mount],
Mounts.of().mountAssets({
subpath: mount,
mountpoint: mounts[mount],
}),
)
} else if (volumeMount.type === "certificate") {
const hostnames = [
@@ -95,21 +111,22 @@ export class DockerProcedureContainer {
key,
)
} else if (volumeMount.type === "pointer") {
await effects
.mount({
location: path,
target: {
packageId: volumeMount["package-id"],
subpath: volumeMount.path,
readonly: volumeMount.readonly,
volumeId: volumeMount["volume-id"],
},
})
.catch(console.warn)
await effects.mount({
location: path,
target: {
packageId: volumeMount["package-id"],
subpath: volumeMount.path,
readonly: volumeMount.readonly,
volumeId: volumeMount["volume-id"],
filetype: "directory",
},
})
} else if (volumeMount.type === "backup") {
await subcontainer.mount(
{ type: "backup", subpath: null },
mounts[mount],
Mounts.of().mountBackups({
subpath: null,
mountpoint: mounts[mount],
}),
)
}
}
@@ -151,7 +168,11 @@ export class DockerProcedureContainer {
}
}
async spawn(commands: string[]): Promise<cp.ChildProcess> {
return await this.subcontainer.spawn(commands)
// async spawn(commands: string[]): Promise<cp.ChildProcess> {
// return await this.subcontainer.spawn(commands)
// }
onDrop(): void {
this.subcontainer.destroy?.()
}
}


@@ -6,6 +6,8 @@ import { Daemon } from "@start9labs/start-sdk/package/lib/mainFn/Daemon"
import { Effects } from "../../../Models/Effects"
import { off } from "node:process"
import { CommandController } from "@start9labs/start-sdk/package/lib/mainFn/CommandController"
import { SDKManifest } from "@start9labs/start-sdk/base/lib/types"
import { SubContainerRc } from "@start9labs/start-sdk/package/lib/util/SubContainer"
const EMBASSY_HEALTH_INTERVAL = 15 * 1000
const EMBASSY_PROPERTIES_LOOP = 30 * 1000
@@ -15,8 +17,13 @@ const EMBASSY_PROPERTIES_LOOP = 30 * 1000
* Also, this has an ability to clean itself up too if need be.
*/
export class MainLoop {
private subcontainerRc?: SubContainerRc<SDKManifest>
get mainSubContainerHandle() {
return this.mainEvent?.daemon?.subContainerHandle
this.subcontainerRc =
this.subcontainerRc ??
this.mainEvent?.daemon?.subcontainerRc() ??
undefined
return this.subcontainerRc
}
private healthLoops?: {
name: string
@@ -24,7 +31,7 @@ export class MainLoop {
}[]
private mainEvent?: {
daemon: Daemon
daemon: Daemon<SDKManifest>
}
private constructor(
@@ -55,28 +62,20 @@ export class MainLoop {
if (jsMain) {
throw new Error("Unreachable")
}
const daemon = new Daemon(async () => {
const subcontainer = await DockerProcedureContainer.createSubContainer(
effects,
this.system.manifest.id,
this.system.manifest.main,
this.system.manifest.volumes,
`Main - ${currentCommand.join(" ")}`,
)
return CommandController.of()(
this.effects,
subcontainer,
currentCommand,
{
runAsInit: true,
env: {
TINI_SUBREAPER: "true",
},
sigtermTimeout: utils.inMs(
this.system.manifest.main["sigterm-timeout"],
),
},
)
const subcontainer = await DockerProcedureContainer.createSubContainer(
effects,
this.system.manifest.id,
this.system.manifest.main,
this.system.manifest.volumes,
`Main - ${currentCommand.join(" ")}`,
)
const daemon = await Daemon.of()(this.effects, subcontainer, {
command: currentCommand,
runAsInit: true,
env: {
TINI_SUBREAPER: "true",
},
sigtermTimeout: utils.inMs(this.system.manifest.main["sigterm-timeout"]),
})
daemon.start()


@@ -0,0 +1,153 @@
export default {
nodes: {
type: "list",
subtype: "union",
name: "Lightning Nodes",
description: "List of Lightning Network node instances to manage",
range: "[1,*)",
default: ["lnd"],
spec: {
type: "string",
"display-as": "{{name}}",
"unique-by": "name",
name: "Node Implementation",
tag: {
id: "type",
name: "Type",
description:
"- LND: Lightning Network Daemon from Lightning Labs\n- CLN: Core Lightning from Blockstream\n",
"variant-names": {
lnd: "Lightning Network Daemon (LND)",
"c-lightning": "Core Lightning (CLN)",
},
},
default: "lnd",
variants: {
lnd: {
name: {
type: "string",
name: "Node Name",
description: "Name of this node in the list",
default: "StartOS LND",
nullable: false,
},
"connection-settings": {
type: "union",
name: "Connection Settings",
description: "The Lightning Network Daemon node to connect to.",
tag: {
id: "type",
name: "Type",
description:
"- Internal: The Lightning Network Daemon service installed to your StartOS server.\n- External: A Lightning Network Daemon instance running on a remote device (advanced).\n",
"variant-names": {
internal: "Internal",
external: "External",
},
},
default: "internal",
variants: {
internal: {},
external: {
address: {
type: "string",
name: "Public Address",
description:
"The public address of your LND REST server\nNOTE: RTL does not support a .onion URL here\n",
nullable: false,
},
"rest-port": {
type: "number",
name: "REST Port",
description:
"The port that your Lightning Network Daemon REST server is bound to",
nullable: false,
range: "[0,65535]",
integral: true,
default: 8080,
},
macaroon: {
type: "string",
name: "Macaroon",
description:
'Your admin.macaroon file, Base64URL encoded. This is the same as the value after "macaroon=" in your lndconnect URL.',
nullable: false,
masked: true,
pattern: "[=A-Za-z0-9_-]+",
"pattern-description":
"Macaroon must be encoded in Base64URL format (only A-Z, a-z, 0-9, _, - and = allowed)",
},
},
},
},
},
"c-lightning": {
name: {
type: "string",
name: "Node Name",
description: "Name of this node in the list",
default: "StartOS CLN",
nullable: false,
},
"connection-settings": {
type: "union",
name: "Connection Settings",
description: "The Core Lightning (CLN) node to connect to.",
tag: {
id: "type",
name: "Type",
description:
"- Internal: The Core Lightning (CLN) service installed to your StartOS server.\n- External: A Core Lightning (CLN) instance running on a remote device (advanced).\n",
"variant-names": {
internal: "Internal",
external: "External",
},
},
default: "internal",
variants: {
internal: {},
external: {
address: {
type: "string",
name: "Public Address",
description:
"The public address of your CLNRest server\nNOTE: RTL does not support a .onion URL here\n",
nullable: false,
},
"rest-port": {
type: "number",
name: "CLNRest Port",
description: "The port that your CLNRest server is bound to",
nullable: false,
range: "[0,65535]",
integral: true,
default: 3010,
},
macaroon: {
type: "string",
name: "Rune",
description:
"Your CLNRest unrestricted Rune, Base64URL encoded.",
nullable: false,
masked: true,
},
},
},
},
},
},
},
},
password: {
type: "string",
name: "Password",
description: "The password for your Ride the Lightning dashboard",
nullable: false,
copyable: true,
masked: true,
default: {
charset: "a-z,A-Z,0-9",
len: 22,
},
},
}


@@ -1,5 +1,30 @@
// Jest Snapshot v1, https://goo.gl/fbAQLP
exports[`transformConfigSpec transformConfigSpec(RTL) 1`] = `
{
"password": {
"default": {
"charset": "a-z,A-Z,0-9",
"len": 22,
},
"description": "The password for your Ride the Lightning dashboard",
"disabled": false,
"generate": null,
"immutable": false,
"inputmode": "text",
"masked": true,
"maxLength": null,
"minLength": null,
"name": "Password",
"patterns": [],
"placeholder": null,
"required": true,
"type": "text",
"warning": null,
},
}
`;
exports[`transformConfigSpec transformConfigSpec(bitcoind) 1`] = `
{
"advanced": {


@@ -1,5 +1,8 @@
import {
ExtendedVersion,
FileHelper,
getDataVersion,
overlaps,
types as T,
utils,
VersionRange,
@@ -55,8 +58,21 @@ function todo(): never {
const MANIFEST_LOCATION = "/usr/lib/startos/package/embassyManifest.json"
export const EMBASSY_JS_LOCATION = "/usr/lib/startos/package/embassy.js"
const EMBASSY_POINTER_PATH_PREFIX = "/embassyConfig" as utils.StorePath
const EMBASSY_DEPENDS_ON_PATH_PREFIX = "/embassyDependsOn" as utils.StorePath
const configFile = FileHelper.json(
{
volumeId: "embassy",
subpath: "config.json",
},
matches.any,
)
const dependsOnFile = FileHelper.json(
{
volumeId: "embassy",
subpath: "dependsOn.json",
},
dictionary([string, array(string)]),
)
const matchResult = object({
result: any,
@@ -94,47 +110,48 @@ const fromReturnType = <A>(a: U.ResultType<A>): A => {
return assertNever(a)
}
const matchSetResult = object(
{
"depends-on": dictionary([string, array(string)]),
dependsOn: dictionary([string, array(string)]),
signal: literals(
"SIGTERM",
"SIGHUP",
"SIGINT",
"SIGQUIT",
"SIGILL",
"SIGTRAP",
"SIGABRT",
"SIGBUS",
"SIGFPE",
"SIGKILL",
"SIGUSR1",
"SIGSEGV",
"SIGUSR2",
"SIGPIPE",
"SIGALRM",
"SIGSTKFLT",
"SIGCHLD",
"SIGCONT",
"SIGSTOP",
"SIGTSTP",
"SIGTTIN",
"SIGTTOU",
"SIGURG",
"SIGXCPU",
"SIGXFSZ",
"SIGVTALRM",
"SIGPROF",
"SIGWINCH",
"SIGIO",
"SIGPWR",
"SIGSYS",
"SIGINFO",
),
},
["depends-on", "dependsOn"],
)
const matchSetResult = object({
"depends-on": dictionary([string, array(string)])
.nullable()
.optional(),
dependsOn: dictionary([string, array(string)])
.nullable()
.optional(),
signal: literals(
"SIGTERM",
"SIGHUP",
"SIGINT",
"SIGQUIT",
"SIGILL",
"SIGTRAP",
"SIGABRT",
"SIGBUS",
"SIGFPE",
"SIGKILL",
"SIGUSR1",
"SIGSEGV",
"SIGUSR2",
"SIGPIPE",
"SIGALRM",
"SIGSTKFLT",
"SIGCHLD",
"SIGCONT",
"SIGSTOP",
"SIGTSTP",
"SIGTTIN",
"SIGTTOU",
"SIGURG",
"SIGXCPU",
"SIGXFSZ",
"SIGVTALRM",
"SIGPROF",
"SIGWINCH",
"SIGIO",
"SIGPWR",
"SIGSYS",
"SIGINFO",
),
})
type OldGetConfigRes = {
config?: null | Record<string, unknown>
@@ -174,14 +191,14 @@ export type PackagePropertiesV2 = {
}
export type PackagePropertyString = {
type: "string"
description?: string
description?: string | null
value: string
/** Let's the ui make this copyable button */
copyable?: boolean
copyable?: boolean | null
/** Let the ui create a qr for this field */
qr?: boolean
qr?: boolean | null
/** Hiding the value unless toggled off for field */
masked?: boolean
masked?: boolean | null
}
export type PackagePropertyObject = {
value: PackagePropertiesV2
@@ -225,17 +242,14 @@ const matchPackagePropertyObject: Parser<unknown, PackagePropertyObject> =
})
const matchPackagePropertyString: Parser<unknown, PackagePropertyString> =
object(
{
type: literal("string"),
description: string,
value: string,
copyable: boolean,
qr: boolean,
masked: boolean,
},
["copyable", "description", "qr", "masked"],
)
object({
type: literal("string"),
description: string.nullable().optional(),
value: string,
copyable: boolean.nullable().optional(),
qr: boolean.nullable().optional(),
masked: boolean.nullable().optional(),
})
setMatchPackageProperties(
dictionary([
string,
@@ -275,6 +289,7 @@ function convertProperties(
const DEFAULT_REGISTRY = "https://registry.start9.com"
export class SystemForEmbassy implements System {
private version: ExtendedVersion
currentRunning: MainLoop | undefined
static async of(manifestLocation: string = MANIFEST_LOCATION) {
const moduleCode = await import(EMBASSY_JS_LOCATION)
@@ -296,11 +311,37 @@ export class SystemForEmbassy implements System {
constructor(
readonly manifest: Manifest,
readonly moduleCode: Partial<U.ExpectedExports>,
) {}
) {
this.version = ExtendedVersion.parseEmver(manifest.version)
if (
this.manifest.id === "bitcoind" &&
this.manifest.title.toLowerCase().includes("knots")
)
this.version.flavor = "knots"
async containerInit(effects: Effects): Promise<void> {
if (
this.manifest.id === "lnd" ||
this.manifest.id === "ride-the-lightning" ||
this.manifest.id === "datum"
) {
this.version.upstream.prerelease = ["beta"]
} else if (
this.manifest.id === "lightning-terminal" ||
this.manifest.id === "robosats"
) {
this.version.upstream.prerelease = ["alpha"]
}
}
async init(
effects: Effects,
kind: "install" | "update" | "restore" | null,
): Promise<void> {
if (kind === "restore") {
await this.restoreBackup(effects, null)
}
for (let depId in this.manifest.dependencies) {
if (this.manifest.dependencies[depId].config) {
if (this.manifest.dependencies[depId]?.config) {
await this.dependenciesAutoconfig(effects, depId, null)
}
}
@@ -308,6 +349,9 @@ export class SystemForEmbassy implements System {
await this.exportActions(effects)
await this.exportNetwork(effects)
await this.containerSetDependencies(effects)
if (kind === "install" || kind === "update") {
await this.packageInit(effects, null)
}
}
async containerSetDependencies(effects: T.Effects) {
const oldDeps: Record<string, string[]> = Object.fromEntries(
@@ -347,16 +391,23 @@ export class SystemForEmbassy implements System {
}
async packageInit(effects: Effects, timeoutMs: number | null): Promise<void> {
const previousVersion = await effects.getDataVersion()
const previousVersion = await getDataVersion(effects)
if (previousVersion) {
if (
(await this.migration(effects, { from: previousVersion }, timeoutMs))
.configured
) {
await effects.action.clearRequests({ only: ["needs-config"] })
const migrationRes = await this.migration(
effects,
{ from: previousVersion },
timeoutMs,
)
if (migrationRes) {
if (migrationRes.configured)
await effects.action.clearTasks({ only: ["needs-config"] })
await configFile.write(
effects,
await this.getConfig(effects, timeoutMs),
)
}
} else if (this.manifest.config) {
await effects.action.request({
await effects.action.createTask({
packageId: this.manifest.id,
actionId: "config",
severity: "critical",
@@ -364,9 +415,11 @@ export class SystemForEmbassy implements System {
reason: "This service must be configured before it can be run",
})
}
await effects.setDataVersion({
version: ExtendedVersion.parseEmver(this.manifest.version).toString(),
version: this.version.toString(),
})
// @FullMetal: package hacks go here
}
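The new `init(effects, kind)` entry point above folds restore, dependency autoconfig, and the old `packageInit` into one ordered flow. A hedged sketch of that ordering as implied by the diff (step names are illustrative labels, not codebase identifiers): restores run first, and install/update then either migrates from the previous data version or files a critical "needs config" task.

```typescript
type InitKind = "install" | "update" | "restore" | null

// Illustrative only: returns the sequence of phases init would perform.
function initSteps(kind: InitKind, previousDataVersion: string | null): string[] {
  const steps: string[] = []
  if (kind === "restore") steps.push("restoreBackup")
  steps.push("autoconfigDependencies", "exportActions", "exportNetwork")
  if (kind === "install" || kind === "update") {
    // packageInit: migrate if data exists, otherwise demand configuration
    steps.push(previousDataVersion ? "runMigrations" : "createNeedsConfigTask")
    steps.push("setDataVersion")
  }
  return steps
}

console.log(initSteps("install", null))
```

Note the task-queue rename visible in the same hunks: `action.request`/`clearRequests` become `action.createTask`/`clearTasks`.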
async exportNetwork(effects: Effects) {
for (const [id, interfaceValue] of Object.entries(
@@ -441,7 +494,7 @@ export class SystemForEmbassy implements System {
masked: false,
path: "",
schemeOverride: null,
search: {},
query: {},
username: null,
}),
])
@@ -456,13 +509,18 @@ export class SystemForEmbassy implements System {
): Promise<T.ActionInput | null> {
if (actionId === "config") {
const config = await this.getConfig(effects, timeoutMs)
return { spec: config.spec, value: config.config }
return {
eventId: effects.eventId!,
spec: config.spec,
value: config.config,
}
} else if (actionId === "properties") {
return null
} else {
const oldSpec = this.manifest.actions?.[actionId]?.["input-spec"]
if (!oldSpec) return null
return {
eventId: effects.eventId!,
spec: transformConfigSpec(oldSpec as OldConfigSpec),
value: null,
}
@@ -543,14 +601,14 @@ export class SystemForEmbassy implements System {
}
await effects.action.clear({ except: Object.keys(actions) })
}
async packageUninit(
async uninit(
effects: Effects,
nextVersion: Optional<string>,
timeoutMs: number | null,
target: ExtendedVersion | VersionRange | null,
timeoutMs?: number | null,
): Promise<void> {
await this.currentRunning?.clean({ timeout: timeoutMs ?? undefined })
if (nextVersion) {
await this.migration(effects, { to: nextVersion }, timeoutMs)
if (target) {
await this.migration(effects, { to: target }, timeoutMs ?? null)
}
await effects.setMainStatus({ status: "stopped" })
}
@@ -577,11 +635,21 @@ export class SystemForEmbassy implements System {
const moduleCode = await this.moduleCode
await moduleCode.createBackup?.(polyfillEffects(effects, this.manifest))
}
const dataVersion = await effects.getDataVersion()
if (dataVersion)
await fs.writeFile("/media/startos/backup/dataVersion.txt", dataVersion, {
encoding: "utf-8",
})
}
async restoreBackup(
effects: Effects,
timeoutMs: number | null,
): Promise<void> {
const store = await fs
.readFile("/media/startos/backup/store.json", {
encoding: "utf-8",
})
.catch((_) => null)
const restoreBackup = this.manifest.backup.restore
if (restoreBackup.type === "docker") {
const commands = [restoreBackup.entrypoint, ...restoreBackup.args]
@@ -600,6 +668,13 @@ export class SystemForEmbassy implements System {
const moduleCode = await this.moduleCode
await moduleCode.restoreBackup?.(polyfillEffects(effects, this.manifest))
}
const dataVersion = await fs
.readFile("/media/startos/backup/dataVersion.txt", {
encoding: "utf-8",
})
.catch((_) => null)
if (dataVersion) await effects.setDataVersion({ version: dataVersion })
}
async getConfig(effects: Effects, timeoutMs: number | null) {
return this.getConfigUncleaned(effects, timeoutMs).then(convertToNewConfig)
@@ -649,10 +724,7 @@ export class SystemForEmbassy implements System {
structuredClone(newConfigWithoutPointers as Record<string, unknown>),
)
await updateConfig(effects, this.manifest, spec, newConfig)
await effects.store.set({
path: EMBASSY_POINTER_PATH_PREFIX,
value: newConfig,
})
await configFile.write(effects, newConfig)
const setConfigValue = this.manifest.config?.set
if (!setConfigValue) return
if (setConfigValue.type === "docker") {
@@ -706,15 +778,11 @@ export class SystemForEmbassy implements System {
rawDepends: { [x: string]: readonly string[] },
configuring: boolean,
) {
const storedDependsOn = (await effects.store.get({
packageId: this.manifest.id,
path: EMBASSY_DEPENDS_ON_PATH_PREFIX,
})) as Record<string, readonly string[]>
const storedDependsOn = await dependsOnFile.read().once()
const requiredDeps = {
...Object.fromEntries(
Object.entries(this.manifest.dependencies || {})
?.filter((x) => x[1].requirement.type === "required")
Object.entries(this.manifest.dependencies ?? {})
.filter(([k, v]) => v?.requirement.type === "required")
.map((x) => [x[0], []]) || [],
),
}
@@ -728,10 +796,7 @@ export class SystemForEmbassy implements System {
? storedDependsOn
: requiredDeps
await effects.store.set({
path: EMBASSY_DEPENDS_ON_PATH_PREFIX,
value: dependsOn,
})
await dependsOnFile.write(effects, dependsOn)
await effects.setDependencies({
dependencies: Object.entries(dependsOn).flatMap(
@@ -755,31 +820,33 @@ export class SystemForEmbassy implements System {
async migration(
effects: Effects,
version: { from: string } | { to: string },
version:
| { from: VersionRange | ExtendedVersion }
| { to: VersionRange | ExtendedVersion },
timeoutMs: number | null,
): Promise<{ configured: boolean }> {
): Promise<{ configured: boolean } | null> {
let migration
let args: [string, ...string[]]
if ("from" in version) {
args = [version.from, "from"]
const fromExver = ExtendedVersion.parse(version.from)
if (overlaps(this.version, version.from)) return null
args = [version.from.toString(), "from"]
if (!this.manifest.migrations) return { configured: true }
migration = Object.entries(this.manifest.migrations.from)
.map(
([version, procedure]) =>
[VersionRange.parseEmver(version), procedure] as const,
)
.find(([versionEmver, _]) => versionEmver.satisfiedBy(fromExver))
.find(([versionEmver, _]) => overlaps(versionEmver, version.from))
} else {
args = [version.to, "to"]
const toExver = ExtendedVersion.parse(version.to)
if (overlaps(this.version, version.to)) return null
args = [version.to.toString(), "to"]
if (!this.manifest.migrations) return { configured: true }
migration = Object.entries(this.manifest.migrations.to)
.map(
([version, procedure]) =>
[VersionRange.parseEmver(version), procedure] as const,
)
.find(([versionEmver, _]) => versionEmver.satisfiedBy(toExver))
.find(([versionEmver, _]) => overlaps(versionEmver, version.to))
}
if (migration) {
@@ -815,7 +882,7 @@ export class SystemForEmbassy implements System {
})) as any
}
}
return { configured: true }
return null
}
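The rewritten `migration` above now returns `null` when nothing needs to run: if the requested `from`/`to` version overlaps the package's current version, the migration is a no-op instead of fabricating `{ configured: true }`. A toy sketch of that gate (exact-version tuples stand in for exver's `VersionRange`/`overlaps`, which accept real ranges):

```typescript
type Version = [number, number, number]

const cmp = (a: Version, b: Version) =>
  a[0] - b[0] || a[1] - b[1] || a[2] - b[2]

// Stand-in for exver's overlaps(): treats each version as a point range.
function overlaps(current: Version, other: Version): boolean {
  return cmp(current, other) === 0
}

function migrate(current: Version, from: Version): { configured: boolean } | null {
  if (overlaps(current, from)) return null // already at this version: no-op
  return { configured: true } // real code: run the matching migration procedure
}

console.log(migrate([0, 3, 6], [0, 3, 6])) // → null
```

The `null` result is what lets the caller in `packageInit` skip both the `needs-config` task cleanup and the config-file write when no migration ran.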
async properties(
effects: Effects,
@@ -969,43 +1036,52 @@ export class SystemForEmbassy implements System {
timeoutMs: number | null,
): Promise<void> {
// TODO: docker
const oldConfig = (await effects.store.get({
packageId: id,
path: EMBASSY_POINTER_PATH_PREFIX,
callback: () => {
this.dependenciesAutoconfig(effects, id, timeoutMs)
},
})) as U.Config
if (!oldConfig) return
const moduleCode = await this.moduleCode
const method = moduleCode?.dependencies?.[id]?.autoConfigure
if (!method) return
const newConfig = (await method(
polyfillEffects(effects, this.manifest),
JSON.parse(JSON.stringify(oldConfig)),
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})) as any
const diff = partialDiff(oldConfig, newConfig)
if (diff) {
await effects.action.request({
actionId: "config",
await effects.mount({
location: `/media/embassy/${id}`,
target: {
packageId: id,
replayId: `${id}/config`,
severity: "important",
reason: `Configure this dependency for the needs of ${this.manifest.title}`,
input: {
kind: "partial",
value: diff.diff,
},
when: {
condition: "input-not-matches",
once: false,
},
volumeId: "embassy",
subpath: null,
readonly: true,
filetype: "directory",
},
})
configFile
.withPath(`/media/embassy/${id}/config.json`)
.read()
.onChange(effects, async (oldConfig: U.Config) => {
if (!oldConfig) return { cancel: false }
const moduleCode = await this.moduleCode
const method = moduleCode?.dependencies?.[id]?.autoConfigure
if (!method) return { cancel: true }
const newConfig = (await method(
polyfillEffects(effects, this.manifest),
JSON.parse(JSON.stringify(oldConfig)),
).then((x) => {
if ("result" in x) return x.result
if ("error" in x) throw new Error("Error getting config: " + x.error)
throw new Error("Error getting config: " + x["error-code"][1])
})) as any
const diff = partialDiff(oldConfig, newConfig)
if (diff) {
await effects.action.createTask({
actionId: "config",
packageId: id,
replayId: `${id}/config`,
severity: "important",
reason: `Configure this dependency for the needs of ${this.manifest.title}`,
input: {
kind: "partial",
value: diff.diff,
},
when: {
condition: "input-not-matches",
once: false,
},
})
}
return { cancel: false }
})
}
}
}
@@ -1107,11 +1183,21 @@ async function updateConfig(
) {
if (specValue.target === "config") {
const jp = require("jsonpath")
const remoteConfig = await effects.store.get({
packageId: specValue["package-id"],
callback: () => effects.restart(),
path: EMBASSY_POINTER_PATH_PREFIX,
const depId = specValue["package-id"]
await effects.mount({
location: `/media/embassy/${depId}`,
target: {
packageId: depId,
volumeId: "embassy",
subpath: null,
readonly: true,
filetype: "directory",
},
})
const remoteConfig = configFile
.withPath(`/media/embassy/${depId}/config.json`)
.read()
.once()
console.debug(remoteConfig)
const configValue = specValue.multi
? jp.query(remoteConfig, specValue.selector)
@@ -1152,14 +1238,14 @@ async function updateConfig(
const url: string =
filled === null || filled.addressInfo === null
? ""
: catchFn(() =>
utils.hostnameInfoToAddress(
specValue.target === "lan-address"
: catchFn(
() =>
(specValue.target === "lan-address"
? filled.addressInfo!.localHostnames[0] ||
filled.addressInfo!.onionHostnames[0]
filled.addressInfo!.onionHostnames[0]
: filled.addressInfo!.onionHostnames[0] ||
filled.addressInfo!.localHostnames[0],
),
filled.addressInfo!.localHostnames[0]
).hostname.value,
) || ""
mutConfigValue[key] = url
}
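The `updateConfig` hunk above replaces cross-package `effects.store.get` with mounting the dependency's `embassy` volume read-only and reading its `config.json` from the mountpoint. A sketch of that pattern (the `Effects.mount` shape and paths are taken from the diff; the `readFile` parameter is a stand-in for the real filesystem so the flow can be demonstrated in isolation):

```typescript
type MountOpts = {
  location: string
  target: {
    packageId: string
    volumeId: "embassy"
    subpath: null
    readonly: true
    filetype: "directory"
  }
}
type EffectsLike = { mount(opts: MountOpts): Promise<void> }

// Mount the dependency's volume, then read its config from the mountpoint.
async function readDepConfig(
  effects: EffectsLike,
  depId: string,
  readFile: (path: string) => Promise<string>,
): Promise<Record<string, unknown>> {
  await effects.mount({
    location: `/media/embassy/${depId}`,
    target: {
      packageId: depId,
      volumeId: "embassy",
      subpath: null,
      readonly: true,
      filetype: "directory",
    },
  })
  return JSON.parse(await readFile(`/media/embassy/${depId}/config.json`))
}
```

The same mount-then-read pattern appears in `dependenciesAutoconfig`, where the config file is additionally watched via `onChange` to re-run autoconfiguration when the dependency's config changes.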


@@ -14,123 +14,113 @@ import {
import { matchVolume } from "./matchVolume"
import { matchDockerProcedure } from "../../../Models/DockerProcedure"
const matchJsProcedure = object(
{
type: literal("script"),
args: array(unknown),
},
["args"],
{
args: [],
},
)
const matchJsProcedure = object({
type: literal("script"),
args: array(unknown).nullable().optional().defaultTo([]),
})
const matchProcedure = some(matchDockerProcedure, matchJsProcedure)
export type Procedure = typeof matchProcedure._TYPE
const matchAction = object(
{
name: string,
description: string,
warning: string,
implementation: matchProcedure,
"allowed-statuses": array(literals("running", "stopped")),
"input-spec": unknown,
},
["warning", "input-spec", "input-spec"],
)
export const matchManifest = object(
{
id: string,
title: string,
version: string,
main: matchDockerProcedure,
assets: object(
{
assets: string,
scripts: string,
},
["assets", "scripts"],
const matchAction = object({
name: string,
description: string,
warning: string.nullable().optional(),
implementation: matchProcedure,
"allowed-statuses": array(literals("running", "stopped")),
"input-spec": unknown.nullable().optional(),
})
export const matchManifest = object({
id: string,
title: string,
version: string,
main: matchDockerProcedure,
assets: object({
assets: string.nullable().optional(),
scripts: string.nullable().optional(),
})
.nullable()
.optional(),
"health-checks": dictionary([
string,
every(
matchProcedure,
object({
name: string,
["success-message"]: string.nullable().optional(),
}),
),
"health-checks": dictionary([
string,
every(
matchProcedure,
object(
{
name: string,
["success-message"]: string,
},
["success-message"],
),
),
]),
config: object({
get: matchProcedure,
set: matchProcedure,
]),
config: object({
get: matchProcedure,
set: matchProcedure,
})
.nullable()
.optional(),
properties: matchProcedure.nullable().optional(),
volumes: dictionary([string, matchVolume]),
interfaces: dictionary([
string,
object({
name: string,
description: string,
"tor-config": object({
"port-mapping": dictionary([string, string]),
})
.nullable()
.optional(),
"lan-config": dictionary([
string,
object({
ssl: boolean,
internal: number,
}),
])
.nullable()
.optional(),
ui: boolean,
protocols: array(string),
}),
properties: matchProcedure,
volumes: dictionary([string, matchVolume]),
interfaces: dictionary([
string,
object(
{
name: string,
description: string,
"tor-config": object({
"port-mapping": dictionary([string, string]),
}),
"lan-config": dictionary([
string,
object({
ssl: boolean,
internal: number,
}),
]),
ui: boolean,
protocols: array(string),
},
["lan-config", "tor-config"],
]),
backup: object({
create: matchProcedure,
restore: matchProcedure,
}),
migrations: object({
to: dictionary([string, matchProcedure]),
from: dictionary([string, matchProcedure]),
})
.nullable()
.optional(),
dependencies: dictionary([
string,
object({
version: string,
requirement: some(
object({
type: literal("opt-in"),
how: string,
}),
object({
type: literal("opt-out"),
how: string,
}),
object({
type: literal("required"),
}),
),
]),
backup: object({
create: matchProcedure,
restore: matchProcedure,
}),
migrations: object({
to: dictionary([string, matchProcedure]),
from: dictionary([string, matchProcedure]),
}),
dependencies: dictionary([
string,
object(
{
version: string,
requirement: some(
object({
type: literal("opt-in"),
how: string,
}),
object({
type: literal("opt-out"),
how: string,
}),
object({
type: literal("required"),
}),
),
description: string,
config: object({
check: matchProcedure,
"auto-configure": matchProcedure,
}),
},
["description", "config"],
),
]),
description: string.nullable().optional(),
config: object({
check: matchProcedure,
"auto-configure": matchProcedure,
})
.nullable()
.optional(),
})
.nullable()
.optional(),
]),
actions: dictionary([string, matchAction]),
},
["config", "actions", "properties", "migrations", "dependencies"],
)
actions: dictionary([string, matchAction]),
})
export type Manifest = typeof matchManifest._TYPE


@@ -1,12 +1,9 @@
import { object, literal, string, boolean, some } from "ts-matches"
const matchDataVolume = object(
{
type: literal("data"),
readonly: boolean,
},
["readonly"],
)
const matchDataVolume = object({
type: literal("data"),
readonly: boolean.optional(),
})
const matchAssetVolume = object({
type: literal("assets"),
})


@@ -135,12 +135,9 @@ export const polyfillEffects = (
[input.command, ...(input.args || [])].join(" "),
)
const daemon = promiseSubcontainer.then((subcontainer) =>
daemons.runCommand()(
effects,
subcontainer,
[input.command, ...(input.args || [])],
{},
),
daemons.runCommand()(effects, subcontainer, {
command: [input.command, ...(input.args || [])],
}),
)
return {
wait: () =>
@@ -169,12 +166,12 @@ export const polyfillEffects = (
{ imageId: manifest.main.image },
commands,
{
mounts: Mounts.of().addVolume(
input.volumeId,
null,
"/drive",
false,
),
mounts: Mounts.of().mountVolume({
volumeId: input.volumeId,
subpath: null,
mountpoint: "/drive",
readonly: false,
}),
},
commands.join(" "),
)
@@ -206,12 +203,12 @@ export const polyfillEffects = (
{ imageId: manifest.main.image },
commands,
{
mounts: Mounts.of().addVolume(
input.volumeId,
null,
"/drive",
false,
),
mounts: Mounts.of().mountVolume({
volumeId: input.volumeId,
subpath: null,
mountpoint: "/drive",
readonly: false,
}),
},
commands.join(" "),
)


@@ -1,5 +1,10 @@
import { matchOldConfigSpec, transformConfigSpec } from "./transformConfigSpec"
import fixtureEmbasyPagesConfig from "./__fixtures__/embasyPagesConfig"
import {
matchOldConfigSpec,
matchOldValueSpecList,
transformConfigSpec,
} from "./transformConfigSpec"
import fixtureEmbassyPagesConfig from "./__fixtures__/embassyPagesConfig"
import fixtureRTLConfig from "./__fixtures__/rtlConfig"
import searNXG from "./__fixtures__/searNXG"
import bitcoind from "./__fixtures__/bitcoind"
import nostr from "./__fixtures__/nostr"
@@ -8,14 +13,25 @@ import nostrConfig2 from "./__fixtures__/nostrConfig2"
describe("transformConfigSpec", () => {
test("matchOldConfigSpec(embassyPages.homepage.variants[web-page])", () => {
matchOldConfigSpec.unsafeCast(
fixtureEmbasyPagesConfig.homepage.variants["web-page"],
fixtureEmbassyPagesConfig.homepage.variants["web-page"],
)
})
test("matchOldConfigSpec(embassyPages)", () => {
matchOldConfigSpec.unsafeCast(fixtureEmbasyPagesConfig)
matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
})
test("transformConfigSpec(embassyPages)", () => {
const spec = matchOldConfigSpec.unsafeCast(fixtureEmbasyPagesConfig)
const spec = matchOldConfigSpec.unsafeCast(fixtureEmbassyPagesConfig)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})
test("matchOldConfigSpec(RTL.nodes)", () => {
matchOldValueSpecList.unsafeCast(fixtureRTLConfig.nodes)
})
test("matchOldConfigSpec(RTL)", () => {
matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
})
test("transformConfigSpec(RTL)", () => {
const spec = matchOldConfigSpec.unsafeCast(fixtureRTLConfig)
expect(transformConfigSpec(spec)).toMatchSnapshot()
})


@@ -47,6 +47,7 @@ export function transformConfigSpec(oldSpec: OldConfigSpec): IST.InputSpec {
immutable: false,
}
} else if (oldVal.type === "list") {
if (isUnionList(oldVal)) return inputSpec
newVal = getListSpec(oldVal)
} else if (oldVal.type === "number") {
const range = Range.from(oldVal.range)
@@ -177,15 +178,17 @@ export function transformOldConfigToNew(
}
}
if (isList(val) && isObjectList(val)) {
if (isList(val)) {
if (!config[key]) return obj
newVal = (config[key] as object[]).map((obj) =>
transformOldConfigToNew(
matchOldConfigSpec.unsafeCast(val.spec.spec),
obj,
),
)
if (isObjectList(val)) {
newVal = (config[key] as object[]).map((obj) =>
transformOldConfigToNew(
matchOldConfigSpec.unsafeCast(val.spec.spec),
obj,
),
)
} else if (isUnionList(val)) return obj
}
if (isPointer(val)) {
@@ -203,6 +206,7 @@ export function transformNewConfigToOld(
spec: OldConfigSpec,
config: Record<string, any>,
): Record<string, any> {
if (!config) return config
return Object.entries(spec).reduce((obj, [key, val]) => {
let newVal = config[key]
@@ -223,13 +227,15 @@ export function transformNewConfigToOld(
}
}
if (isList(val) && isObjectList(val)) {
newVal = (config[key] as object[]).map((obj) =>
transformNewConfigToOld(
matchOldConfigSpec.unsafeCast(val.spec.spec),
obj,
),
)
if (isList(val)) {
if (isObjectList(val)) {
newVal = (config[key] as object[]).map((obj) =>
transformNewConfigToOld(
matchOldConfigSpec.unsafeCast(val.spec.spec),
obj,
),
)
} else if (isUnionList(val)) return obj
}
return {
@@ -375,15 +381,17 @@ function isNumberList(
): val is OldValueSpecList & { subtype: "number" } {
return val.subtype === "number"
}
function isObjectList(
val: OldValueSpecList,
): val is OldValueSpecList & { subtype: "object" } {
if (["union"].includes(val.subtype)) {
throw new Error("Invalid list subtype. enum, string, and object permitted.")
}
return val.subtype === "object"
}
function isUnionList(
val: OldValueSpecList,
): val is OldValueSpecList & { subtype: "union" } {
return val.subtype === "union"
}
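With `isUnionList` added above, union-typed lists are detected and passed through unchanged instead of making `isObjectList` throw, as the old guard did. A minimal sketch of how the two guards now partition the transform paths (the `OldList` shape is a simplified stand-in for `OldValueSpecList`):

```typescript
type OldList =
  | { subtype: "string" }
  | { subtype: "number" }
  | { subtype: "object"; spec: object }
  | { subtype: "union"; variants: object }

const isObjectList = (v: OldList): v is Extract<OldList, { subtype: "object" }> =>
  v.subtype === "object"
const isUnionList = (v: OldList): v is Extract<OldList, { subtype: "union" }> =>
  v.subtype === "union"

function transform(v: OldList): string {
  if (isUnionList(v)) return "skipped"   // union lists pass through untouched
  if (isObjectList(v)) return "recursed" // real code recurses into v.spec
  return "primitive"
}

console.log(transform({ subtype: "union", variants: {} })) // → "skipped"
```

This mirrors the `transformOldConfigToNew` / `transformNewConfigToOld` changes earlier in the hunk, where `isUnionList(val)` short-circuits with the accumulator unchanged.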
export type OldConfigSpec = Record<string, OldValueSpec>
const [_matchOldConfigSpec, setMatchOldConfigSpec] = deferred<unknown>()
export const matchOldConfigSpec = _matchOldConfigSpec as Parser<
@@ -396,100 +404,71 @@ export const matchOldDefaultString = anyOf(
)
type OldDefaultString = typeof matchOldDefaultString._TYPE
export const matchOldValueSpecString = object(
{
type: literals("string"),
name: string,
masked: boolean,
copyable: boolean,
nullable: boolean,
placeholder: string,
pattern: string,
"pattern-description": string,
default: matchOldDefaultString,
textarea: boolean,
description: string,
warning: string,
},
[
"masked",
"copyable",
"nullable",
"placeholder",
"pattern",
"pattern-description",
"default",
"textarea",
"description",
"warning",
],
)
export const matchOldValueSpecString = object({
type: literals("string"),
name: string,
masked: boolean.nullable().optional(),
copyable: boolean.nullable().optional(),
nullable: boolean.nullable().optional(),
placeholder: string.nullable().optional(),
pattern: string.nullable().optional(),
"pattern-description": string.nullable().optional(),
default: matchOldDefaultString.nullable().optional(),
textarea: boolean.nullable().optional(),
description: string.nullable().optional(),
warning: string.nullable().optional(),
})
export const matchOldValueSpecNumber = object(
{
type: literals("number"),
nullable: boolean,
name: string,
range: string,
integral: boolean,
default: number,
description: string,
warning: string,
units: string,
placeholder: anyOf(number, string),
},
["default", "description", "warning", "units", "placeholder"],
)
export const matchOldValueSpecNumber = object({
type: literals("number"),
nullable: boolean,
name: string,
range: string,
integral: boolean,
default: number.nullable().optional(),
description: string.nullable().optional(),
warning: string.nullable().optional(),
units: string.nullable().optional(),
placeholder: anyOf(number, string).nullable().optional(),
})
type OldValueSpecNumber = typeof matchOldValueSpecNumber._TYPE
export const matchOldValueSpecBoolean = object(
{
type: literals("boolean"),
default: boolean,
name: string,
description: string,
warning: string,
},
["description", "warning"],
)
export const matchOldValueSpecBoolean = object({
type: literals("boolean"),
default: boolean,
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
})
type OldValueSpecBoolean = typeof matchOldValueSpecBoolean._TYPE
const matchOldValueSpecObject = object(
{
type: literals("object"),
spec: _matchOldConfigSpec,
name: string,
description: string,
warning: string,
},
["description", "warning"],
)
const matchOldValueSpecObject = object({
type: literals("object"),
spec: _matchOldConfigSpec,
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
})
type OldValueSpecObject = typeof matchOldValueSpecObject._TYPE
const matchOldValueSpecEnum = object(
{
values: array(string),
"value-names": dictionary([string, string]),
type: literals("enum"),
default: string,
name: string,
description: string,
warning: string,
},
["description", "warning"],
)
const matchOldValueSpecEnum = object({
values: array(string),
"value-names": dictionary([string, string]),
type: literals("enum"),
default: string,
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
})
type OldValueSpecEnum = typeof matchOldValueSpecEnum._TYPE
const matchOldUnionTagSpec = object(
{
id: string, // The name of the field containing one of the union variants
"variant-names": dictionary([string, string]), // The name of each variant
name: string,
description: string,
warning: string,
},
["description", "warning"],
)
const matchOldUnionTagSpec = object({
id: string, // The name of the field containing one of the union variants
"variant-names": dictionary([string, string]), // The name of each variant
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
})
const matchOldValueSpecUnion = object({
type: literals("union"),
tag: matchOldUnionTagSpec,
@@ -514,57 +493,51 @@ setOldUniqueBy(
),
)
const matchOldListValueSpecObject = object(
{
spec: _matchOldConfigSpec, // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
"unique-by": matchOldUniqueBy, // indicates whether duplicates can be permitted in the list
"display-as": string, // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
},
["display-as", "unique-by"],
)
const matchOldListValueSpecString = object(
{
masked: boolean,
copyable: boolean,
pattern: string,
"pattern-description": string,
placeholder: string,
},
["pattern", "pattern-description", "placeholder", "copyable", "masked"],
)
const matchOldListValueSpecObject = object({
spec: _matchOldConfigSpec, // this is a mapped type of the config object at this level, replacing the object's values with specs on those values
"unique-by": matchOldUniqueBy.nullable().optional(), // indicates whether duplicates can be permitted in the list
"display-as": string.nullable().optional(), // this should be a handlebars template which can make use of the entire config which corresponds to 'spec'
})
const matchOldListValueSpecUnion = object({
"unique-by": matchOldUniqueBy.nullable().optional(),
"display-as": string.nullable().optional(),
tag: matchOldUnionTagSpec,
variants: dictionary([string, _matchOldConfigSpec]),
})
const matchOldListValueSpecString = object({
masked: boolean.nullable().optional(),
copyable: boolean.nullable().optional(),
pattern: string.nullable().optional(),
"pattern-description": string.nullable().optional(),
placeholder: string.nullable().optional(),
})
const matchOldListValueSpecEnum = object({
values: array(string),
"value-names": dictionary([string, string]),
})
const matchOldListValueSpecNumber = object(
{
range: string,
integral: boolean,
units: string,
placeholder: anyOf(number, string),
},
["units", "placeholder"],
)
const matchOldListValueSpecNumber = object({
range: string,
integral: boolean,
units: string.nullable().optional(),
placeholder: anyOf(number, string).nullable().optional(),
})
// represents a spec for a list
const matchOldValueSpecList = every(
object(
{
type: literals("list"),
range: string, // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
default: anyOf(
array(string),
array(number),
array(matchOldDefaultString),
array(object),
),
name: string,
description: string,
warning: string,
},
["description", "warning"],
),
export const matchOldValueSpecList = every(
object({
type: literals("list"),
range: string, // '[0,1]' (inclusive) OR '[0,*)' (right unbounded), normal math rules
default: anyOf(
array(string),
array(number),
array(matchOldDefaultString),
array(object),
),
name: string,
description: string.nullable().optional(),
warning: string.nullable().optional(),
}),
anyOf(
object({
subtype: literals("string"),
@@ -582,6 +555,10 @@ const matchOldValueSpecList = every(
subtype: literals("number"),
spec: matchOldListValueSpecNumber,
}),
object({
subtype: literals("union"),
spec: matchOldListValueSpecUnion,
}),
),
)
type OldValueSpecList = typeof matchOldValueSpecList._TYPE


@@ -1,7 +1,6 @@
import { System } from "../../Interfaces/System"
import { Effects } from "../../Models/Effects"
import { T, utils } from "@start9labs/start-sdk"
import { Optional } from "ts-matches/lib/parsers/interfaces"
import { ExtendedVersion, T, utils, VersionRange } from "@start9labs/start-sdk"
export const STARTOS_JS_LOCATION = "/usr/lib/startos/package/index.js"
@@ -11,6 +10,7 @@ type RunningMain = {
export class SystemForStartOs implements System {
private runningMain: RunningMain | undefined
private starting: boolean = false
static of() {
return new SystemForStartOs(require(STARTOS_JS_LOCATION))
@@ -19,22 +19,23 @@ export class SystemForStartOs implements System {
constructor(readonly abi: T.ABI) {
this
}
async containerInit(effects: Effects): Promise<void> {
return void (await this.abi.containerInit({ effects }))
}
async packageInit(
async init(
effects: Effects,
kind: "install" | "update" | "restore" | null,
): Promise<void> {
return void (await this.abi.init({ effects, kind }))
}
async exit(
effects: Effects,
target: ExtendedVersion | VersionRange | null,
timeoutMs: number | null = null,
): Promise<void> {
return void (await this.abi.packageInit({ effects }))
}
async packageUninit(
effects: Effects,
nextVersion: Optional<string> = null,
timeoutMs: number | null = null,
): Promise<void> {
return void (await this.abi.packageUninit({ effects, nextVersion }))
await this.stop()
return void (await this.abi.uninit({ effects, target }))
}
async createBackup(
effects: T.Effects,
timeoutMs: number | null,
@@ -43,14 +44,6 @@ export class SystemForStartOs implements System {
effects,
}))
}
async restoreBackup(
effects: T.Effects,
timeoutMs: number | null,
): Promise<void> {
return void (await this.abi.restoreBackup({
effects,
}))
}
getActionInput(
effects: Effects,
id: string,
@@ -71,28 +64,31 @@ export class SystemForStartOs implements System {
return action.run({ effects, input })
}
async exit(): Promise<void> {}
async start(effects: Effects): Promise<void> {
if (this.runningMain) return
effects.constRetry = utils.once(() => effects.restart())
let mainOnTerm: () => Promise<void> | undefined
const started = async (onTerm: () => Promise<void>) => {
await effects.setMainStatus({ status: "running" })
mainOnTerm = onTerm
return null
}
const daemons = await (
await this.abi.main({
effects,
started,
})
).build()
this.runningMain = {
stop: async () => {
if (mainOnTerm) await mainOnTerm()
await daemons.term()
},
try {
if (this.runningMain || this.starting) return
this.starting = true
effects.constRetry = utils.once(() => effects.restart())
let mainOnTerm: () => Promise<void> | undefined
const started = async (onTerm: () => Promise<void>) => {
await effects.setMainStatus({ status: "running" })
mainOnTerm = onTerm
return null
}
const daemons = await (
await this.abi.main({
effects,
started,
})
).build()
this.runningMain = {
stop: async () => {
if (mainOnTerm) await mainOnTerm()
await daemons.term()
},
}
} finally {
this.starting = false
}
}


@@ -1,13 +1,13 @@
import { types as T } from "@start9labs/start-sdk"
import {
ExtendedVersion,
types as T,
VersionRange,
} from "@start9labs/start-sdk"
import { Effects } from "../Models/Effects"
import { CallbackHolder } from "../Models/CallbackHolder"
import { Optional } from "ts-matches/lib/parsers/interfaces"
export type Procedure =
| "/packageInit"
| "/packageUninit"
| "/backup/create"
| "/backup/restore"
| `/actions/${string}/getInput`
| `/actions/${string}/run`
@@ -15,20 +15,15 @@ export type ExecuteResult =
| { ok: unknown }
| { err: { code: number; message: string } }
export type System = {
containerInit(effects: T.Effects): Promise<void>
init(
effects: T.Effects,
kind: "install" | "update" | "restore" | null,
): Promise<void>
start(effects: T.Effects): Promise<void>
stop(): Promise<void>
packageInit(effects: Effects, timeoutMs: number | null): Promise<void>
packageUninit(
effects: Effects,
nextVersion: Optional<string>,
timeoutMs: number | null,
): Promise<void>
createBackup(effects: T.Effects, timeoutMs: number | null): Promise<void>
restoreBackup(effects: T.Effects, timeoutMs: number | null): Promise<void>
runAction(
effects: Effects,
actionId: string,
@@ -41,7 +36,10 @@ export type System = {
timeoutMs: number | null,
): Promise<T.ActionInput | null>
exit(): Promise<void>
exit(
effects: Effects,
target: ExtendedVersion | VersionRange | null,
): Promise<void>
}
export type RunningMain = {


@@ -14,7 +14,8 @@ export class CallbackHolder {
constructor(private effects?: T.Effects) {}
private callbacks = new Map<number, Function>()
private children: WeakRef<CallbackHolder>[] = []
private onLeaveContextCallbacks: Function[] = []
private children: Map<string, CallbackHolder> = new Map()
private newId() {
return CallbackIdCell.inc++
}
@@ -32,23 +33,30 @@ export class CallbackHolder {
})
return id
}
child(): CallbackHolder {
const child = new CallbackHolder()
this.children.push(new WeakRef(child))
child(name: string): CallbackHolder {
this.removeChild(name)
const child = new CallbackHolder(this.effects)
this.children.set(name, child)
return child
}
removeChild(child: CallbackHolder) {
this.children = this.children.filter((c) => {
const ref = c.deref()
return ref && ref !== child
})
getChild(name: string): CallbackHolder | null {
return this.children.get(name) || null
}
removeChild(name: string) {
const child = this.children.get(name)
if (child) {
child.leaveContext()
this.children.delete(name)
}
}
private getCallback(index: number): Function | undefined {
let callback = this.callbacks.get(index)
if (callback) this.callbacks.delete(index)
else {
for (let i = 0; i < this.children.length; i++) {
callback = this.children[i].deref()?.getCallback(index)
for (let [_, child] of this.children) {
callback = child.getCallback(index)
if (callback) return callback
}
}
@@ -57,6 +65,25 @@ export class CallbackHolder {
callCallback(index: number, args: any[]): Promise<unknown> {
const callback = this.getCallback(index)
if (!callback) return Promise.resolve()
return Promise.resolve().then(() => callback(...args))
return Promise.resolve()
.then(() => callback(...args))
.catch((e) => console.error("callback failed", e))
}
onLeaveContext(fn: Function) {
this.onLeaveContextCallbacks.push(fn)
}
leaveContext() {
for (let [_, child] of this.children) {
child.leaveContext()
}
this.children = new Map()
for (let fn of this.onLeaveContextCallbacks) {
try {
fn()
} catch (e) {
console.warn(e)
}
}
this.onLeaveContextCallbacks = []
}
}


@@ -17,31 +17,25 @@ const Path = string
export type VolumeId = string
export type Path = string
export const matchDockerProcedure = object(
{
type: literal("docker"),
image: string,
system: boolean,
entrypoint: string,
args: array(string),
mounts: dictionary([VolumeId, Path]),
"io-format": literals(
"json",
"json-pretty",
"yaml",
"cbor",
"toml",
"toml-pretty",
),
"sigterm-timeout": some(number, matchDuration),
inject: boolean,
},
["io-format", "sigterm-timeout", "system", "args", "inject", "mounts"],
{
"sigterm-timeout": 30,
inject: false,
args: [],
},
)
export const matchDockerProcedure = object({
type: literal("docker"),
image: string,
system: boolean.optional(),
entrypoint: string,
args: array(string).defaultTo([]),
mounts: dictionary([VolumeId, Path]).optional(),
"io-format": literals(
"json",
"json-pretty",
"yaml",
"cbor",
"toml",
"toml-pretty",
)
.nullable()
.optional(),
"sigterm-timeout": some(number, matchDuration).onMismatch(30),
inject: boolean.defaultTo(false),
})
export type DockerProcedure = typeof matchDockerProcedure._TYPE


@@ -4,8 +4,13 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
if mountpoint tmp/combined; then sudo umount -R tmp/combined; fi
if mountpoint tmp/lower; then sudo umount tmp/lower; fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
if mountpoint -q tmp/combined; then sudo umount -R tmp/combined; fi
if mountpoint -q tmp/lower; then sudo umount tmp/lower; fi
sudo rm -rf tmp
mkdir -p tmp/lower tmp/upper tmp/work tmp/combined
if which squashfuse > /dev/null; then
@@ -13,7 +18,11 @@ if which squashfuse > /dev/null; then
else
sudo mount debian.${ARCH}.squashfs tmp/lower
fi
sudo mount -t overlay -olowerdir=tmp/lower,upperdir=tmp/upper,workdir=tmp/work overlay tmp/combined
if which fuse-overlayfs > /dev/null; then
sudo fuse-overlayfs -olowerdir=tmp/lower,upperdir=tmp/upper,workdir=tmp/work overlay tmp/combined
else
sudo mount -t overlay -olowerdir=tmp/lower,upperdir=tmp/upper,workdir=tmp/work overlay tmp/combined
fi
QEMU=
if [ "$ARCH" != "$(uname -m)" ]; then
@@ -33,8 +42,12 @@ sudo rsync -a --copy-unsafe-links dist/ tmp/combined/usr/lib/startos/init/
sudo chown -R 0:0 tmp/combined/usr/lib/startos/
sudo cp container-runtime.service tmp/combined/lib/systemd/system/container-runtime.service
sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime.service
sudo cp ../core/target/$ARCH-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-cli
sudo chown 0:0 tmp/combined/usr/bin/start-cli
sudo cp container-runtime-failure.service tmp/combined/lib/systemd/system/container-runtime-failure.service
sudo chown 0:0 tmp/combined/lib/systemd/system/container-runtime-failure.service
sudo cp ../core/target/${RUST_ARCH}-unknown-linux-musl/release/containerbox tmp/combined/usr/bin/start-container
echo -e '#!/bin/bash\nexec start-container "$@"' | sudo tee tmp/combined/usr/bin/start-cli # TODO: remove
sudo chmod +x tmp/combined/usr/bin/start-cli
sudo chown 0:0 tmp/combined/usr/bin/start-container
echo container-runtime | sha256sum | head -c 32 | cat - <(echo) | sudo tee tmp/combined/etc/machine-id
cat deb-install.sh | sudo systemd-nspawn --console=pipe -D tmp/combined $QEMU /bin/bash
sudo truncate -s 0 tmp/combined/etc/machine-id

core/Cargo.lock generated

File diff suppressed because it is too large

core/Cross.toml Normal file

@@ -0,0 +1,5 @@
[build]
pre-build = ["apt-get update && apt-get install -y rsync"]
[build.env]
passthrough = ["RUST_BACKTRACE"]


@@ -14,7 +14,7 @@
## Artifacts
The StartOS backend is packed into a single binary `startbox` that is symlinked under
several different names for different behaviour:
several different names for different behavior:
- `startd`: This is the main daemon of StartOS
- `start-cli`: This is a CLI tool that will allow you to issue commands to
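The single-binary, multiple-symlink layout described above can be sketched as an argv[0] dispatch — a minimal illustration of the pattern, not the actual `startbox` entrypoint (the `dispatch` helper and outputs here are hypothetical):

```shell
#!/bin/bash
# Sketch: one binary, several names. The program inspects the name it was
# invoked as (argv[0]) and selects the corresponding behavior.
dispatch() {
  case "$(basename "$1")" in
    startd) echo "daemon" ;;     # main StartOS daemon
    start-cli) echo "cli" ;;     # CLI front-end for issuing commands
    *) echo "unknown" ;;
  esac
}

dispatch /usr/bin/startd
dispatch /usr/bin/start-cli
```

In the real layout, `ln -s startbox startd` and `ln -s startbox start-cli` give each alias its own entry in `$PATH` while shipping a single artifact.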


@@ -5,50 +5,52 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "${ARCH:-}" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
if [ -z "$KERNEL_NAME" ]; then
KERNEL_NAME=$(uname -s)
if [ -z "${KERNEL_NAME:-}" ]; then
KERNEL_NAME=$(uname -s)
fi
if [ -z "$TARGET" ]; then
if [ "$KERNEL_NAME" = "Linux" ]; then
TARGET="$ARCH-unknown-linux-musl"
elif [ "$KERNEL_NAME" = "Darwin" ]; then
TARGET="$ARCH-apple-darwin"
else
>&2 echo "unknown kernel $KERNEL_NAME"
exit 1
fi
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
if [ -z "${TARGET:-}" ]; then
if [ "$KERNEL_NAME" = "Linux" ]; then
TARGET="$ARCH-unknown-linux-musl"
elif [ "$KERNEL_NAME" = "Darwin" ]; then
TARGET="$ARCH-apple-darwin"
else
>&2 echo "unknown kernel $KERNEL_NAME"
exit 1
fi
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
# Ensure GIT_HASH.txt exists if not created by higher-level build steps
if [ ! -f GIT_HASH.txt ] && command -v git >/dev/null 2>&1; then
git rev-parse HEAD > GIT_HASH.txt || true
fi
if which zig > /dev/null && [ "$ENFORCE_USE_DOCKER" != 1 ]; then
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
RUSTFLAGS=$RUSTFLAGS sh -c "cd core && cargo zigbuild --release --no-default-features --features cli,$FEATURES --locked --bin start-cli --target=$TARGET"
else
alias 'rust-zig-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/cargo-zigbuild'
RUSTFLAGS=$RUSTFLAGS rust-zig-builder sh -c "cd core && cargo zigbuild --release --no-default-features --features cli,$FEATURES --locked --bin start-cli --target=$TARGET"
FEATURES="$(echo "${ENVIRONMENT:-}" | sed 's/-/,/g')"
FEATURE_ARGS="cli"
if [ -n "$FEATURES" ]; then
FEATURE_ARGS="$FEATURE_ARGS,$FEATURES"
fi
if [ "$(ls -nd core/target/$TARGET/release/start-cli | awk '{ print $3 }')" != "$UID" ]; then
rust-zig-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
fi
RUSTFLAGS=""
if [[ "${ENVIRONMENT:-}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features $FEATURE_ARGS --locked --bin start-cli --target=$TARGET


@@ -5,6 +5,11 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -13,24 +18,19 @@ if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features container-runtime,$FEATURES --locked --bin containerbox --target=$ARCH-unknown-linux-musl"
if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/containerbox | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-container,$FEATURES --locked --bin containerbox --target=$RUST_ARCH-unknown-linux-musl


@@ -5,6 +5,11 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -13,24 +18,19 @@ if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features cli,registry,$FEATURES --locked --bin registrybox --target=$ARCH-unknown-linux-musl"
if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/registrybox | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-registry,registry,$FEATURES --locked --bin registrybox --target=$RUST_ARCH-unknown-linux-musl


@@ -5,6 +5,11 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -13,24 +18,19 @@ if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo build --release --no-default-features --features cli,daemon,$FEATURES --locked --bin startbox --target=$ARCH-unknown-linux-musl"
if [ "$(ls -nd core/target/$ARCH-unknown-linux-musl/release/startbox | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli,startd,$FEATURES --locked --bin startbox --target=$RUST_ARCH-unknown-linux-musl


@@ -5,32 +5,45 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
ARCH="aarch64"
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
rm -rf core/startos/bindings/
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "cd core && cargo test --release --features=test,$FEATURES 'export_bindings_' && chown \$UID:\$UID startos/bindings"
if [ "$(ls -nd core/startos/bindings | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID startos/bindings && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
cross test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features test,$FEATURES --locked 'export_bindings_'
cd core/startos/bindings
for folder in $(find . -type d); do
(
cd $folder
find . -name '*.ts' -maxdepth 1 | sed 's/\.\/\([^.]*\)\.ts/export * from ".\/\1";/g' | grep -v '"./index"' | tee ./index.ts
find . -mindepth 1 -maxdepth 1 -type d | sed 's/\.\/\([^.]*\)/export * as \1 from ".\/\1";/g' | grep -v '"./index"' | tee -a ./index.ts
)
done
find . -name '*.ts' | xargs -P $(nproc) -I % npm --prefix ../../../sdk exec -- prettier --config ../../../sdk/base/package.json -w %

core/build-tunnelbox.sh Executable file

@@ -0,0 +1,36 @@
#!/bin/bash
cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
if [ "$ARCH" = "arm64" ]; then
ARCH="aarch64"
fi
RUST_ARCH="$ARCH"
if [ "$ARCH" = "riscv64" ]; then
RUST_ARCH="riscv64gc"
fi
cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
cross build --manifest-path=./core/Cargo.toml $BUILD_FLAGS --no-default-features --features cli-tunnel,tunnel,$FEATURES --locked --bin tunnelbox --target=$RUST_ARCH-unknown-linux-musl

core/builder-alias.sh Normal file

@@ -0,0 +1,3 @@
#!/bin/bash
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -e SCCACHE_GHA_ENABLED -e SCCACHE_GHA_VERSION -e ACTIONS_RESULTS_URL -e ACTIONS_RUNTIME_TOKEN -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$HOME/.cache/sccache":/root/.cache/sccache -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-musl-cross:$ARCH-musl'

View File

@@ -0,0 +1,82 @@
use std::sync::Arc;
use color_eyre::Report;
use models::InterfaceId;
use models::PackageId;
use serde_json::Value;
use tokio::sync::mpsc;
pub struct RuntimeDropped;
pub struct Callback {
id: Arc<String>,
sender: mpsc::UnboundedSender<(Arc<String>, Vec<Value>)>,
}
impl Callback {
pub fn new(id: String, sender: mpsc::UnboundedSender<(Arc<String>, Vec<Value>)>) -> Self {
Self {
id: Arc::new(id),
sender,
}
}
pub fn is_listening(&self) -> bool {
!self.sender.is_closed()
}
pub fn call(&self, args: Vec<Value>) -> Result<(), RuntimeDropped> {
self.sender
.send((self.id.clone(), args))
.map_err(|_| RuntimeDropped)
}
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AddressSchemaOnion {
pub id: InterfaceId,
pub external_port: u16,
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct AddressSchemaLocal {
pub id: InterfaceId,
pub external_port: u16,
}
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Address(pub String);
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Domain;
#[derive(serde::Deserialize, serde::Serialize, Debug, Clone)]
#[serde(rename_all = "camelCase")]
pub struct Name;
#[async_trait::async_trait]
#[allow(unused_variables)]
pub trait OsApi: Send + Sync + 'static {
async fn get_service_config(
&self,
id: PackageId,
path: &str,
callback: Option<Callback>,
) -> Result<Vec<Value>, Report>;
async fn bind_local(
&self,
internal_port: u16,
address_schema: AddressSchemaLocal,
) -> Result<Address, Report>;
async fn bind_onion(
&self,
internal_port: u16,
address_schema: AddressSchemaOnion,
) -> Result<Address, Report>;
async fn unbind_local(&self, id: InterfaceId, external: u16) -> Result<(), Report>;
async fn unbind_onion(&self, id: InterfaceId, external: u16) -> Result<(), Report>;
fn set_started(&self) -> Result<(), Report>;
async fn restart(&self) -> Result<(), Report>;
async fn start(&self) -> Result<(), Report>;
async fn stop(&self) -> Result<(), Report>;
}


@@ -16,4 +16,4 @@ if [ "$PLATFORM" = "arm64" ]; then
PLATFORM="aarch64"
fi
cargo install --path=./startos --no-default-features --features=cli,docker,registry --bin start-cli --locked
cargo install --path=./startos --no-default-features --features=cli,docker --bin start-cli --locked


@@ -1,43 +1,46 @@
[package]
edition = "2021"
name = "models"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
arti = ["arti-client"]
[dependencies]
axum = "0.7.5"
base64 = "0.21.4"
arti-client = { version = "0.33", default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
axum = "0.8.4"
base64 = "0.22.1"
color-eyre = "0.6.2"
ed25519-dalek = { version = "2.0.0", features = ["serde"] }
lazy_static = "1.4"
mbrman = "0.5.2"
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
"serde",
] }
gpt = "4.1.0"
ipnet = "2.8.0"
lazy_static = "1.4"
lettre = { version = "0.11", default-features = false }
mbrman = "0.6.0"
miette = "7.6.0"
num_enum = "0.7.1"
openssl = { version = "0.10.57", features = ["vendored"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
"trace",
] }
rand = "0.8.5"
rand = "0.9.1"
regex = "1.10.2"
reqwest = "0.12"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
rustls = "0.23"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
sqlx = { version = "0.7.2", features = [
"chrono",
"runtime-tokio-rustls",
"postgres",
] }
ssh-key = "0.6.2"
ts-rs = { git = "https://github.com/dr-bonez/ts-rs.git", branch = "feature/top-level-as" } # "8"
thiserror = "1.0"
thiserror = "2.0"
tokio = { version = "1", features = ["full"] }
torut = { git = "https://github.com/Start9Labs/torut.git", branch = "update/dependencies" }
torut = "0.2.1"
tracing = "0.1.39"
yasi = "0.1.5"
ts-rs = "9"
typeid = "1"
yasi = { version = "0.1.6", features = ["serde", "ts-rs"] }
zbus = "5"


@@ -1,5 +1,6 @@
use std::borrow::Cow;
use std::path::Path;
use std::str::FromStr;
use base64::Engine;
use color_eyre::eyre::eyre;
@@ -14,28 +15,26 @@ use crate::{mime, Error, ErrorKind, ResultExt};
#[derive(Clone, TS)]
#[ts(type = "string")]
pub struct DataUrl<'a> {
mime: InternedString,
data: Cow<'a, [u8]>,
pub mime: InternedString,
pub data: Cow<'a, [u8]>,
}
impl<'a> DataUrl<'a> {
pub const DEFAULT_MIME: &'static str = "application/octet-stream";
pub const MAX_SIZE: u64 = 100 * 1024;
// data:{mime};base64,{data}
pub fn to_string(&self) -> String {
fn to_string(&self) -> String {
use std::fmt::Write;
let mut res = String::with_capacity(self.data_url_len_without_mime() + self.mime.len());
let _ = write!(res, "data:{};base64,", self.mime);
base64::engine::general_purpose::STANDARD.encode_string(&self.data, &mut res);
let mut res = String::with_capacity(self.len());
write!(&mut res, "{self}").unwrap();
res
}
fn data_url_len_without_mime(&self) -> usize {
fn len_without_mime(&self) -> usize {
5 + 8 + (4 * self.data.len() / 3) + 3
}
pub fn data_url_len(&self) -> usize {
self.data_url_len_without_mime() + self.mime.len()
pub fn len(&self) -> usize {
self.len_without_mime() + self.mime.len()
}
pub fn from_slice(mime: &str, data: &'a [u8]) -> Self {
@@ -44,6 +43,10 @@ impl<'a> DataUrl<'a> {
data: Cow::Borrowed(data),
}
}
pub fn canonical_ext(&self) -> Option<&'static str> {
mime::unmime(&self.mime)
}
}
impl DataUrl<'static> {
pub async fn from_reader(
@@ -109,12 +112,57 @@ impl DataUrl<'static> {
}
}
impl<'a> std::fmt::Display for DataUrl<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"data:{};base64,{}",
self.mime,
base64::display::Base64Display::new(
&*self.data,
&base64::engine::general_purpose::STANDARD
)
)
}
}
impl<'a> std::fmt::Debug for DataUrl<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_str(&self.to_string())
std::fmt::Display::fmt(self, f)
}
}
#[derive(Debug)]
pub struct DataUrlParseError;
impl std::fmt::Display for DataUrlParseError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "invalid base64 url")
}
}
impl std::error::Error for DataUrlParseError {}
impl From<DataUrlParseError> for Error {
fn from(e: DataUrlParseError) -> Self {
Error::new(e, ErrorKind::ParseUrl)
}
}
impl FromStr for DataUrl<'static> {
type Err = DataUrlParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
s.strip_prefix("data:")
.and_then(|v| v.split_once(";base64,"))
.and_then(|(mime, data)| {
Some(DataUrl {
mime: InternedString::intern(mime),
data: Cow::Owned(
base64::engine::general_purpose::STANDARD
.decode(data)
.ok()?,
),
})
})
.ok_or(DataUrlParseError)
}
}
impl<'de> Deserialize<'de> for DataUrl<'static> {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
@@ -130,21 +178,9 @@ impl<'de> Deserialize<'de> for DataUrl<'static> {
where
E: serde::de::Error,
{
v.strip_prefix("data:")
.and_then(|v| v.split_once(";base64,"))
.and_then(|(mime, data)| {
Some(DataUrl {
mime: InternedString::intern(mime),
data: Cow::Owned(
base64::engine::general_purpose::STANDARD
.decode(data)
.ok()?,
),
})
})
.ok_or_else(|| {
E::invalid_value(serde::de::Unexpected::Str(v), &"a valid base64 data url")
})
v.parse().map_err(|_| {
E::invalid_value(serde::de::Unexpected::Str(v), &"a valid base64 data url")
})
}
}
deserializer.deserialize_any(Visitor)
@@ -168,6 +204,6 @@ fn doesnt_reallocate() {
mime: InternedString::intern("png"),
data: Cow::Borrowed(&random[..i]),
};
assert_eq!(icon.to_string().capacity(), icon.data_url_len());
assert_eq!(icon.to_string().capacity(), icon.len());
}
}


@@ -10,6 +10,7 @@ use rpc_toolkit::yajrc::{
RpcError, INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR,
};
use serde::{Deserialize, Serialize};
use tokio::task::JoinHandle;
use crate::InvalidId;
@@ -91,6 +92,9 @@ pub enum ErrorKind {
Cancelled = 73,
Git = 74,
DBus = 75,
InstallFailed = 76,
UpdateFailed = 77,
Smtp = 78,
}
impl ErrorKind {
pub fn as_str(&self) -> &'static str {
@@ -171,6 +175,9 @@ impl ErrorKind {
Cancelled => "Cancelled",
Git => "Git Error",
DBus => "DBus Error",
InstallFailed => "Install Failed",
UpdateFailed => "Update Failed",
Smtp => "SMTP Error",
}
}
}
@@ -180,37 +187,64 @@ impl Display for ErrorKind {
}
}
#[derive(Debug)]
pub struct Error {
pub source: color_eyre::eyre::Error,
pub debug: Option<color_eyre::eyre::Error>,
pub kind: ErrorKind,
pub revision: Option<Revision>,
pub task: Option<JoinHandle<()>>,
}
impl Display for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}: {}", self.kind.as_str(), self.source)
write!(f, "{}: {:#}", self.kind.as_str(), self.source)
}
}
impl Debug for Error {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}: {:?}",
self.kind.as_str(),
self.debug.as_ref().unwrap_or(&self.source)
)
}
}
impl Error {
pub fn new<E: Into<color_eyre::eyre::Error>>(source: E, kind: ErrorKind) -> Self {
pub fn new<E: Into<color_eyre::eyre::Error> + std::fmt::Debug + 'static>(
source: E,
kind: ErrorKind,
) -> Self {
let debug = (typeid::of::<E>() == typeid::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{source:?}"));
Error {
source: source.into(),
debug,
kind,
revision: None,
task: None,
}
}
pub fn clone_output(&self) -> Self {
Error {
source: ErrorData {
details: format!("{}", self.source),
debug: format!("{:?}", self.source),
}
.into(),
source: eyre!("{}", self.source),
debug: self.debug.as_ref().map(|e| eyre!("{e}")),
kind: self.kind,
revision: self.revision.clone(),
task: None,
}
}
pub fn with_task(mut self, task: JoinHandle<()>) -> Self {
self.task = Some(task);
self
}
pub async fn wait(mut self) -> Self {
if let Some(task) = &mut self.task {
task.await.log_err();
}
self.task.take();
self
}
}
impl axum::response::IntoResponse for Error {
fn into_response(self) -> axum::response::Response {
@@ -269,11 +303,6 @@ impl From<patch_db::Error> for Error {
Error::new(e, ErrorKind::Database)
}
}
impl From<sqlx::Error> for Error {
fn from(e: sqlx::Error) -> Self {
Error::new(e, ErrorKind::Database)
}
}
impl From<ed25519_dalek::SignatureError> for Error {
fn from(e: ed25519_dalek::SignatureError) -> Self {
Error::new(e, ErrorKind::InvalidSignature)
@@ -284,11 +313,6 @@ impl From<std::net::AddrParseError> for Error {
Error::new(e, ErrorKind::ParseNetAddress)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<ipnet::AddrParseError> for Error {
fn from(e: ipnet::AddrParseError) -> Self {
Error::new(e, ErrorKind::ParseNetAddress)
@@ -304,6 +328,16 @@ impl From<mbrman::Error> for Error {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::GptError> for Error {
fn from(e: gpt::GptError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<gpt::mbr::MBRError> for Error {
fn from(e: gpt::mbr::MBRError) -> Self {
Error::new(e, ErrorKind::DiskManagement)
}
}
impl From<InvalidUri> for Error {
fn from(e: InvalidUri) -> Self {
Error::new(eyre!("{}", e), ErrorKind::ParseUrl)
@@ -324,8 +358,14 @@ impl From<reqwest::Error> for Error {
Error::new(e, kind)
}
}
impl From<torut::onion::OnionAddressParseError> for Error {
fn from(e: torut::onion::OnionAddressParseError) -> Self {
#[cfg(feature = "arti")]
impl From<arti_client::Error> for Error {
fn from(e: arti_client::Error) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
impl From<torut::control::ConnError> for Error {
fn from(e: torut::control::ConnError) -> Self {
Error::new(e, ErrorKind::Tor)
}
}
@@ -339,6 +379,21 @@ impl From<rustls::Error> for Error {
Error::new(e, ErrorKind::OpenSsl)
}
}
impl From<lettre::error::Error> for Error {
fn from(e: lettre::error::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::transport::smtp::Error> for Error {
fn from(e: lettre::transport::smtp::Error) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<lettre::address::AddressError> for Error {
fn from(e: lettre::address::AddressError) -> Self {
Error::new(e, ErrorKind::Smtp)
}
}
impl From<patch_db::value::Error> for Error {
fn from(value: patch_db::value::Error) -> Self {
match value.kind {
@@ -520,25 +575,26 @@ where
impl<T, E> ResultExt<T, E> for Result<T, E>
where
color_eyre::eyre::Error: From<E>,
E: std::fmt::Debug + 'static,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error {
source: e.into(),
kind,
revision: None,
})
self.map_err(|e| Error::new(e, kind))
}
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let debug = (typeid::of::<E>() == typeid::of::<color_eyre::eyre::Error>())
.then(|| eyre!("{ctx}: {e:?}"));
let source = color_eyre::eyre::Error::from(e);
let ctx = format!("{}: {}", ctx, source);
let source = source.wrap_err(ctx);
let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(with_ctx);
Error {
kind,
source,
debug,
revision: None,
task: None,
}
})
}
@@ -557,23 +613,24 @@ where
}
impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error {
source: e.source,
kind,
revision: e.revision,
})
self.map_err(|e| Error { kind, ..e })
}
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = e.source;
let ctx = format!("{}: {}", ctx, source);
let source = source.wrap_err(ctx);
let with_ctx = format!("{ctx}: {source}");
let source = source.wrap_err(with_ctx);
let debug = e.debug.map(|e| {
let with_ctx = format!("{ctx}: {e}");
e.wrap_err(with_ctx)
});
Error {
kind,
source,
revision: e.revision,
debug,
..e
}
})
}


@@ -0,0 +1,60 @@
use std::convert::Infallible;
use std::path::Path;
use std::str::FromStr;
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use yasi::InternedString;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
#[ts(type = "string")]
pub struct GatewayId(InternedString);
impl GatewayId {
pub fn as_str(&self) -> &str {
&*self.0
}
}
impl From<InternedString> for GatewayId {
fn from(value: InternedString) -> Self {
Self(value)
}
}
impl From<GatewayId> for InternedString {
fn from(value: GatewayId) -> Self {
value.0
}
}
impl FromStr for GatewayId {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(GatewayId(InternedString::intern(s)))
}
}
impl AsRef<GatewayId> for GatewayId {
fn as_ref(&self) -> &GatewayId {
self
}
}
impl std::fmt::Display for GatewayId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", &self.0)
}
}
impl AsRef<str> for GatewayId {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
impl AsRef<Path> for GatewayId {
fn as_ref(&self) -> &Path {
self.0.as_ref()
}
}
impl<'de> Deserialize<'de> for GatewayId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::de::Deserializer<'de>,
{
Ok(GatewayId(serde::Deserialize::deserialize(deserializer)?))
}
}


@@ -60,20 +60,3 @@ impl AsRef<Path> for HostId {
self.0.as_ref().as_ref()
}
}
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for HostId {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::database::HasArguments<'q>>::ArgumentBuffer,
) -> sqlx::encode::IsNull {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for HostId {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}


@@ -6,6 +6,7 @@ use serde::{Deserialize, Deserializer, Serialize, Serializer};
use yasi::InternedString;
mod action;
mod gateway;
mod health_check;
mod host;
mod image;
@@ -16,6 +17,7 @@ mod service_interface;
mod volume;
pub use action::ActionId;
pub use gateway::GatewayId;
pub use health_check::HealthCheckId;
pub use host::HostId;
pub use image::ImageId;
@@ -116,20 +118,3 @@ impl Serialize for Id {
serializer.serialize_str(self)
}
}
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for Id {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::database::HasArguments<'q>>::ArgumentBuffer,
) -> sqlx::encode::IsNull {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for Id {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}


@@ -87,20 +87,3 @@ impl Serialize for PackageId {
Serialize::serialize(&self.0, serializer)
}
}
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for PackageId {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::database::HasArguments<'q>>::ArgumentBuffer,
) -> sqlx::encode::IsNull {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for PackageId {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}


@@ -9,6 +9,14 @@ use yasi::InternedString;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
#[ts(type = "string")]
pub struct ReplayId(InternedString);
impl<T> From<T> for ReplayId
where
T: Into<InternedString>,
{
fn from(value: T) -> Self {
Self(value.into())
}
}
impl FromStr for ReplayId {
type Err = Infallible;
fn from_str(s: &str) -> Result<Self, Self::Err> {


@@ -8,7 +8,7 @@ use ts_rs::TS;
use crate::{FromStrParser, Id};
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, TS)]
#[ts(export, type = "string")]
#[ts(type = "string")]
pub struct ServiceInterfaceId(Id);
impl From<Id> for ServiceInterfaceId {
fn from(id: Id) -> Self {
@@ -44,23 +44,6 @@ impl AsRef<Path> for ServiceInterfaceId {
self.0.as_ref().as_ref()
}
}
impl<'q> sqlx::Encode<'q, sqlx::Postgres> for ServiceInterfaceId {
fn encode_by_ref(
&self,
buf: &mut <sqlx::Postgres as sqlx::database::HasArguments<'q>>::ArgumentBuffer,
) -> sqlx::encode::IsNull {
<&str as sqlx::Encode<'q, sqlx::Postgres>>::encode_by_ref(&&**self, buf)
}
}
impl sqlx::Type<sqlx::Postgres> for ServiceInterfaceId {
fn type_info() -> sqlx::postgres::PgTypeInfo {
<&str as sqlx::Type<sqlx::Postgres>>::type_info()
}
fn compatible(ty: &sqlx::postgres::PgTypeInfo) -> bool {
<&str as sqlx::Type<sqlx::Postgres>>::compatible(ty)
}
}
impl FromStr for ServiceInterfaceId {
type Err = <Id as FromStr>::Err;
fn from_str(s: &str) -> Result<Self, Self::Err> {


@@ -1,10 +1,11 @@
use std::borrow::Borrow;
use std::path::Path;
use std::str::FromStr;
use serde::{Deserialize, Deserializer, Serialize};
use ts_rs::TS;
use crate::Id;
use crate::{Id, InvalidId};
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, TS)]
#[ts(type = "string")]
@@ -12,6 +13,15 @@ pub enum VolumeId {
Backup,
Custom(Id),
}
impl FromStr for VolumeId {
type Err = InvalidId;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(match s {
"BACKUP" => VolumeId::Backup,
s => VolumeId::Custom(Id::try_from(s.to_owned())?),
})
}
}
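The new `FromStr` impl reserves the literal `"BACKUP"` for the `Backup` variant and parses everything else as a custom ID. A self-contained sketch with a simplified validity check standing in for `Id::try_from` (the real `Id` type enforces the crate's stricter ID grammar):

```rust
use std::str::FromStr;

#[derive(Debug, PartialEq)]
enum VolumeId {
    Backup,
    Custom(String),
}

#[derive(Debug, PartialEq)]
struct InvalidId;

impl FromStr for VolumeId {
    type Err = InvalidId;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "BACKUP" => Ok(VolumeId::Backup),
            // Stand-in validation: the real Id::try_from is stricter.
            s if !s.is_empty()
                && s.chars().all(|c| c.is_ascii_alphanumeric() || c == '-') =>
            {
                Ok(VolumeId::Custom(s.to_owned()))
            }
            _ => Err(InvalidId),
        }
    }
}

fn main() {
    assert_eq!("BACKUP".parse::<VolumeId>(), Ok(VolumeId::Backup));
    assert_eq!("data".parse::<VolumeId>(), Ok(VolumeId::Custom("data".into())));
    assert_eq!("".parse::<VolumeId>(), Err(InvalidId));
    println!("ok");
}
```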
impl std::fmt::Display for VolumeId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {


@@ -1,164 +0,0 @@
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::time::Duration;
use color_eyre::eyre::bail;
use container_init::{Input, Output, ProcessId, RpcId};
use tokio::sync::mpsc::{UnboundedReceiver, UnboundedSender};
use tokio::sync::Mutex;
/// Used by the js-executor: the ability to create a command in an already running exec

pub type ExecCommand = Arc<
dyn Fn(
String,
Vec<String>,
UnboundedSender<container_init::Output>,
Option<Duration>,
) -> Pin<Box<dyn Future<Output = Result<RpcId, String>> + 'static>>
+ Send
+ Sync
+ 'static,
>;
/// Used by the js-executor: the ability to send a kill signal to a command in an already running exec
pub type SendKillSignal = Arc<
dyn Fn(RpcId, u32) -> Pin<Box<dyn Future<Output = Result<(), String>> + 'static>>
+ Send
+ Sync
+ 'static,
>;
pub trait CommandInserter {
fn insert_command(
&self,
command: String,
args: Vec<String>,
sender: UnboundedSender<container_init::Output>,
timeout: Option<Duration>,
) -> Pin<Box<dyn Future<Output = Option<RpcId>>>>;
fn send_signal(&self, id: RpcId, command: u32) -> Pin<Box<dyn Future<Output = ()>>>;
}
pub type ArcCommandInserter = Arc<Mutex<Option<Box<dyn CommandInserter>>>>;
pub struct ExecutingCommand {
rpc_id: RpcId,
/// Will exist until killed
command_inserter: Arc<Mutex<Option<ArcCommandInserter>>>,
owned_futures: Arc<Mutex<Vec<Pin<Box<dyn Future<Output = ()>>>>>>,
}
impl ExecutingCommand {
pub async fn new(
command_inserter: ArcCommandInserter,
command: String,
args: Vec<String>,
timeout: Option<Duration>,
) -> Result<ExecutingCommand, color_eyre::Report> {
let (sender, receiver) = tokio::sync::mpsc::unbounded_channel::<Output>();
let rpc_id = {
let locked_command_inserter = command_inserter.lock().await;
let locked_command_inserter = match &*locked_command_inserter {
Some(a) => a,
None => bail!("Expecting containers.main in the package manifest".to_string()),
};
match locked_command_inserter
.insert_command(command, args, sender, timeout)
.await
{
Some(a) => a,
None => bail!("Couldn't get command started"),
}
};
let executing_commands = ExecutingCommand {
rpc_id,
command_inserter: Arc::new(Mutex::new(Some(command_inserter.clone()))),
owned_futures: Default::default(),
};
// let waiting = self.wait()
Ok(executing_commands)
}
async fn wait(
rpc_id: RpcId,
mut outputs: UnboundedReceiver<Output>,
) -> Result<String, (Option<i32>, String)> {
let (process_id_send, process_id_recv) = tokio::sync::oneshot::channel::<ProcessId>();
let mut answer = String::new();
let mut command_error = String::new();
let mut status: Option<i32> = None;
let mut process_id_send = Some(process_id_send);
while let Some(output) = outputs.recv().await {
match output {
Output::ProcessId(process_id) => {
if let Some(process_id_send) = process_id_send.take() {
if let Err(err) = process_id_send.send(process_id) {
tracing::error!(
"Could not send process id {process_id:?} for {rpc_id:?}"
);
tracing::debug!("{err:?}");
}
}
}
Output::Line(value) => {
answer.push_str(&value);
answer.push('\n');
}
Output::Error(error) => {
command_error.push_str(&error);
command_error.push('\n');
}
Output::Done(error_code) => {
status = error_code;
break;
}
}
}
if !command_error.is_empty() {
return Err((status, command_error));
}
Ok(answer)
}
async fn send_signal(&self, signal: u32) {
let locked = self.command_inserter.lock().await;
let inner = match &*locked {
Some(a) => a,
None => return,
};
let locked = inner.lock().await;
let command_inserter = match &*locked {
Some(a) => a,
None => return,
};
command_inserter.send_signal(self.rpc_id, signal);
}
/// Should only be called when output::done
async fn killed(&self) {
*self.owned_futures.lock().await = Default::default();
*self.command_inserter.lock().await = Default::default();
}
pub fn rpc_id(&self) -> RpcId {
self.rpc_id
}
}
impl Drop for ExecutingCommand {
fn drop(&mut self) {
let command_inserter = self.command_inserter.clone();
let rpc_id = self.rpc_id.clone();
tokio::spawn(async move {
let command_inserter_lock = command_inserter.lock().await;
let command_inserter = match &*command_inserter_lock {
Some(a) => a,
None => {
return;
}
};
command_inserter.send_kill_command(rpc_id, 9).await;
});
}
}
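The removed `ExecutingCommand::wait` folds a stream of `Output` events into a final result: stdout lines accumulate into the answer, stderr lines into an error buffer, and `Done` carries the exit status. A synchronous sketch of the same fold over a plain iterator (the event enum is simplified from `container_init::Output`):

```rust
// Simplified event type standing in for container_init::Output.
enum Output {
    Line(String),
    Error(String),
    Done(Option<i32>),
}

// Fold events into Ok(stdout) or Err((status, stderr)), mirroring wait().
fn collect_output(
    events: impl IntoIterator<Item = Output>,
) -> Result<String, (Option<i32>, String)> {
    let mut answer = String::new();
    let mut command_error = String::new();
    let mut status = None;
    for event in events {
        match event {
            Output::Line(line) => {
                answer.push_str(&line);
                answer.push('\n');
            }
            Output::Error(err) => {
                command_error.push_str(&err);
                command_error.push('\n');
            }
            Output::Done(code) => {
                status = code;
                break;
            }
        }
    }
    if !command_error.is_empty() {
        return Err((status, command_error));
    }
    Ok(answer)
}

fn main() {
    let ok = collect_output([Output::Line("hello".into()), Output::Done(Some(0))]);
    assert_eq!(ok, Ok("hello\n".to_string()));
    let err = collect_output([Output::Error("boom".into()), Output::Done(Some(1))]);
    assert_eq!(err, Err((Some(1), "boom\n".to_string())));
    println!("ok");
}
```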


@@ -4,25 +4,15 @@ use crate::ActionId;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ProcedureName {
GetConfig,
SetConfig,
CreateBackup,
RestoreBackup,
GetActionInput(ActionId),
RunAction(ActionId),
PackageInit,
PackageUninit,
}
impl ProcedureName {
pub fn js_function_name(&self) -> String {
match self {
ProcedureName::PackageInit => "/packageInit".to_string(),
ProcedureName::PackageUninit => "/packageUninit".to_string(),
ProcedureName::SetConfig => "/config/set".to_string(),
ProcedureName::GetConfig => "/config/get".to_string(),
ProcedureName::CreateBackup => "/backup/create".to_string(),
ProcedureName::RestoreBackup => "/backup/restore".to_string(),
ProcedureName::RunAction(id) => format!("/actions/{}/run", id),
ProcedureName::GetActionInput(id) => format!("/actions/{}/getInput", id),
}
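`js_function_name` maps each procedure variant to the route path the container runtime dispatches on. A sketch of the trimmed-down mapping, assuming the config/backup variants are the ones removed in this hunk (the `String` action ID stands in for `ActionId`):

```rust
#[derive(Debug, Clone)]
enum ProcedureName {
    GetActionInput(String),
    RunAction(String),
    PackageInit,
    PackageUninit,
}

impl ProcedureName {
    // Route path the container runtime dispatches on.
    fn js_function_name(&self) -> String {
        match self {
            ProcedureName::PackageInit => "/packageInit".to_string(),
            ProcedureName::PackageUninit => "/packageUninit".to_string(),
            ProcedureName::RunAction(id) => format!("/actions/{}/run", id),
            ProcedureName::GetActionInput(id) => format!("/actions/{}/getInput", id),
        }
    }
}

fn main() {
    assert_eq!(
        ProcedureName::RunAction("restart".into()).js_function_name(),
        "/actions/restart/run"
    );
    assert_eq!(
        ProcedureName::GetActionInput("restart".into()).js_function_name(),
        "/actions/restart/getInput"
    );
    assert_eq!(ProcedureName::PackageInit.js_function_name(), "/packageInit");
    println!("ok");
}
```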


@@ -5,6 +5,11 @@ cd "$(dirname "${BASH_SOURCE[0]}")"
set -ea
shopt -s expand_aliases
PROFILE=${PROFILE:-release}
if [ "${PROFILE}" = "release" ]; then
BUILD_FLAGS="--release"
fi
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
@@ -22,15 +27,12 @@ cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
if [[ "${ENVIRONMENT}" =~ (^|-)console($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$HOME/.cargo/git":/root/.cargo/git -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
source ./core/builder-alias.sh
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
rust-musl-builder sh -c "apt-get update && apt-get install -y rsync && cd core && cargo test --release --features=test,$FEATURES --workspace --locked --target=$ARCH-unknown-linux-musl -- --skip export_bindings_ && chown \$UID:\$UID target"
if [ "$(ls -nd core/target | awk '{ print $3 }')" != "$UID" ]; then
rust-musl-builder sh -c "cd core && chown -R $UID:$UID target && chown -R $UID:$UID /root/.cargo"
fi
cross test --manifest-path=./core/Cargo.toml $BUILD_FLAGS --features=test,$FEATURES --workspace --locked --target=$ARCH-unknown-linux-musl -- --skip export_bindings_


@@ -2,20 +2,20 @@
authors = ["Aiden McClelland <me@drbonez.dev>"]
description = "The core of StartOS"
documentation = "https://docs.rs/start-os"
edition = "2021"
edition = "2024"
keywords = [
"self-hosted",
"raspberry-pi",
"privacy",
"bitcoin",
"full-node",
"lightning",
"privacy",
"raspberry-pi",
"self-hosted",
]
license = "MIT"
name = "start-os"
readme = "README.md"
repository = "https://github.com/Start9Labs/start-os"
version = "0.3.6-alpha.18" # VERSION_BUMP
license = "MIT"
version = "0.4.0-alpha.12" # VERSION_BUMP
[lib]
name = "startos"
@@ -37,33 +37,66 @@ path = "src/main.rs"
name = "registrybox"
path = "src/main.rs"
[[bin]]
name = "tunnelbox"
path = "src/main.rs"
[features]
cli = []
container-runtime = ["procfs", "tty-spawn"]
daemon = ["mail-send"]
registry = []
default = ["cli", "daemon", "registry", "container-runtime"]
dev = []
unstable = ["console-subscriber", "tokio/tracing"]
arti = [
"arti-client",
"models/arti",
"safelog",
"tor-cell",
"tor-hscrypto",
"tor-hsservice",
"tor-keymgr",
"tor-llcrypto",
"tor-proto",
"tor-rtcompat",
]
cli = ["cli-registry", "cli-startd", "cli-tunnel"]
cli-container = ["procfs", "pty-process"]
cli-registry = []
cli-startd = []
cli-tunnel = []
console = ["console-subscriber", "tokio/tracing"]
default = ["cli", "cli-container", "registry", "startd", "tunnel"]
dev = ["backtrace-on-stack-overflow"]
docker = []
registry = []
startd = []
test = []
tunnel = []
unstable = ["backtrace-on-stack-overflow"]
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
arti-client = { version = "0.33", features = [
"compression",
"ephemeral-keystore",
"experimental-api",
"onion-service-client",
"onion-service-service",
"rustls",
"static",
"tokio",
], default-features = false, git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
async-acme = { version = "0.6.0", git = "https://github.com/dr-bonez/async-acme.git", features = [
"use_rustls",
"use_tokio",
] }
async-compression = { version = "0.4.4", features = [
"gzip",
async-compression = { version = "0.4.32", features = [
"brotli",
"gzip",
"tokio",
"zstd",
] }
async-stream = "0.3.5"
async-trait = "0.1.74"
axum = { version = "0.7.3", features = ["ws"] }
aws-lc-sys = { version = "0.32", features = ["bindgen"] }
axum = { version = "0.8.4", features = ["ws"] }
backtrace-on-stack-overflow = { version = "0.3.0", optional = true }
barrage = "0.2.3"
backhand = "0.18.0"
base32 = "0.5.0"
base64 = "0.22.1"
base64ct = "1.6.0"
@@ -74,20 +107,23 @@ chrono = { version = "0.4.31", features = ["serde"] }
clap = { version = "4.4.12", features = ["string"] }
color-eyre = "0.6.2"
console = "0.15.7"
console-subscriber = { version = "0.3.0", optional = true }
console-subscriber = { version = "0.4.1", optional = true }
const_format = "0.2.34"
cookie = "0.18.0"
cookie_store = "0.21.0"
curve25519-dalek = "4.1.3"
der = { version = "0.7.9", features = ["derive", "pem"] }
digest = "0.10.7"
divrem = "1.0.0"
ed25519 = { version = "2.2.3", features = ["pkcs8", "pem", "alloc"] }
ed25519-dalek = { version = "2.1.1", features = [
dns-lookup = "2.1.0"
ed25519 = { version = "2.2.3", features = ["alloc", "pem", "pkcs8"] }
ed25519-dalek = { version = "2.2.0", features = [
"digest",
"hazmat",
"pkcs8",
"rand_core",
"serde",
"zeroize",
"rand_core",
"digest",
"pkcs8",
] }
ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" }
exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git", features = [
@@ -96,50 +132,63 @@ exver = { version = "0.2.0", git = "https://github.com/Start9Labs/exver-rs.git",
fd-lock-rs = "0.1.4"
form_urlencoded = "1.2.1"
futures = "0.3.28"
gpt = "3.1.0"
gpt = "4.1.0"
helpers = { path = "../helpers" }
hex = "0.4.3"
hickory-client = "0.25.2"
hickory-server = "0.25.2"
hmac = "0.12.1"
http = "1.0.0"
http-body-util = "0.1"
hyper = { version = "1.5", features = ["server", "http1", "http2"] }
hyper = { version = "1.5", features = ["http1", "http2", "server"] }
hyper-util = { version = "0.1.10", features = [
"http1",
"http2",
"server",
"server-auto",
"server-graceful",
"service",
"http1",
"http2",
"tokio",
] }
id-pool = { version = "0.2.2", default-features = false, features = [
"serde",
"u16",
] }
imbl = "2.0.3"
imbl-value = "0.1.2"
iddqd = "0.3.14"
imbl = { version = "6", features = ["serde", "small-chunks"] }
imbl-value = { version = "0.4.3", features = ["ts-rs"] }
include_dir = { version = "0.7.3", features = ["metadata"] }
indexmap = { version = "2.0.2", features = ["serde"] }
indicatif = { version = "0.17.7", features = ["tokio"] }
inotify = "0.11.0"
integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
ipnet = { version = "2.8.0", features = ["serde"] }
iprange = { version = "0.6.7", features = ["serde"] }
isocountry = "0.3.2"
itertools = "0.13.0"
itertools = "0.14.0"
jaq-core = "0.10.1"
jaq-std = "0.10.0"
josekit = "0.8.4"
josekit = "0.10.3"
jsonpath_lib = { git = "https://github.com/Start9Labs/jsonpath.git" }
lazy_async_pool = "0.3.3"
lazy_format = "2.0"
lazy_static = "1.4.0"
lettre = { version = "0.11.18", default-features = false, features = [
"aws-lc-rs",
"builder",
"hostname",
"pool",
"rustls-platform-verifier",
"smtp-transport",
"tokio1-rustls",
] }
libc = "0.2.149"
log = "0.4.20"
mbrman = "0.6.0"
miette = { version = "7.6.0", features = ["fancy"] }
mio = "1"
mbrman = "0.5.2"
models = { version = "*", path = "../models" }
new_mime_guess = "4"
nix = { version = "0.29.0", features = [
nix = { version = "0.30.1", features = [
"fs",
"mount",
"net",
@@ -148,10 +197,10 @@ nix = { version = "0.29.0", features = [
"signal",
"user",
] }
nom = "7.1.3"
nom = "8.0.0"
num = "0.4.1"
num_enum = "0.7.0"
num_cpus = "1.16.0"
num_enum = "0.7.0"
once_cell = "1.19.0"
openssh-keys = "0.6.2"
openssl = { version = "0.10.57", features = ["vendored"] }
@@ -163,70 +212,84 @@ pbkdf2 = "0.12.2"
pin-project = "1.1.3"
pkcs8 = { version = "0.10.2", features = ["std"] }
prettytable-rs = "0.10.0"
procfs = { version = "0.16.0", optional = true }
procfs = { version = "0.17.0", optional = true }
proptest = "1.3.1"
proptest-derive = "0.5.0"
pty-process = { version = "0.5.1", optional = true }
qrcode = "0.14.1"
rand = "0.9.0"
r3bl_tui = "0.7.6"
rand = "0.9.2"
regex = "1.10.2"
reqwest = { version = "0.12.4", features = ["stream", "json", "socks"] }
reqwest = { version = "0.12.4", features = ["json", "socks", "stream"] }
reqwest_cookie_store = "0.8.0"
rpassword = "7.2.0"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "master" }
rust-argon2 = "2.0.0"
rustyline-async = "0.4.1"
safelog = { version = "0.4.8", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" }
serde_json = "1.0"
serde_toml = { package = "toml", version = "0.8.2" }
serde_urlencoded = "0.7"
serde_with = { version = "3.4.0", features = ["macros", "json"] }
serde_yaml = { package = "serde_yml", version = "0.0.10" }
serde_with = { version = "3.4.0", features = ["json", "macros"] }
serde_yaml = { package = "serde_yml", version = "0.0.12" }
sha-crypt = "0.5.0"
sha2 = "0.10.2"
shell-words = "1"
signal-hook = "0.3.17"
simple-logging = "2.0.2"
socket2 = "0.5.7"
sqlx = { version = "0.7.2", features = [
"chrono",
"runtime-tokio-rustls",
socket2 = { version = "0.6.0", features = ["all"] }
socks5-impl = { version = "0.7.2", features = ["client", "server"] }
sqlx = { version = "0.8.6", features = [
"postgres",
] }
"runtime-tokio-rustls",
], default-features = false }
sscanf = "0.4.1"
ssh-key = { version = "0.6.2", features = ["ed25519"] }
tar = "0.4.40"
thiserror = "1.0.49"
termion = "4.0.5"
textwrap = "0.16.1"
thiserror = "2.0.12"
tokio = { version = "1.38.1", features = ["full"] }
tokio-rustls = "0.26.0"
tokio-socks = "0.5.1"
tokio-stream = { version = "0.1.14", features = ["io-util", "sync", "net"] }
tokio-stream = { version = "0.1.14", features = ["io-util", "net", "sync"] }
tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
tokio-tungstenite = { version = "0.23.1", features = ["native-tls", "url"] }
tokio-tungstenite = { version = "0.26.2", features = ["native-tls", "url"] }
tokio-util = { version = "0.7.9", features = ["io"] }
torut = { git = "https://github.com/Start9Labs/torut.git", branch = "update/dependencies", features = [
"serialize",
] }
tor-cell = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hscrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-hsservice = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-keymgr = { version = "0.33", features = [
"ephemeral-keystore",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-llcrypto = { version = "0.33", features = [
"full",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-proto = { version = "0.33", git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
tor-rtcompat = { version = "0.33", features = [
"rustls",
"tokio",
], git = "https://github.com/Start9Labs/arti.git", branch = "patch/disable-exit", optional = true }
torut = "0.2.1"
tower-service = "0.3.3"
tracing = "0.1.39"
tracing-error = "0.2.0"
tracing-futures = "0.2.5"
tracing-journald = "0.3.0"
tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
trust-dns-server = "0.23.1"
ts-rs = { git = "https://github.com/dr-bonez/ts-rs.git", branch = "feature/top-level-as" } # "8.1.0"
tty-spawn = { version = "0.4.0", optional = true }
typed-builder = "0.18.0"
ts-rs = { version = "9.0.1", features = ["chrono-impl"] }
typed-builder = "0.21.0"
unix-named-pipe = "0.2.0"
url = { version = "2.4.1", features = ["serde"] }
urlencoding = "2.1.3"
uuid = { version = "1.4.1", features = ["v4"] }
visit-rs = "0.1.1"
x25519-dalek = { version = "2.0.1", features = ["static_secrets"] }
zbus = "5.1.1"
zeroize = "1.6.0"
mail-send = { git = "https://github.com/dr-bonez/mail-send.git", branch = "main", optional = true }
rustls = "0.23.20"
rustls-pki-types = { version = "1.10.1", features = ["alloc"] }
[profile.test]
opt-level = 3


@@ -1,12 +1,14 @@
use std::collections::BTreeMap;
use std::time::SystemTime;
use imbl_value::InternedString;
use openssl::pkey::{PKey, Private};
use openssl::x509::X509;
use torut::onion::TorSecretKeyV3;
use crate::db::model::DatabaseModel;
use crate::hostname::{generate_hostname, generate_id, Hostname};
use crate::hostname::{Hostname, generate_hostname, generate_id};
use crate::net::ssl::{generate_key, make_root_cert};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
use crate::util::serde::Pem;
@@ -19,28 +21,28 @@ fn hash_password(password: &str) -> Result<String, Error> {
.with_kind(crate::ErrorKind::PasswordHashGeneration)
}
#[derive(Debug, Clone)]
#[derive(Clone)]
pub struct AccountInfo {
pub server_id: String,
pub hostname: Hostname,
pub password: String,
pub tor_keys: Vec<TorSecretKeyV3>,
pub tor_keys: Vec<TorSecretKey>,
pub root_ca_key: PKey<Private>,
pub root_ca_cert: X509,
pub ssh_key: ssh_key::PrivateKey,
pub compat_s9pk_key: ed25519_dalek::SigningKey,
pub developer_key: ed25519_dalek::SigningKey,
}
impl AccountInfo {
pub fn new(password: &str, start_time: SystemTime) -> Result<Self, Error> {
let server_id = generate_id();
let hostname = generate_hostname();
let tor_key = vec![TorSecretKeyV3::generate()];
let tor_key = vec![TorSecretKey::generate()];
let root_ca_key = generate_key()?;
let root_ca_cert = make_root_cert(&root_ca_key, &hostname, start_time)?;
let ssh_key = ssh_key::PrivateKey::from(ssh_key::private::Ed25519Keypair::random(
&mut ssh_key::rand_core::OsRng::default(),
));
let compat_s9pk_key =
let developer_key =
ed25519_dalek::SigningKey::generate(&mut ssh_key::rand_core::OsRng::default());
Ok(Self {
server_id,
@@ -50,7 +52,7 @@ impl AccountInfo {
root_ca_key,
root_ca_cert,
ssh_key,
compat_s9pk_key,
developer_key,
})
}
@@ -59,7 +61,13 @@ impl AccountInfo {
let hostname = Hostname(db.as_public().as_server_info().as_hostname().de()?);
let password = db.as_private().as_password().de()?;
let key_store = db.as_private().as_key_store();
let tor_addrs = db.as_public().as_server_info().as_host().as_onions().de()?;
let tor_addrs = db
.as_public()
.as_server_info()
.as_network()
.as_host()
.as_onions()
.de()?;
let tor_keys = tor_addrs
.into_iter()
.map(|tor_addr| key_store.as_onion().get_key(&tor_addr))
@@ -68,7 +76,7 @@ impl AccountInfo {
let root_ca_key = cert_store.as_root_key().de()?.0;
let root_ca_cert = cert_store.as_root_cert().de()?.0;
let ssh_key = db.as_private().as_ssh_privkey().de()?.0;
let compat_s9pk_key = db.as_private().as_compat_s9pk_key().de()?.0;
let compat_s9pk_key = db.as_private().as_developer_key().de()?.0;
Ok(Self {
server_id,
@@ -78,7 +86,7 @@ impl AccountInfo {
root_ca_key,
root_ca_cert,
ssh_key,
compat_s9pk_key,
developer_key: compat_s9pk_key,
})
}
@@ -89,31 +97,44 @@ impl AccountInfo {
server_info
.as_pubkey_mut()
.ser(&self.ssh_key.public_key().to_openssh()?)?;
server_info.as_host_mut().as_onions_mut().ser(
&self
.tor_keys
.iter()
.map(|tor_key| tor_key.public().get_onion_address())
.collect(),
)?;
server_info
.as_network_mut()
.as_host_mut()
.as_onions_mut()
.ser(
&self
.tor_keys
.iter()
.map(|tor_key| tor_key.onion_address())
.collect(),
)?;
server_info.as_password_hash_mut().ser(&self.password)?;
db.as_private_mut().as_password_mut().ser(&self.password)?;
db.as_private_mut()
.as_ssh_privkey_mut()
.ser(Pem::new_ref(&self.ssh_key))?;
db.as_private_mut()
.as_compat_s9pk_key_mut()
.ser(Pem::new_ref(&self.compat_s9pk_key))?;
.as_developer_key_mut()
.ser(Pem::new_ref(&self.developer_key))?;
let key_store = db.as_private_mut().as_key_store_mut();
for tor_key in &self.tor_keys {
key_store.as_onion_mut().insert_key(tor_key)?;
}
let cert_store = key_store.as_local_certs_mut();
cert_store
.as_root_key_mut()
.ser(Pem::new_ref(&self.root_ca_key))?;
cert_store
.as_root_cert_mut()
.ser(Pem::new_ref(&self.root_ca_cert))?;
if cert_store.as_root_cert().de()?.0 != self.root_ca_cert {
cert_store
.as_root_key_mut()
.ser(Pem::new_ref(&self.root_ca_key))?;
cert_store
.as_root_cert_mut()
.ser(Pem::new_ref(&self.root_ca_cert))?;
let int_key = crate::net::ssl::generate_key()?;
let int_cert =
crate::net::ssl::make_int_cert((&self.root_ca_key, &self.root_ca_cert), &int_key)?;
cert_store.as_int_key_mut().ser(&Pem(int_key))?;
cert_store.as_int_cert_mut().ser(&Pem(int_cert))?;
cert_store.as_leaves_mut().ser(&BTreeMap::new())?;
}
Ok(())
}
@@ -121,4 +142,17 @@ impl AccountInfo {
self.password = hash_password(password)?;
Ok(())
}
pub fn hostnames(&self) -> impl IntoIterator<Item = InternedString> + Send + '_ {
[
self.hostname.no_dot_host_name(),
self.hostname.local_domain_name(),
]
.into_iter()
.chain(
self.tor_keys
.iter()
.map(|k| InternedString::from_display(&k.onion_address())),
)
}
}


@@ -2,7 +2,7 @@ use std::fmt;
use clap::{CommandFactory, FromArgMatches, Parser};
pub use models::ActionId;
use models::PackageId;
use models::{PackageId, ReplayId};
use qrcode::QrCode;
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
@@ -10,12 +10,26 @@ use tracing::instrument;
use ts_rs::TS;
use crate::context::{CliContext, RpcContext};
use crate::db::model::package::TaskSeverity;
use crate::prelude::*;
use crate::rpc_continuations::Guid;
use crate::util::serde::{
display_serializable, HandlerExtSerde, StdinDeserializable, WithIoFormat,
};
#[test]
fn export_bindings_action() {
use crate::db::model::package::{ActionMetadata, Task};
const OUT_DIR: &str = "./bindings/action";
ActionId::export_all_to(OUT_DIR).unwrap();
ActionInput::export_all_to(OUT_DIR).unwrap();
ActionResult::export_all_to(OUT_DIR).unwrap();
ActionMetadata::export_all_to(OUT_DIR).unwrap();
Task::export_all_to(OUT_DIR).unwrap();
}
pub fn action_api<C: Context>() -> ParentHandler<C> {
ParentHandler::new()
.subcommand(
@@ -38,12 +52,20 @@ pub fn action_api<C: Context>() -> ParentHandler<C> {
.with_about("Run service action")
.with_call_remote::<CliContext>(),
)
.subcommand(
"clear-task",
from_fn_async(clear_task)
.no_display()
.with_about("Clear a service task")
.with_call_remote::<CliContext>(),
)
}
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub struct ActionInput {
#[serde(default)]
pub event_id: Guid,
#[ts(type = "Record<string, unknown>")]
pub spec: Value,
#[ts(type = "Record<string, unknown> | null")]
@@ -76,13 +98,34 @@ pub async fn get_action_input(
#[derive(Debug, Serialize, Deserialize, TS)]
#[serde(tag = "version")]
#[ts(export)]
pub enum ActionResult {
#[serde(rename = "0")]
V0(ActionResultV0),
#[serde(rename = "1")]
V1(ActionResultV1),
}
impl ActionResult {
pub fn upcast(self) -> Self {
match self {
Self::V0(ActionResultV0 {
message,
value,
copyable,
qr,
}) => Self::V1(ActionResultV1 {
title: "Action Complete".into(),
message: Some(message),
result: value.map(|value| ActionResultValue::Single {
value,
copyable,
qr,
masked: false,
}),
}),
Self::V1(a) => Self::V1(a),
}
}
}
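The `upcast` method migrates a legacy `V0` payload into the current `V1` shape so downstream code only ever handles one version. A minimal stdlib-only sketch of the in-memory migration (the real type carries the version in the wire format via `#[serde(tag = "version")]`, and the field set here is trimmed):

```rust
// Sketch of versioned-payload migration: V0 is upcast to V1 in memory,
// V1 passes through unchanged.
#[derive(Debug, PartialEq)]
enum Versioned {
    V0 { message: String },
    V1 { title: String, message: Option<String> },
}

impl Versioned {
    fn upcast(self) -> Self {
        match self {
            Versioned::V0 { message } => Versioned::V1 {
                title: "Action Complete".into(),
                message: Some(message),
            },
            v1 => v1,
        }
    }
}

fn main() {
    let migrated = Versioned::V0 { message: "done".into() }.upcast();
    assert_eq!(
        migrated,
        Versioned::V1 {
            title: "Action Complete".into(),
            message: Some("done".into()),
        }
    );
    println!("ok");
}
```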
impl fmt::Display for ActionResult {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
@@ -222,20 +265,25 @@ impl fmt::Display for ActionResultV1 {
}
}
pub fn display_action_result<T: Serialize>(params: WithIoFormat<T>, result: Option<ActionResult>) {
pub fn display_action_result<T: Serialize>(
params: WithIoFormat<T>,
result: Option<ActionResult>,
) -> Result<(), Error> {
let Some(result) = result else {
return;
return Ok(());
};
if let Some(format) = params.format {
return display_serializable(format, result);
}
println!("{result}")
println!("{result}");
Ok(())
}
#[derive(Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct RunActionParams {
pub package_id: PackageId,
pub event_id: Option<Guid>,
pub action_id: ActionId,
#[ts(optional, type = "any")]
pub input: Option<Value>,
@@ -244,6 +292,7 @@ pub struct RunActionParams {
#[derive(Parser)]
struct CliRunActionParams {
pub package_id: PackageId,
pub event_id: Option<Guid>,
pub action_id: ActionId,
#[command(flatten)]
pub input: StdinDeserializable<Option<Value>>,
@@ -252,12 +301,14 @@ impl From<CliRunActionParams> for RunActionParams {
fn from(
CliRunActionParams {
package_id,
event_id,
action_id,
input,
}: CliRunActionParams,
) -> Self {
Self {
package_id,
event_id,
action_id,
input: input.0,
}
@@ -297,6 +348,7 @@ pub async fn run_action(
ctx: RpcContext,
RunActionParams {
package_id,
event_id,
action_id,
input,
}: RunActionParams,
@@ -306,6 +358,54 @@ pub async fn run_action(
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {}", package_id))?
.run_action(Guid::new(), action_id, input.unwrap_or_default())
.run_action(
event_id.unwrap_or_default(),
action_id,
input.unwrap_or_default(),
)
.await
.map(|res| res.map(ActionResult::upcast))
}
#[derive(Deserialize, Serialize, Parser, TS)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "kebab-case")]
pub struct ClearTaskParams {
pub package_id: PackageId,
pub replay_id: ReplayId,
#[arg(long)]
#[serde(default)]
pub force: bool,
}
#[instrument(skip_all)]
pub async fn clear_task(
ctx: RpcContext,
ClearTaskParams {
package_id,
replay_id,
force,
}: ClearTaskParams,
) -> Result<(), Error> {
ctx.db
.mutate(|db| {
if let Some(task) = db
.as_public_mut()
.as_package_data_mut()
.as_idx_mut(&package_id)
.or_not_found(&package_id)?
.as_tasks_mut()
.remove(&replay_id)?
{
if !force && task.as_task().as_severity().de()? == TaskSeverity::Critical {
return Err(Error::new(
eyre!("Cannot clear critical task"),
ErrorKind::InvalidRequest,
));
}
}
Ok(())
})
.await
.result
}


@@ -3,27 +3,27 @@ use std::collections::BTreeMap;
use chrono::{DateTime, Utc};
use clap::Parser;
use color_eyre::eyre::eyre;
use imbl_value::{json, InternedString};
use imbl_value::{InternedString, json};
use itertools::Itertools;
use josekit::jwk::Jwk;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{from_fn_async, Context, HandlerArgs, HandlerExt, ParentHandler};
use rpc_toolkit::{CallRemote, Context, HandlerArgs, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use ts_rs::TS;
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::middleware::auth::{
AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes,
AsLogoutSessionId, AuthContext, HasLoggedOutSessions, HashSessionToken, LoginRes,
};
use crate::prelude::*;
use crate::util::crypto::EncryptedWire;
use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
use crate::{ensure_code, Error, ResultExt};
use crate::util::io::create_file_mod;
use crate::util::serde::{HandlerExtSerde, WithIoFormat, display_serializable};
use crate::{Error, ResultExt, ensure_code};
#[derive(Debug, Clone, Default, Deserialize, Serialize, TS)]
#[ts(as = "BTreeMap::<String, Session>")]
pub struct Sessions(pub BTreeMap<InternedString, Session>);
impl Sessions {
pub fn new() -> Self {
@@ -41,9 +41,35 @@ impl Map for Sessions {
}
}
pub async fn write_shadow(password: &str) -> Result<(), Error> {
let hash: String = sha_crypt::sha512_simple(password, &sha_crypt::Sha512Params::default())
.map_err(|e| Error::new(eyre!("{e:?}"), ErrorKind::Serialization))?;
let shadow_contents = tokio::fs::read_to_string("/etc/shadow").await?;
let mut shadow_file =
create_file_mod("/media/startos/config/overlay/etc/shadow", 0o640).await?;
for line in shadow_contents.lines() {
match line.split_once(":") {
Some((user, rest)) if user == "start9" || user == "kiosk" => {
let (_, rest) = rest.split_once(":").ok_or_else(|| {
Error::new(eyre!("malformed /etc/shadow"), ErrorKind::ParseSysInfo)
})?;
shadow_file
.write_all(format!("{user}:{hash}:{rest}\n").as_bytes())
.await?;
}
_ => {
shadow_file.write_all(line.as_bytes()).await?;
shadow_file.write_all(b"\n").await?;
}
}
}
shadow_file.sync_all().await?;
tokio::fs::copy("/media/startos/config/overlay/etc/shadow", "/etc/shadow").await?;
Ok(())
}
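The `write_shadow` function above copies `/etc/shadow` into the overlay while swapping the hash field for the `start9` and `kiosk` users. As a minimal sketch of the per-line transformation (field layout assumed from the shadow(5) format; `replace_hash` is a hypothetical helper, not part of the codebase):

```rust
// Sketch of the per-line rewrite write_shadow performs: for selected users,
// replace the second colon-separated field (the password hash) and keep the
// remaining fields untouched. `replace_hash` is illustrative only.
fn replace_hash(line: &str, users: &[&str], new_hash: &str) -> String {
    match line.split_once(':') {
        // Managed user: drop the old hash, keep the trailing fields verbatim.
        Some((user, rest)) if users.contains(&user) => match rest.split_once(':') {
            Some((_, tail)) => format!("{user}:{new_hash}:{tail}"),
            None => line.to_string(), // malformed entry: pass through unchanged
        },
        // Any other line (other users, malformed lines) is copied as-is.
        _ => line.to_string(),
    }
}
```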
#[derive(Clone, Serialize, Deserialize, TS)]
#[serde(untagged)]
#[ts(export)]
pub enum PasswordType {
EncryptedWire(EncryptedWire),
String(String),
@@ -83,31 +109,34 @@ impl std::str::FromStr for PasswordType {
})
}
}
pub fn auth<C: Context>() -> ParentHandler<C> {
pub fn auth<C: Context, AC: AuthContext>() -> ParentHandler<C>
where
CliContext: CallRemote<AC>,
{
ParentHandler::new()
.subcommand(
"login",
from_fn_async(login_impl)
from_fn_async(login_impl::<AC>)
.with_metadata("login", Value::Bool(true))
.no_cli(),
)
.subcommand(
"login",
from_fn_async(cli_login)
from_fn_async(cli_login::<AC>)
.no_display()
.with_about("Log in to StartOS server"),
.with_about("Log in a new auth session"),
)
.subcommand(
"logout",
from_fn_async(logout)
from_fn_async(logout::<AC>)
.with_metadata("get_session", Value::Bool(true))
.no_display()
.with_about("Log out of StartOS server")
.with_about("Log out of current auth session")
.with_call_remote::<CliContext>(),
)
.subcommand(
"session",
session::<C>().with_about("List or kill StartOS sessions"),
session::<C, AC>().with_about("List or kill auth sessions"),
)
.subcommand(
"reset-password",
@@ -117,15 +146,7 @@ pub fn auth<C: Context>() -> ParentHandler<C> {
"reset-password",
from_fn_async(cli_reset_password)
.no_display()
.with_about("Reset StartOS password"),
)
.subcommand(
"get-pubkey",
from_fn_async(get_pubkey)
.with_metadata("authenticated", Value::Bool(false))
.no_display()
.with_about("Get public key derived from server private key")
.with_call_remote::<CliContext>(),
.with_about("Reset password"),
)
}
@@ -143,17 +164,20 @@ fn gen_pwd() {
}
#[instrument(skip_all)]
async fn cli_login(
async fn cli_login<C: AuthContext>(
HandlerArgs {
context: ctx,
parent_method,
method,
..
}: HandlerArgs<CliContext>,
) -> Result<(), RpcError> {
) -> Result<(), RpcError>
where
CliContext: CallRemote<C>,
{
let password = rpassword::prompt_password("Password: ")?;
ctx.call_remote::<RpcContext>(
ctx.call_remote::<C>(
&parent_method.into_iter().chain(method).join("."),
json!({
"password": password,
@@ -181,66 +205,51 @@ pub fn check_password(hash: &str, password: &str) -> Result<(), Error> {
Ok(())
}
pub fn check_password_against_db(db: &DatabaseModel, password: &str) -> Result<(), Error> {
let pw_hash = db.as_private().as_password().de()?;
check_password(&pw_hash, password)?;
Ok(())
}
#[derive(Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct LoginParams {
password: Option<PasswordType>,
password: String,
#[ts(skip)]
#[serde(rename = "__auth_userAgent")] // from Auth middleware
#[serde(rename = "__Auth_userAgent")] // from Auth middleware
user_agent: Option<String>,
#[serde(default)]
ephemeral: bool,
#[serde(default)]
#[ts(type = "any")]
metadata: Value,
}
#[instrument(skip_all)]
pub async fn login_impl(
ctx: RpcContext,
pub async fn login_impl<C: AuthContext>(
ctx: C,
LoginParams {
password,
user_agent,
ephemeral,
metadata,
}: LoginParams,
) -> Result<LoginRes, Error> {
let password = password.unwrap_or_default().decrypt(&ctx)?;
if ephemeral {
check_password_against_db(&ctx.db.peek().await, &password)?;
let tok = if ephemeral {
C::check_password(&ctx.db().peek().await, &password)?;
let hash_token = HashSessionToken::new();
ctx.ephemeral_sessions.mutate(|s| {
ctx.ephemeral_sessions().mutate(|s| {
s.0.insert(
hash_token.hashed().clone(),
Session {
logged_in: Utc::now(),
last_active: Utc::now(),
user_agent,
metadata,
},
)
});
Ok(hash_token.to_login_res())
} else {
ctx.db
ctx.db()
.mutate(|db| {
check_password_against_db(db, &password)?;
C::check_password(db, &password)?;
let hash_token = HashSessionToken::new();
db.as_private_mut().as_sessions_mut().insert(
C::access_sessions(db).insert(
hash_token.hashed(),
&Session {
logged_in: Utc::now(),
last_active: Utc::now(),
user_agent,
metadata,
},
)?;
@@ -248,7 +257,11 @@ pub async fn login_impl(
})
.await
.result
}
}?;
ctx.post_login_hook(&password).await?;
Ok(tok)
}
#[derive(Deserialize, Serialize, Parser, TS)]
@@ -256,12 +269,12 @@ pub async fn login_impl(
#[command(rename_all = "kebab-case")]
pub struct LogoutParams {
#[ts(skip)]
#[serde(rename = "__auth_session")] // from Auth middleware
#[serde(rename = "__Auth_session")] // from Auth middleware
session: InternedString,
}
pub async fn logout(
ctx: RpcContext,
pub async fn logout<C: AuthContext>(
ctx: C,
LogoutParams { session }: LogoutParams,
) -> Result<Option<HasLoggedOutSessions>, Error> {
Ok(Some(
@@ -271,50 +284,46 @@ pub async fn logout(
#[derive(Debug, Clone, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct Session {
#[ts(type = "string")]
pub logged_in: DateTime<Utc>,
#[ts(type = "string")]
pub last_active: DateTime<Utc>,
#[ts(skip)]
pub user_agent: Option<String>,
#[ts(type = "any")]
pub metadata: Value,
}
#[derive(Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
#[ts(export)]
pub struct SessionList {
#[ts(type = "string | null")]
current: Option<InternedString>,
sessions: Sessions,
}
pub fn session<C: Context>() -> ParentHandler<C> {
pub fn session<C: Context, AC: AuthContext>() -> ParentHandler<C>
where
CliContext: CallRemote<AC>,
{
ParentHandler::new()
.subcommand(
"list",
from_fn_async(list)
from_fn_async(list::<AC>)
.with_metadata("get_session", Value::Bool(true))
.with_display_serializable()
.with_custom_display_fn(|handle, result| {
Ok(display_sessions(handle.params, result))
})
.with_about("Display all server sessions")
.with_custom_display_fn(|handle, result| display_sessions(handle.params, result))
.with_about("Display all auth sessions")
.with_call_remote::<CliContext>(),
)
.subcommand(
"kill",
from_fn_async(kill)
from_fn_async(kill::<AC>)
.no_display()
.with_about("Terminate existing server session(s)")
.with_about("Terminate existing auth session(s)")
.with_call_remote::<CliContext>(),
)
}
fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) -> Result<(), Error> {
use prettytable::*;
if let Some(format) = params.format {
@@ -327,7 +336,6 @@ fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
"LOGGED IN",
"LAST ACTIVE",
"USER AGENT",
"METADATA",
]);
for (id, session) in arg.sessions.0 {
let mut row = row![
@@ -335,7 +343,6 @@ fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
&format!("{}", session.logged_in),
&format!("{}", session.last_active),
session.user_agent.as_deref().unwrap_or("N/A"),
&format!("{}", session.metadata),
];
if Some(id) == arg.current {
row.iter_mut()
@@ -344,7 +351,8 @@ fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
}
table.add_row(row);
}
table.print_tty(false).unwrap();
table.print_tty(false)?;
Ok(())
}
#[derive(Deserialize, Serialize, Parser, TS)]
@@ -353,18 +361,18 @@ fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
pub struct ListParams {
#[arg(skip)]
#[ts(skip)]
#[serde(rename = "__auth_session")] // from Auth middleware
#[serde(rename = "__Auth_session")] // from Auth middleware
session: Option<InternedString>,
}
// #[command(display(display_sessions))]
#[instrument(skip_all)]
pub async fn list(
ctx: RpcContext,
pub async fn list<C: AuthContext>(
ctx: C,
ListParams { session, .. }: ListParams,
) -> Result<SessionList, Error> {
let mut sessions = ctx.db.peek().await.into_private().into_sessions().de()?;
ctx.ephemeral_sessions.peek(|s| {
let mut sessions = C::access_sessions(&mut ctx.db().peek().await).de()?;
ctx.ephemeral_sessions().peek(|s| {
sessions
.0
.extend(s.0.iter().map(|(k, v)| (k.clone(), v.clone())))
@@ -375,7 +383,7 @@ pub async fn list(
})
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
struct KillSessionId(InternedString);
impl KillSessionId {
@@ -398,7 +406,7 @@ pub struct KillParams {
}
#[instrument(skip_all)]
pub async fn kill(ctx: RpcContext, KillParams { ids }: KillParams) -> Result<(), Error> {
pub async fn kill<C: AuthContext>(ctx: C, KillParams { ids }: KillParams) -> Result<(), Error> {
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId::new), &ctx).await?;
Ok(())
}
@@ -454,30 +462,19 @@ pub async fn reset_password_impl(
let old_password = old_password.unwrap_or_default().decrypt(&ctx)?;
let new_password = new_password.unwrap_or_default().decrypt(&ctx)?;
let mut account = ctx.account.write().await;
if !argon2::verify_encoded(&account.password, old_password.as_bytes())
.with_kind(crate::ErrorKind::IncorrectPassword)?
{
return Err(Error::new(
eyre!("Incorrect Password"),
crate::ErrorKind::IncorrectPassword,
));
}
account.set_password(&new_password)?;
let account_password = &account.password;
let account = account.clone();
ctx.db
.mutate(|d| {
d.as_public_mut()
.as_server_info_mut()
.as_password_hash_mut()
.ser(account_password)?;
account.save(d)?;
Ok(())
})
.await
.result
let account = ctx.account.mutate(|account| {
if !argon2::verify_encoded(&account.password, old_password.as_bytes())
.with_kind(crate::ErrorKind::IncorrectPassword)?
{
return Err(Error::new(
eyre!("Incorrect Password"),
crate::ErrorKind::IncorrectPassword,
));
}
account.set_password(&new_password)?;
Ok(account.clone())
})?;
ctx.db.mutate(|d| account.save(d)).await.result
}
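The rewritten `reset_password_impl` above verifies the old password and sets the new one inside a single `mutate` closure on `ctx.account`, cloning the updated value out for the follow-up db write. A generic sketch of that mutate-and-return pattern (`SyncCell` is a hypothetical stand-in for whatever lock wrapper `account` actually uses):

```rust
use std::sync::Mutex;

// Hypothetical stand-in for the account cell: run a fallible closure against
// the guarded value and hand back whatever the closure returns, so callers
// can validate, mutate, and extract a snapshot in one critical section.
struct SyncCell<T>(Mutex<T>);

impl<T> SyncCell<T> {
    fn mutate<R>(&self, f: impl FnOnce(&mut T) -> Result<R, String>) -> Result<R, String> {
        let mut guard = self.0.lock().unwrap();
        f(&mut guard)
    }
}
```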
#[instrument(skip_all)]

@@ -13,9 +13,8 @@ use tokio::io::AsyncWriteExt;
use tracing::instrument;
use ts_rs::TS;
use super::target::{BackupTargetId, PackageBackupInfo};
use super::PackageBackupReport;
use crate::auth::check_password_against_db;
use super::target::{BackupTargetId, PackageBackupInfo};
use crate::backup::os::OsBackup;
use crate::backup::{BackupReport, ServerBackupReport};
use crate::context::RpcContext;
@@ -24,7 +23,8 @@ use crate::db::model::{Database, DatabaseModel};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::notifications::{notify, NotificationLevel};
use crate::middleware::auth::AuthContext;
use crate::notifications::{NotificationLevel, notify};
use crate::prelude::*;
use crate::util::io::dir_copy;
use crate::util::serde::IoFormat;
@@ -170,7 +170,7 @@ pub async fn backup_all(
let ((fs, package_ids, server_id), status_guard) = (
ctx.db
.mutate(|db| {
check_password_against_db(db, &password)?;
RpcContext::check_password(db, &password)?;
let fs = target_id.load(db)?;
let package_ids = if let Some(ids) = package_ids {
ids.into_iter().collect()
@@ -223,18 +223,7 @@ fn assure_backing_up<'a>(
.as_server_info_mut()
.as_status_info_mut()
.as_backup_progress_mut();
if backing_up
.clone()
.de()?
.iter()
.flat_map(|x| x.values())
.fold(false, |acc, x| {
if !x.complete {
return true;
}
acc
})
{
if backing_up.transpose_ref().is_some() {
return Err(Error::new(
eyre!("Server is already backing up!"),
ErrorKind::InvalidRequest,
@@ -287,6 +276,22 @@ async fn perform_backup(
timestamp: Utc::now(),
},
);
ctx.db
.mutate(|db| {
if let Some(progress) = db
.as_public_mut()
.as_server_info_mut()
.as_status_info_mut()
.as_backup_progress_mut()
.transpose_mut()
{
progress.insert(&id, &BackupProgress { complete: true })?;
}
Ok(())
})
.await
.result?;
}
backup_report.insert(
id.clone(),
@@ -312,7 +317,7 @@ async fn perform_backup(
.with_kind(ErrorKind::Filesystem)?;
os_backup_file
.write_all(&IoFormat::Json.to_vec(&OsBackup {
account: ctx.account.read().await.clone(),
account: ctx.account.peek(|a| a.clone()),
ui,
})?)
.await?;
@@ -337,7 +342,7 @@ async fn perform_backup(
let timestamp = Utc::now();
backup_guard.unencrypted_metadata.version = crate::version::Current::default().semver().into();
backup_guard.unencrypted_metadata.hostname = ctx.account.read().await.hostname.clone();
backup_guard.unencrypted_metadata.hostname = ctx.account.peek(|a| a.hostname.clone());
backup_guard.unencrypted_metadata.timestamp = timestamp.clone();
backup_guard.metadata.version = crate::version::Current::default().semver().into();
backup_guard.metadata.timestamp = Some(timestamp);

@@ -3,7 +3,7 @@ use std::collections::BTreeMap;
use chrono::{DateTime, Utc};
use models::{HostId, PackageId};
use reqwest::Url;
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use crate::context::CliContext;

@@ -4,10 +4,10 @@ use openssl::x509::X509;
use patch_db::Value;
use serde::{Deserialize, Serialize};
use ssh_key::private::Ed25519Keypair;
use torut::onion::TorSecretKeyV3;
use crate::account::AccountInfo;
use crate::hostname::{generate_hostname, generate_id, Hostname};
use crate::hostname::{Hostname, generate_hostname, generate_id};
use crate::net::tor::TorSecretKey;
use crate::prelude::*;
use crate::util::crypto::ed25519_expand_key;
use crate::util::serde::{Base32, Base64, Pem};
@@ -36,7 +36,7 @@ impl<'de> Deserialize<'de> for OsBackup {
v => {
return Err(serde::de::Error::custom(&format!(
"Unknown backup version {v}"
)))
)));
}
})
}
@@ -85,8 +85,11 @@ impl OsBackupV0 {
&mut ssh_key::rand_core::OsRng::default(),
ssh_key::Algorithm::Ed25519,
)?,
tor_keys: vec![TorSecretKeyV3::from(self.tor_key.0)],
compat_s9pk_key: ed25519_dalek::SigningKey::generate(
tor_keys: TorSecretKey::from_bytes(self.tor_key.0)
.ok()
.into_iter()
.collect(),
developer_key: ed25519_dalek::SigningKey::generate(
&mut ssh_key::rand_core::OsRng::default(),
),
},
@@ -116,8 +119,11 @@ impl OsBackupV1 {
root_ca_key: self.root_ca_key.0,
root_ca_cert: self.root_ca_cert.0,
ssh_key: ssh_key::PrivateKey::from(Ed25519Keypair::from_seed(&self.net_key.0)),
tor_keys: vec![TorSecretKeyV3::from(ed25519_expand_key(&self.net_key.0))],
compat_s9pk_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
tor_keys: TorSecretKey::from_bytes(ed25519_expand_key(&self.net_key.0))
.ok()
.into_iter()
.collect(),
developer_key: ed25519_dalek::SigningKey::from_bytes(&self.net_key),
},
ui: self.ui,
}
@@ -134,7 +140,7 @@ struct OsBackupV2 {
root_ca_key: Pem<PKey<Private>>, // PEM Encoded OpenSSL Key
root_ca_cert: Pem<X509>, // PEM Encoded OpenSSL X509 Certificate
ssh_key: Pem<ssh_key::PrivateKey>, // PEM Encoded OpenSSH Key
tor_keys: Vec<TorSecretKeyV3>, // Base64 Encoded Ed25519 Expanded Secret Key
tor_keys: Vec<TorSecretKey>, // Base64 Encoded Ed25519 Expanded Secret Key
compat_s9pk_key: Pem<ed25519_dalek::SigningKey>, // PEM Encoded ED25519 Key
ui: Value, // JSON Value
}
@@ -149,7 +155,7 @@ impl OsBackupV2 {
root_ca_cert: self.root_ca_cert.0,
ssh_key: self.ssh_key.0,
tor_keys: self.tor_keys,
compat_s9pk_key: self.compat_s9pk_key.0,
developer_key: self.compat_s9pk_key.0,
},
ui: self.ui,
}
@@ -162,7 +168,7 @@ impl OsBackupV2 {
root_ca_cert: Pem(backup.account.root_ca_cert.clone()),
ssh_key: Pem(backup.account.ssh_key.clone()),
tor_keys: backup.account.tor_keys.clone(),
compat_s9pk_key: Pem(backup.account.compat_s9pk_key.clone()),
compat_s9pk_key: Pem(backup.account.developer_key.clone()),
ui: backup.ui.clone(),
}
}

@@ -2,7 +2,7 @@ use std::collections::BTreeMap;
use std::sync::Arc;
use clap::Parser;
use futures::{stream, StreamExt};
use futures::{StreamExt, stream};
use models::PackageId;
use patch_db::json_ptr::ROOT;
use serde::{Deserialize, Serialize};
@@ -11,6 +11,7 @@ use tracing::instrument;
use ts_rs::TS;
use super::target::BackupTargetId;
use crate::PLATFORM;
use crate::backup::os::OsBackup;
use crate::context::setup::SetupResult;
use crate::context::{RpcContext, SetupContext};
@@ -20,9 +21,11 @@ use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::init::init;
use crate::prelude::*;
use crate::progress::ProgressUnits;
use crate::s9pk::S9pk;
use crate::service::service_map::DownloadInstallFuture;
use crate::setup::SetupExecuteProgress;
use crate::system::sync_kiosk;
use crate::util::serde::IoFormat;
#[derive(Deserialize, Serialize, Parser, TS)]
@@ -80,6 +83,7 @@ pub async fn recover_full_embassy(
recovery_source: TmpMountGuard,
server_id: &str,
recovery_password: &str,
kiosk: Option<bool>,
SetupExecuteProgress {
init_phases,
restore_phase,
@@ -105,8 +109,12 @@ pub async fn recover_full_embassy(
)
.with_kind(ErrorKind::PasswordHashGeneration)?;
let kiosk = Some(kiosk.unwrap_or(true)).filter(|_| &*PLATFORM != "raspberrypi");
sync_kiosk(kiosk).await?;
let db = ctx.db().await?;
db.put(&ROOT, &Database::init(&os_backup.account)?).await?;
db.put(&ROOT, &Database::init(&os_backup.account, kiosk)?)
.await?;
drop(db);
let init_result = init(&ctx.webserver, &ctx.config, init_phases).await?;
@@ -129,6 +137,7 @@ pub async fn recover_full_embassy(
.collect();
let tasks = restore_packages(&rpc_ctx, backup_guard, ids).await?;
restore_phase.set_total(tasks.len() as u64);
restore_phase.set_units(Some(ProgressUnits::Steps));
let restore_phase = Arc::new(Mutex::new(restore_phase));
stream::iter(tasks)
.for_each_concurrent(5, |(id, res)| {
@@ -166,6 +175,7 @@ async fn restore_packages(
.install(
ctx.clone(),
|| S9pk::open(s9pk_path, Some(&id)),
None, // TODO: pull from metadata?
Some(backup_dir),
None,
)

@@ -4,17 +4,17 @@ use std::path::{Path, PathBuf};
use clap::Parser;
use color_eyre::eyre::eyre;
use imbl_value::InternedString;
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use super::{BackupTarget, BackupTargetId};
use crate::context::{CliContext, RpcContext};
use crate::db::model::DatabaseModel;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::filesystem::ReadOnly;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::{recovery_info, StartOsRecoveryInfo};
use crate::disk::util::{StartOsRecoveryInfo, recovery_info};
use crate::prelude::*;
use crate::util::serde::KeyVal;
@@ -36,7 +36,7 @@ impl Map for CifsTargets {
}
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct CifsBackupTarget {
hostname: String,

@@ -2,15 +2,15 @@ use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use chrono::{DateTime, Utc};
use clap::builder::ValueParserFactory;
use clap::Parser;
use clap::builder::ValueParserFactory;
use color_eyre::eyre::eyre;
use digest::generic_array::GenericArray;
use digest::OutputSizeUser;
use digest::generic_array::GenericArray;
use exver::Version;
use imbl_value::InternedString;
use models::{FromStrParser, PackageId};
use rpc_toolkit::{from_fn_async, Context, HandlerExt, ParentHandler};
use rpc_toolkit::{Context, HandlerExt, ParentHandler, from_fn_async};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tokio::sync::Mutex;
@@ -27,18 +27,18 @@ use crate::disk::mount::filesystem::{FileSystem, MountType, ReadWrite};
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::PartitionInfo;
use crate::prelude::*;
use crate::util::serde::{
deserialize_from_str, display_serializable, serialize_display, HandlerExtSerde, WithIoFormat,
};
use crate::util::VersionString;
use crate::util::serde::{
HandlerExtSerde, WithIoFormat, deserialize_from_str, display_serializable, serialize_display,
};
pub mod cifs;
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Deserialize, Serialize, TS)]
#[serde(tag = "type")]
#[serde(rename_all = "camelCase")]
#[serde(rename_all_fields = "camelCase")]
pub enum BackupTarget {
#[serde(rename_all = "camelCase")]
Disk {
vendor: Option<String>,
model: Option<String>,
@@ -157,7 +157,7 @@ pub fn target<C: Context>() -> ParentHandler<C> {
from_fn_async(info)
.with_display_serializable()
.with_custom_display_fn::<CliContext, _>(|params, info| {
Ok(display_backup_info(params.params, info))
display_backup_info(params.params, info)
})
.with_about("Display package backup information")
.with_call_remote::<CliContext>(),
@@ -210,24 +210,26 @@ pub async fn list(ctx: RpcContext) -> Result<BTreeMap<BackupTargetId, BackupTarg
.collect())
}
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
#[derive(Clone, Debug, Default, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct BackupInfo {
#[ts(type = "string")]
pub version: Version,
pub timestamp: Option<DateTime<Utc>>,
pub package_backups: BTreeMap<PackageId, PackageBackupInfo>,
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[derive(Clone, Debug, Deserialize, Serialize, TS)]
#[serde(rename_all = "camelCase")]
pub struct PackageBackupInfo {
pub title: InternedString,
pub version: VersionString,
#[ts(type = "string")]
pub os_version: Version,
pub timestamp: DateTime<Utc>,
}
fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) {
fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) -> Result<(), Error> {
use prettytable::*;
if let Some(format) = params.format {
@@ -260,7 +262,8 @@ fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) {
];
table.add_row(row);
}
table.print_tty(false).unwrap();
table.print_tty(false)?;
Ok(())
}
#[derive(Deserialize, Serialize, Parser, TS)]
@@ -296,7 +299,7 @@ pub async fn info(
}
lazy_static::lazy_static! {
static ref USER_MOUNTS: Mutex<BTreeMap<BackupTargetId, BackupMountGuard<TmpMountGuard>>> =
static ref USER_MOUNTS: Mutex<BTreeMap<BackupTargetId, Result<BackupMountGuard<TmpMountGuard>, TmpMountGuard>>> =
Mutex::new(BTreeMap::new());
}
@@ -305,8 +308,11 @@ lazy_static::lazy_static! {
#[command(rename_all = "kebab-case")]
pub struct MountParams {
target_id: BackupTargetId,
server_id: String,
#[arg(long)]
server_id: Option<String>,
password: String,
#[arg(long)]
allow_partial: bool,
}
#[instrument(skip_all)]
@@ -316,24 +322,63 @@ pub async fn mount(
target_id,
server_id,
password,
allow_partial,
}: MountParams,
) -> Result<String, Error> {
let server_id = if let Some(server_id) = server_id {
server_id
} else {
ctx.db
.peek()
.await
.into_public()
.into_server_info()
.into_id()
.de()?
};
let mut mounts = USER_MOUNTS.lock().await;
if let Some(existing) = mounts.get(&target_id) {
return Ok(existing.path().display().to_string());
}
let existing = mounts.get(&target_id);
let guard = BackupMountGuard::mount(
TmpMountGuard::mount(&target_id.clone().load(&ctx.db.peek().await)?, ReadWrite).await?,
&server_id,
&password,
)
.await?;
let base = match existing {
Some(Ok(a)) => return Ok(a.path().display().to_string()),
Some(Err(e)) => e.clone(),
None => {
TmpMountGuard::mount(&target_id.clone().load(&ctx.db.peek().await)?, ReadWrite).await?
}
};
let guard = match BackupMountGuard::mount(base.clone(), &server_id, &password).await {
Ok(a) => a,
Err(e) => {
if allow_partial {
mounts.insert(target_id, Err(base.clone()));
let enc_key = BackupMountGuard::<TmpMountGuard>::load_metadata(
base.path(),
&server_id,
&password,
)
.await
.map(|(_, k)| k);
return Err(e)
.with_ctx(|e| (
e.kind,
format!(
"\nThe base filesystem did successfully mount at {:?}\nWrapped Key: {:?}",
base.path(),
enc_key
)
));
} else {
return Err(e);
}
}
};
let res = guard.path().display().to_string();
mounts.insert(target_id, guard);
mounts.insert(target_id, Ok(guard));
Ok(res)
}
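The `mount` refactor above changes `USER_MOUNTS` to cache `Result<BackupMountGuard, TmpMountGuard>`, so a failed decryption keeps the already-mounted base filesystem around for a later retry instead of remounting from scratch. A simplified model of that retry-friendly cache shape (guard types reduced to path strings; `get_or_mount` is a sketch, not the real API):

```rust
use std::collections::BTreeMap;

// Simplified model of the USER_MOUNTS refactor: Ok(path) is a fully opened
// backup, Err(base) remembers a base mount whose decryption failed, so the
// next attempt reuses it rather than mounting the target again.
fn get_or_mount(
    cache: &mut BTreeMap<String, Result<String, String>>,
    id: &str,
    decrypt_ok: bool,
) -> Result<String, String> {
    let base = match cache.get(id) {
        Some(Ok(path)) => return Ok(path.clone()), // already fully mounted
        Some(Err(base)) => base.clone(),           // reuse the stuck base mount
        None => format!("/tmp/base/{id}"),         // fresh base mount (sketch)
    };
    if decrypt_ok {
        let path = format!("{base}/backup");
        cache.insert(id.to_string(), Ok(path.clone()));
        Ok(path)
    } else {
        // Partial failure: remember the base so umount/retry can find it.
        cache.insert(id.to_string(), Err(base.clone()));
        Err(base)
    }
}
```

This also explains the `umount` change: both the `Ok` and `Err` arms hold a live mount that must be released.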
@@ -350,11 +395,17 @@ pub async fn umount(_: RpcContext, UmountParams { target_id }: UmountParams) ->
let mut mounts = USER_MOUNTS.lock().await; // TODO: move to context
if let Some(target_id) = target_id {
if let Some(existing) = mounts.remove(&target_id) {
existing.unmount().await?;
match existing {
Ok(e) => e.unmount().await?,
Err(e) => e.unmount().await?,
}
}
} else {
for (_, existing) in std::mem::take(&mut *mounts) {
existing.unmount().await?;
match existing {
Ok(e) => e.unmount().await?,
Err(e) => e.unmount().await?,
}
}
}

@@ -2,41 +2,64 @@ use std::collections::VecDeque;
use std::ffi::OsString;
use std::path::Path;
#[cfg(feature = "container-runtime")]
#[cfg(feature = "cli-container")]
pub mod container_cli;
pub mod deprecated;
#[cfg(feature = "registry")]
#[cfg(any(feature = "registry", feature = "cli-registry"))]
pub mod registry;
#[cfg(feature = "cli")]
pub mod start_cli;
#[cfg(feature = "daemon")]
#[cfg(feature = "startd")]
pub mod start_init;
#[cfg(feature = "daemon")]
#[cfg(feature = "startd")]
pub mod startd;
#[cfg(any(feature = "tunnel", feature = "cli-tunnel"))]
pub mod tunnel;
fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
match name {
#[cfg(feature = "cli")]
"start-cli" => Some(start_cli::main),
#[cfg(feature = "container-runtime")]
"start-cli" => Some(container_cli::main),
#[cfg(feature = "daemon")]
#[cfg(feature = "startd")]
"startd" => Some(startd::main),
#[cfg(feature = "registry")]
"registry" => Some(registry::main),
"embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|_| deprecated::renamed("embassy-sdk", "start-sdk")),
#[cfg(feature = "startd")]
"embassyd" => Some(|_| deprecated::renamed("embassyd", "startd")),
#[cfg(feature = "startd")]
"embassy-init" => Some(|_| deprecated::removed("embassy-init")),
#[cfg(feature = "cli-startd")]
"start-cli" => Some(start_cli::main),
#[cfg(feature = "cli-startd")]
"embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
#[cfg(feature = "cli-startd")]
"embassy-sdk" => Some(|_| deprecated::removed("embassy-sdk")),
#[cfg(feature = "cli-container")]
"start-container" => Some(container_cli::main),
#[cfg(feature = "registry")]
"start-registryd" => Some(registry::main),
#[cfg(feature = "cli-registry")]
"start-registry" => Some(registry::cli),
#[cfg(feature = "tunnel")]
"start-tunneld" => Some(tunnel::main),
#[cfg(feature = "cli-tunnel")]
"start-tunnel" => Some(tunnel::cli),
"contents" => Some(|_| {
#[cfg(feature = "cli")]
println!("start-cli");
#[cfg(feature = "container-runtime")]
println!("start-cli (container)");
#[cfg(feature = "daemon")]
#[cfg(feature = "startd")]
println!("startd");
#[cfg(feature = "cli-startd")]
println!("start-cli");
#[cfg(feature = "cli-container")]
println!("start-container");
#[cfg(feature = "registry")]
println!("registry");
println!("start-registryd");
#[cfg(feature = "cli-registry")]
println!("start-registry");
#[cfg(feature = "tunnel")]
println!("start-tunneld");
#[cfg(feature = "cli-tunnel")]
println!("start-tunnel");
}),
_ => None,
}
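The `select_executable` change above reworks busybox-style multi-call dispatch: one binary picks its entrypoint from the name it was invoked as (argv[0]), with renamed legacy names routed to deprecation stubs and each arm gated on a cargo feature. A stripped-down sketch of the pattern (feature gates omitted; the returned labels are for illustration only):

```rust
// Multi-call dispatch sketch: map the invoked program name to an entrypoint.
// Deprecated aliases still resolve, but to a notice rather than the old code.
fn select(name: &str) -> Option<&'static str> {
    match name {
        "startd" => Some("startd"),
        "start-cli" => Some("start-cli"),
        "start-tunneld" => Some("start-tunneld"),
        // Renamed binary: dispatch to a deprecation stub.
        "embassyd" => Some("renamed: use startd"),
        _ => None,
    }
}
```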

@@ -2,20 +2,26 @@ use std::ffi::OsString;
use clap::Parser;
use futures::FutureExt;
use rpc_toolkit::CliApp;
use tokio::signal::unix::signal;
use tracing::instrument;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::net::web_server::{Acceptor, WebServer};
use crate::prelude::*;
use crate::registry::context::{RegistryConfig, RegistryContext};
use crate::registry::registry_router;
use crate::util::logger::LOGGER;
#[instrument(skip_all)]
async fn inner_main(config: &RegistryConfig) -> Result<(), Error> {
let server = async {
let ctx = RegistryContext::init(config).await?;
let mut server = WebServer::new(Acceptor::bind([ctx.listen]).await?);
server.serve_registry(ctx.clone());
let server = WebServer::new(
Acceptor::bind([ctx.listen]).await?,
registry_router(ctx.clone()),
);
let mut shutdown_recv = ctx.shutdown.subscribe();
@@ -85,3 +91,30 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
}
}
}
pub fn cli(args: impl IntoIterator<Item = OsString>) {
LOGGER.enable();
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::registry::registry_api(),
)
.run(args)
{
match e.data {
Some(serde_json::Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(serde_json::Value::Object(o)) => {
if let Some(serde_json::Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(serde_json::Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
}

@@ -3,8 +3,8 @@ use std::ffi::OsString;
use rpc_toolkit::CliApp;
use serde_json::Value;
use crate::context::config::ClientConfig;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::util::logger::LOGGER;
use crate::version::{Current, VersionT};
@@ -17,7 +17,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::expanded_api(),
crate::main_api(),
)
.run(args)
{

@@ -1,4 +1,3 @@
use std::path::Path;
use std::sync::Arc;
use tokio::process::Command;
@@ -7,12 +6,13 @@ use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::rpc::InitRpcContextPhases;
use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
use crate::disk::REPAIR_DISK_PATH;
use crate::disk::fsck::RepairStrategy;
use crate::disk::main::DEFAULT_PASSWORD;
use crate::disk::REPAIR_DISK_PATH;
use crate::firmware::{check_for_firmware_update, update_firmware};
use crate::init::{InitPhases, STANDBY_MODE_PATH};
use crate::net::web_server::{UpgradableListener, WebServer};
use crate::net::gateway::UpgradableListener;
use crate::net::web_server::WebServer;
use crate::prelude::*;
use crate::progress::FullProgressTracker;
use crate::shutdown::Shutdown;
@@ -38,7 +38,7 @@ async fn setup_or_init(
let mut update_phase = handle.add_phase("Updating Firmware".into(), Some(10));
let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
server.serve_init(init_ctx);
server.serve_ui_for(init_ctx);
update_phase.start();
if let Err(e) = update_firmware(firmware).await {
@@ -48,7 +48,7 @@ async fn setup_or_init(
update_phase.complete();
reboot_phase.start();
return Ok(Err(Shutdown {
export_args: None,
disk_guid: None,
restart: true,
}));
}
@@ -94,7 +94,7 @@ async fn setup_or_init(
let ctx = InstallContext::init().await?;
server.serve_install(ctx.clone());
server.serve_ui_for(ctx.clone());
ctx.shutdown
.subscribe()
@@ -103,7 +103,7 @@ async fn setup_or_init(
.expect("context dropped");
return Ok(Err(Shutdown {
export_args: None,
disk_guid: None,
restart: true,
}));
}
@@ -114,10 +114,12 @@ async fn setup_or_init(
{
let ctx = SetupContext::init(server, config)?;
server.serve_setup(ctx.clone());
server.serve_ui_for(ctx.clone());
let mut shutdown = ctx.shutdown.subscribe();
shutdown.recv().await.expect("context dropped");
if let Some(shutdown) = shutdown.recv().await.expect("context dropped") {
return Ok(Err(shutdown));
}
tokio::task::yield_now().await;
if let Err(e) = Command::new("killall")
@@ -136,7 +138,7 @@ async fn setup_or_init(
return Err(Error::new(
eyre!("Setup mode exited before setup completed"),
ErrorKind::Unknown,
))
));
}
}))
} else {
@@ -148,7 +150,7 @@ async fn setup_or_init(
let init_phases = InitPhases::new(&handle);
let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
server.serve_init(init_ctx);
server.serve_ui_for(init_ctx);
async {
disk_phase.start();
@@ -183,7 +185,7 @@ async fn setup_or_init(
let mut reboot_phase = handle.add_phase("Rebooting".into(), Some(1));
reboot_phase.start();
return Ok(Err(Shutdown {
export_args: Some((disk_guid, Path::new(DATA_DIR).to_owned())),
disk_guid: Some(disk_guid),
restart: true,
}));
}
@@ -246,7 +248,7 @@ pub async fn main(
e,
)?;
server.serve_diagnostic(ctx.clone());
server.serve_ui_for(ctx.clone());
let shutdown = ctx.shutdown.subscribe().recv().await.unwrap();


@@ -1,6 +1,5 @@
use std::cmp::max;
use std::ffi::OsString;
use std::net::IpAddr;
use std::sync::Arc;
use std::time::Duration;
@@ -13,9 +12,9 @@ use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::rpc::InitRpcContextPhases;
use crate::context::{DiagnosticContext, InitContext, RpcContext};
use crate::net::network_interface::SelfContainedNetworkInterfaceListener;
use crate::net::utils::ipv6_is_local;
use crate::net::web_server::{Acceptor, UpgradableListener, WebServer};
use crate::net::gateway::{BindTcp, SelfContainedNetworkInterfaceListener, UpgradableListener};
use crate::net::static_server::refresher;
use crate::net::web_server::{Acceptor, WebServer};
use crate::shutdown::Shutdown;
use crate::system::launch_metrics_task;
use crate::util::io::append_file;
@@ -40,7 +39,7 @@ async fn inner_main(
};
tokio::fs::write("/run/startos/initialized", "").await?;
server.serve_main(ctx.clone());
server.serve_ui_for(ctx.clone());
LOGGER.set_logfile(None);
handle.complete();
@@ -49,7 +48,7 @@ async fn inner_main(
let init_ctx = InitContext::init(config).await?;
let handle = init_ctx.progress.clone();
let rpc_ctx_phases = InitRpcContextPhases::new(&handle);
server.serve_init(init_ctx);
server.serve_ui_for(init_ctx);
let ctx = RpcContext::init(
&server.acceptor_setter(),
@@ -65,14 +64,14 @@ async fn inner_main(
)
.await?;
server.serve_main(ctx.clone());
server.serve_ui_for(ctx.clone());
handle.complete();
ctx
};
let (rpc_ctx, shutdown) = async {
crate::hostname::sync_hostname(&rpc_ctx.account.read().await.hostname).await?;
crate::hostname::sync_hostname(&rpc_ctx.account.peek(|a| a.hostname.clone())).await?;
let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
@@ -134,8 +133,6 @@ async fn inner_main(
.await?;
rpc_ctx.shutdown().await?;
tracing::info!("RPC Context is dropped");
Ok(shutdown)
}
@@ -146,14 +143,15 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.worker_threads(max(4, num_cpus::get()))
.worker_threads(max(1, num_cpus::get()))
.enable_all()
.build()
.expect("failed to initialize runtime");
let res = rt.block_on(async {
let mut server = WebServer::new(Acceptor::bind_upgradable(
SelfContainedNetworkInterfaceListener::bind(80),
));
let mut server = WebServer::new(
Acceptor::bind_upgradable(SelfContainedNetworkInterfaceListener::bind(BindTcp, 80)),
refresher(),
);
match inner_main(&mut server, &config).await {
Ok(a) => {
server.shutdown().await;
@@ -181,7 +179,7 @@ pub fn main(args: impl IntoIterator<Item = OsString>) {
e,
)?;
server.serve_diagnostic(ctx.clone());
server.serve_ui_for(ctx.clone());
let mut shutdown = ctx.shutdown.subscribe();


@@ -0,0 +1,200 @@
use std::ffi::OsString;
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;
use clap::Parser;
use futures::FutureExt;
use helpers::NonDetachingJoinHandle;
use rpc_toolkit::CliApp;
use tokio::signal::unix::signal;
use tracing::instrument;
use visit_rs::Visit;
use crate::context::CliContext;
use crate::context::config::ClientConfig;
use crate::net::gateway::{Bind, BindTcp};
use crate::net::tls::TlsListener;
use crate::net::web_server::{Accept, Acceptor, MetadataVisitor, WebServer};
use crate::prelude::*;
use crate::tunnel::context::{TunnelConfig, TunnelContext};
use crate::tunnel::tunnel_router;
use crate::tunnel::web::TunnelCertHandler;
use crate::util::logger::LOGGER;
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum WebserverListener {
Http,
Https(SocketAddr),
}
impl<V: MetadataVisitor> Visit<V> for WebserverListener {
fn visit(&self, visitor: &mut V) -> <V as visit_rs::Visitor>::Result {
visitor.visit(self)
}
}
#[instrument(skip_all)]
async fn inner_main(config: &TunnelConfig) -> Result<(), Error> {
let server = async {
let ctx = TunnelContext::init(config).await?;
let listen = ctx.listen;
let server = WebServer::new(
Acceptor::bind_map_dyn([(WebserverListener::Http, listen)]).await?,
tunnel_router(ctx.clone()),
);
let acceptor_setter = server.acceptor_setter();
let https_db = ctx.db.clone();
let https_thread: NonDetachingJoinHandle<()> = tokio::spawn(async move {
let mut sub = https_db.subscribe("/webserver".parse().unwrap()).await;
while {
while let Err(e) = async {
let webserver = https_db.peek().await.into_webserver();
if webserver.as_enabled().de()? {
let addr = webserver.as_listen().de()?.or_not_found("listen address")?;
acceptor_setter.send_if_modified(|a| {
let key = WebserverListener::Https(addr);
if !a.contains_key(&key) {
match (|| {
Ok::<_, Error>(TlsListener::new(
BindTcp.bind(addr)?,
TunnelCertHandler {
db: https_db.clone(),
crypto_provider: Arc::new(tokio_rustls::rustls::crypto::ring::default_provider()),
},
))
})() {
Ok(l) => {
a.retain(|k, _| *k == WebserverListener::Http);
a.insert(key, l.into_dyn());
true
}
Err(e) => {
tracing::error!("error adding ssl listener: {e}");
tracing::debug!("{e:?}");
false
}
}
} else {
false
}
});
} else {
acceptor_setter.send_if_modified(|a| {
let before = a.len();
a.retain(|k, _| *k == WebserverListener::Http);
a.len() != before
});
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("error updating webserver bind: {e}");
tracing::debug!("{e:?}");
tokio::time::sleep(Duration::from_secs(5)).await;
}
sub.recv().await.is_some()
} {}
})
.into();
let mut shutdown_recv = ctx.shutdown.subscribe();
let sig_handler_ctx = ctx;
let sig_handler: NonDetachingJoinHandle<()> = tokio::spawn(async move {
use tokio::signal::unix::SignalKind;
futures::future::select_all(
[
SignalKind::interrupt(),
SignalKind::quit(),
SignalKind::terminate(),
]
.iter()
.map(|s| {
async move {
signal(*s)
.unwrap_or_else(|_| panic!("register {:?} handler", s))
.recv()
.await
}
.boxed()
}),
)
.await;
sig_handler_ctx
.shutdown
.send(())
.map_err(|_| ())
.expect("send shutdown signal");
})
.into();
shutdown_recv
.recv()
.await
.with_kind(crate::ErrorKind::Unknown)?;
sig_handler.wait_for_abort().await.with_kind(ErrorKind::Unknown)?;
https_thread.wait_for_abort().await.with_kind(ErrorKind::Unknown)?;
Ok::<_, Error>(server)
}
.await?;
server.shutdown().await;
Ok(())
}
pub fn main(args: impl IntoIterator<Item = OsString>) {
LOGGER.enable();
let config = TunnelConfig::parse_from(args).load().unwrap();
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("failed to initialize runtime");
rt.block_on(inner_main(&config))
};
match res {
Ok(()) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
}
}
pub fn cli(args: impl IntoIterator<Item = OsString>) {
LOGGER.enable();
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::tunnel::api::tunnel_api(),
)
.run(args)
{
match e.data {
Some(serde_json::Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(serde_json::Value::Object(o)) => {
if let Some(serde_json::Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(serde_json::Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
}
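The HTTPS listener management above leans on `send_if_modified` returning `true` only when the acceptor map actually changed, so watchers are re-notified exclusively on real changes (see the `retain` + length comparison in the teardown branch). A standalone, std-only sketch of that retain-and-report contract — names here are hypothetical, not the project's API:

```rust
use std::collections::BTreeMap;

/// Drop every entry whose key fails `pred`, and report whether the map
/// changed -- the same contract a `send_if_modified` closure must honor
/// (return `true` only when observers should be re-notified).
fn retain_report<K: Ord, V>(map: &mut BTreeMap<K, V>, pred: impl Fn(&K) -> bool) -> bool {
    let before = map.len();
    map.retain(|k, _| pred(k));
    map.len() != before
}

fn main() {
    let mut listeners = BTreeMap::from([(80, "http"), (443, "https")]);
    // First sweep removes the HTTPS entry, so it reports a change.
    assert!(retain_report(&mut listeners, |port| *port == 80));
    // Second sweep removes nothing, so no re-notification is needed.
    assert!(!retain_report(&mut listeners, |port| *port == 80));
    assert_eq!(listeners.len(), 1);
}
```

Returning `false` on the no-op path is what keeps the watch channel from waking every subscriber on each database poll.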


@@ -0,0 +1,53 @@
use helpers::Callback;
use itertools::Itertools;
use jsonpath_lib::Compiled;
use models::PackageId;
use serde_json::Value;
use crate::context::RpcContext;
pub struct ConfigHook {
pub path: Compiled,
pub prev: Vec<Value>,
pub callback: Callback,
}
impl RpcContext {
pub async fn add_config_hook(&self, id: PackageId, hook: ConfigHook) {
let mut hooks = self.config_hooks.lock().await;
let prev = hooks.remove(&id).unwrap_or_default();
hooks.insert(
id,
prev.into_iter()
.filter(|h| h.callback.is_listening())
.chain(std::iter::once(hook))
.collect(),
);
}
pub async fn call_config_hooks(&self, id: PackageId, config: &Value) {
let mut hooks = self.config_hooks.lock().await;
let mut prev = hooks.remove(&id).unwrap_or_default();
for hook in &mut prev {
let new = hook
.path
.select(config)
.unwrap_or_default()
.into_iter()
.cloned()
.collect_vec();
if new != hook.prev {
hook.callback
.call(vec![Value::Array(new.clone())])
.unwrap_or_default();
hook.prev = new;
}
}
hooks.insert(
id,
prev.into_iter()
.filter(|h| h.callback.is_listening())
.collect(),
);
}
}
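`call_config_hooks` above is edge-triggered: it re-selects a slice of the config with each hook's JSONPath and invokes the callback only when the selection differs from the previously observed value. A std-only sketch of that change-detection core, with the JSONPath selector replaced by a plain closure — all names hypothetical:

```rust
/// Fire `on_change` only when the selected value differs from the last
/// one observed, then remember the new value -- the edge-triggered
/// behavior each registered hook gets.
struct ChangeHook<T: PartialEq> {
    prev: Option<T>,
    select: Box<dyn Fn(&str) -> T>,
}

impl<T: PartialEq> ChangeHook<T> {
    fn poll(&mut self, config: &str, on_change: impl FnOnce(&T)) {
        let new = (self.select)(config);
        if self.prev.as_ref() != Some(&new) {
            on_change(&new);
            self.prev = Some(new);
        }
    }
}

fn main() {
    // Selector: just the length of the "config" string.
    let mut hook = ChangeHook { prev: None, select: Box::new(|cfg: &str| cfg.len()) };
    let mut fired = 0;
    hook.poll("abc", |_| fired += 1); // first observation: fires
    hook.poll("xyz", |_| fired += 1); // same selected value: silent
    hook.poll("abcd", |_| fired += 1); // changed: fires again
    assert_eq!(fired, 2);
}
```

The real code additionally drops hooks whose callbacks are no longer listening, which is why the map is rebuilt with `filter(|h| h.callback.is_listening())` on every pass.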


@@ -1,27 +1,33 @@
use std::fs::File;
use std::io::BufReader;
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use cookie_store::{CookieStore, RawCookie};
use cookie::{Cookie, Expiration, SameSite};
use cookie_store::CookieStore;
use http::HeaderMap;
use imbl_value::InternedString;
use josekit::jwk::Jwk;
use once_cell::sync::OnceCell;
use reqwest::Proxy;
use reqwest_cookie_store::CookieStoreMutex;
use rpc_toolkit::reqwest::{Client, Url};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{call_remote_http, CallRemote, Context, Empty};
use rpc_toolkit::{CallRemote, Context, Empty};
use tokio::net::TcpStream;
use tokio::runtime::Runtime;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::context::config::{local_config_path, ClientConfig};
use crate::context::config::{ClientConfig, local_config_path};
use crate::context::{DiagnosticContext, InitContext, InstallContext, RpcContext, SetupContext};
use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
use crate::developer::{OS_DEVELOPER_KEY_PATH, default_developer_key_path};
use crate::middleware::auth::AuthContext;
use crate::prelude::*;
use crate::rpc_continuations::Guid;
use crate::util::io::read_file_to_string;
#[derive(Debug)]
pub struct CliContextSeed {
@@ -29,6 +35,10 @@ pub struct CliContextSeed {
pub base_url: Url,
pub rpc_url: Url,
pub registry_url: Option<Url>,
pub registry_hostname: Option<InternedString>,
pub registry_listen: Option<SocketAddr>,
pub tunnel_addr: Option<SocketAddr>,
pub tunnel_listen: Option<SocketAddr>,
pub client: Client,
pub cookie_store: Arc<CookieStoreMutex>,
pub cookie_path: PathBuf,
@@ -37,6 +47,11 @@ pub struct CliContextSeed {
}
impl Drop for CliContextSeed {
fn drop(&mut self) {
if let Some(rt) = self.runtime.take() {
if let Ok(rt) = Arc::try_unwrap(rt) {
rt.shutdown_background();
}
}
let tmp = format!("{}.tmp", self.cookie_path.display());
let parent_dir = self.cookie_path.parent().unwrap_or(Path::new("/"));
if !parent_dir.exists() {
@@ -50,9 +65,8 @@ impl Drop for CliContextSeed {
true,
)
.unwrap();
let mut store = self.cookie_store.lock().unwrap();
store.remove("localhost", "", "local");
store.save_json(&mut *writer).unwrap();
let store = self.cookie_store.lock().unwrap();
cookie_store::serde::json::save(&store, &mut *writer).unwrap();
writer.sync_all().unwrap();
std::fs::rename(tmp, &self.cookie_path).unwrap();
}
@@ -80,26 +94,14 @@ impl CliContext {
.unwrap_or(Path::new("/"))
.join(".cookies.json")
});
let cookie_store = Arc::new(CookieStoreMutex::new({
let mut store = if cookie_path.exists() {
CookieStore::load_json(BufReader::new(
File::open(&cookie_path)
.with_ctx(|_| (ErrorKind::Filesystem, cookie_path.display()))?,
))
.map_err(|e| eyre!("{}", e))
.with_kind(crate::ErrorKind::Deserialization)?
} else {
CookieStore::default()
};
if let Ok(local) = std::fs::read_to_string(LOCAL_AUTH_COOKIE_PATH) {
store
.insert_raw(
&RawCookie::new("local", local),
&"http://localhost".parse()?,
)
.with_kind(crate::ErrorKind::Network)?;
}
store
let cookie_store = Arc::new(CookieStoreMutex::new(if cookie_path.exists() {
cookie_store::serde::json::load(BufReader::new(
File::open(&cookie_path)
.with_ctx(|_| (ErrorKind::Filesystem, cookie_path.display()))?,
))
.unwrap_or_default()
} else {
CookieStore::default()
}));
Ok(CliContext(Arc::new(CliContextSeed {
@@ -124,9 +126,17 @@ impl CliContext {
Ok::<_, Error>(registry)
})
.transpose()?,
registry_hostname: config.registry_hostname,
registry_listen: config.registry_listen,
tunnel_addr: config.tunnel,
tunnel_listen: config.tunnel_listen,
client: {
let mut builder = Client::builder().cookie_provider(cookie_store.clone());
if let Some(proxy) = config.proxy {
if let Some(proxy) = config.proxy.or_else(|| {
config
.socks_listen
.and_then(|socks| format!("socks5h://{socks}").parse::<Url>().log_err())
}) {
builder =
builder.proxy(Proxy::all(proxy).with_kind(crate::ErrorKind::ParseUrl)?)
}
@@ -134,14 +144,9 @@ impl CliContext {
},
cookie_store,
cookie_path,
developer_key_path: config.developer_key_path.unwrap_or_else(|| {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join("developer.key.pem")
}),
developer_key_path: config
.developer_key_path
.unwrap_or_else(default_developer_key_path),
developer_key: OnceCell::new(),
})))
}
@@ -150,20 +155,26 @@ impl CliContext {
#[instrument(skip_all)]
pub fn developer_key(&self) -> Result<&ed25519_dalek::SigningKey, Error> {
self.developer_key.get_or_try_init(|| {
if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-cli init` before running this command."), crate::ErrorKind::Uninitialized));
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(&self.developer_key_path)?,
)
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
for path in [Path::new(OS_DEVELOPER_KEY_PATH), &self.developer_key_path] {
if !path.exists() {
continue;
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(path)?,
)
})?;
Ok(secret.into())
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
)
})?;
return Ok(secret.into())
}
Err(Error::new(
eyre!("Developer Key does not exist! Please run `start-cli init-key` before running this command."),
crate::ErrorKind::Uninitialized
))
})
}
@@ -180,7 +191,7 @@ impl CliContext {
eyre!("Cannot parse scheme from base URL"),
crate::ErrorKind::ParseUrl,
)
.into())
.into());
}
};
url.set_scheme(ws_scheme)
@@ -223,23 +234,28 @@ impl CliContext {
&self,
method: &str,
params: Value,
) -> Result<Value, RpcError>
) -> Result<Value, Error>
where
Self: CallRemote<RemoteContext>,
{
<Self as CallRemote<RemoteContext, Empty>>::call_remote(&self, method, params, Empty {})
.await
.map_err(Error::from)
.with_ctx(|e| (e.kind, method))
}
pub async fn call_remote_with<RemoteContext, T>(
&self,
method: &str,
params: Value,
extra: T,
) -> Result<Value, RpcError>
) -> Result<Value, Error>
where
Self: CallRemote<RemoteContext, T>,
{
<Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
<Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra)
.await
.map_err(Error::from)
.with_ctx(|e| (e.kind, method))
}
}
impl AsRef<Jwk> for CliContext {
@@ -269,40 +285,88 @@ impl Context for CliContext {
)
}
}
impl AsRef<Client> for CliContext {
fn as_ref(&self) -> &Client {
&self.client
}
}
impl CallRemote<RpcContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
if let Ok(local) = read_file_to_string(RpcContext::LOCAL_AUTH_COOKIE_PATH).await {
self.cookie_store
.lock()
.unwrap()
.insert_raw(
&Cookie::build(("local", local))
.domain("localhost")
.expires(Expiration::Session)
.same_site(SameSite::Strict)
.build(),
&"http://localhost".parse()?,
)
.with_kind(crate::ErrorKind::Network)?;
}
crate::middleware::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<DiagnosticContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<InitContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<SetupContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
impl CallRemote<InstallContext> for CliContext {
async fn call_remote(&self, method: &str, params: Value, _: Empty) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
crate::middleware::signature::call_remote(
self,
self.rpc_url.clone(),
HeaderMap::new(),
self.rpc_url.host_str(),
method,
params,
)
.await
}
}
#[test]
fn test() {
let ctx = CliContext::init(ClientConfig::default()).unwrap();
ctx.runtime().unwrap().block_on(async {
reqwest::Client::new()
.get("http://example.com")
.send()
.await
.unwrap();
});
}
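`developer_key` above now tries an OS-level key path before falling back to the user's configured path, taking the first file that exists and erroring only when neither does. A minimal std-only sketch of that lookup order — the helper name and paths are hypothetical:

```rust
use std::path::{Path, PathBuf};

/// Return the first candidate path that exists on disk, mirroring the
/// "system key first, then user key" fallback in `developer_key`.
fn first_existing(candidates: &[&Path]) -> Option<PathBuf> {
    candidates.iter().find(|p| p.exists()).map(|p| p.to_path_buf())
}

fn main() {
    // The bogus path is skipped; "/" always exists on Unix.
    let found = first_existing(&[Path::new("/no/such/key.pem"), Path::new("/")]);
    assert_eq!(found, Some(PathBuf::from("/")));
    assert!(first_existing(&[Path::new("/no/such/key.pem")]).is_none());
}
```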


@@ -3,18 +3,16 @@ use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use clap::Parser;
use imbl_value::InternedString;
use reqwest::Url;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use sqlx::postgres::PgConnectOptions;
use sqlx::PgPool;
use crate::MAIN_DATA;
use crate::disk::OsPartitionInfo;
use crate::init::init_postgres;
use crate::prelude::*;
use crate::util::serde::IoFormat;
use crate::version::VersionT;
use crate::MAIN_DATA;
pub const DEVICE_CONFIG_PATH: &str = "/media/startos/config/config.yaml";
pub const CONFIG_PATH: &str = "/etc/startos/config.yaml";
@@ -58,7 +56,6 @@ pub trait ContextConfig: DeserializeOwned + Default {
#[derive(Debug, Default, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
#[command(name = "start-cli")]
#[command(version = crate::version::Current::default().semver().to_string())]
pub struct ClientConfig {
#[arg(short = 'c', long)]
@@ -67,8 +64,18 @@ pub struct ClientConfig {
pub host: Option<Url>,
#[arg(short = 'r', long)]
pub registry: Option<Url>,
#[arg(long)]
pub registry_hostname: Option<InternedString>,
#[arg(skip)]
pub registry_listen: Option<SocketAddr>,
#[arg(short = 't', long)]
pub tunnel: Option<SocketAddr>,
#[arg(skip)]
pub tunnel_listen: Option<SocketAddr>,
#[arg(short = 'p', long)]
pub proxy: Option<Url>,
#[arg(skip)]
pub socks_listen: Option<SocketAddr>,
#[arg(long)]
pub cookie_path: Option<PathBuf>,
#[arg(long)]
@@ -81,6 +88,8 @@ impl ContextConfig for ClientConfig {
fn merge_with(&mut self, other: Self) {
self.host = self.host.take().or(other.host);
self.registry = self.registry.take().or(other.registry);
self.registry_hostname = self.registry_hostname.take().or(other.registry_hostname);
self.tunnel = self.tunnel.take().or(other.tunnel);
self.proxy = self.proxy.take().or(other.proxy);
self.cookie_path = self.cookie_path.take().or(other.cookie_path);
self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
@@ -107,15 +116,15 @@ pub struct ServerConfig {
#[arg(skip)]
pub os_partitions: Option<OsPartitionInfo>,
#[arg(long)]
pub tor_control: Option<SocketAddr>,
#[arg(long)]
pub tor_socks: Option<SocketAddr>,
pub socks_listen: Option<SocketAddr>,
#[arg(long)]
pub revision_cache_size: Option<usize>,
#[arg(long)]
pub disable_encryption: Option<bool>,
#[arg(long)]
pub multi_arch_s9pks: Option<bool>,
#[arg(long)]
pub developer_key_path: Option<PathBuf>,
}
impl ContextConfig for ServerConfig {
fn next(&mut self) -> Option<PathBuf> {
@@ -124,14 +133,14 @@ impl ContextConfig for ServerConfig {
fn merge_with(&mut self, other: Self) {
self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
self.os_partitions = self.os_partitions.take().or(other.os_partitions);
self.tor_control = self.tor_control.take().or(other.tor_control);
self.tor_socks = self.tor_socks.take().or(other.tor_socks);
self.socks_listen = self.socks_listen.take().or(other.socks_listen);
self.revision_cache_size = self
.revision_cache_size
.take()
.or(other.revision_cache_size);
self.disable_encryption = self.disable_encryption.take().or(other.disable_encryption);
self.multi_arch_s9pks = self.multi_arch_s9pks.take().or(other.multi_arch_s9pks);
self.developer_key_path = self.developer_key_path.take().or(other.developer_key_path);
}
}
@@ -151,16 +160,4 @@ impl ServerConfig {
Ok(db)
}
#[instrument(skip_all)]
pub async fn secret_store(&self) -> Result<PgPool, Error> {
init_postgres("/media/startos/data").await?;
let secret_store =
PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.await?;
sqlx::migrate!()
.run(&secret_store)
.await
.with_kind(crate::ErrorKind::Database)?;
Ok(secret_store)
}
}
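The `merge_with` implementations above layer configuration sources: a field already set on `self` (e.g. from the command line) wins over the value from the next config file in the chain, via `self.field.take().or(other.field)` per field. A standalone sketch of the pattern on a hypothetical two-field struct:

```rust
#[derive(Debug, PartialEq)]
struct Cfg {
    host: Option<String>,
    port: Option<u16>,
}

impl Cfg {
    /// Keep `self`'s value when present, otherwise adopt `other`'s --
    /// the same per-field merge `ClientConfig::merge_with` uses.
    fn merge_with(&mut self, other: Cfg) {
        self.host = self.host.take().or(other.host);
        self.port = self.port.take().or(other.port);
    }
}

fn main() {
    let mut cli = Cfg { host: Some("a.example".into()), port: None };
    cli.merge_with(Cfg { host: Some("b.example".into()), port: Some(80) });
    assert_eq!(cli.host.as_deref(), Some("a.example")); // earlier source wins
    assert_eq!(cli.port, Some(80)); // gap filled from the later source
}
```

Because each field is merged independently, a later file can fill in fields the CLI left unset without ever overriding explicit CLI flags.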


@@ -1,15 +1,15 @@
use std::ops::Deref;
use std::sync::Arc;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::Context;
use rpc_toolkit::yajrc::RpcError;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::Error;
use crate::context::config::ServerConfig;
use crate::rpc_continuations::RpcContinuations;
use crate::shutdown::Shutdown;
use crate::Error;
pub struct DiagnosticContextSeed {
pub shutdown: Sender<Shutdown>,

View File

@@ -6,10 +6,10 @@ use tokio::sync::broadcast::Sender;
use tokio::sync::watch;
use tracing::instrument;
use crate::Error;
use crate::context::config::ServerConfig;
use crate::progress::FullProgressTracker;
use crate::rpc_continuations::RpcContinuations;
use crate::Error;
pub struct InitContextSeed {
pub config: ServerConfig,
@@ -25,10 +25,12 @@ impl InitContext {
#[instrument(skip_all)]
pub async fn init(cfg: &ServerConfig) -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let mut progress = FullProgressTracker::new();
progress.enable_logging(true);
Ok(Self(Arc::new(InitContextSeed {
config: cfg.clone(),
error: watch::channel(None).0,
progress: FullProgressTracker::new(),
progress,
shutdown,
rpc_continuations: RpcContinuations::new(),
})))

View File

@@ -5,9 +5,9 @@ use rpc_toolkit::Context;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::Error;
use crate::net::utils::find_eth_iface;
use crate::rpc_continuations::RpcContinuations;
use crate::Error;
pub struct InstallContextSeed {
pub ethernet_interface: String,

View File

@@ -1,9 +1,10 @@
use std::collections::{BTreeMap, BTreeSet};
use std::ffi::OsStr;
use std::future::Future;
use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};
use std::ops::Deref;
use std::sync::atomic::{AtomicBool, Ordering};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;
use chrono::{TimeDelta, Utc};
@@ -16,31 +17,37 @@ use models::{ActionId, PackageId};
use reqwest::{Client, Proxy};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{CallRemote, Context, Empty};
use tokio::sync::{broadcast, watch, Mutex, RwLock};
use tokio::sync::{RwLock, broadcast, oneshot, watch};
use tokio::time::Instant;
use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::DATA_DIR;
use crate::account::AccountInfo;
use crate::auth::Sessions;
use crate::context::config::ServerConfig;
use crate::db::model::Database;
use crate::db::model::package::TaskSeverity;
use crate::disk::OsPartitionInfo;
use crate::init::{check_time_is_synchronized, InitResult};
use crate::lxc::{ContainerId, LxcContainer, LxcManager};
use crate::init::{InitResult, check_time_is_synchronized};
use crate::install::PKG_ARCHIVE_DIR;
use crate::lxc::LxcManager;
use crate::net::gateway::UpgradableListener;
use crate::net::net_controller::{NetController, NetService};
use crate::net::socks::DEFAULT_SOCKS_LISTEN;
use crate::net::utils::{find_eth_iface, find_wifi_iface};
use crate::net::web_server::{UpgradableListener, WebServerAcceptorSetter};
use crate::net::web_server::WebServerAcceptorSetter;
use crate::net::wifi::WpaCli;
use crate::prelude::*;
use crate::progress::{FullProgressTracker, PhaseProgressTrackerHandle};
use crate::rpc_continuations::{Guid, OpenAuthedContinuations, RpcContinuations};
use crate::service::action::update_requested_actions;
use crate::service::effects::callbacks::ServiceCallbacks;
use crate::service::ServiceMap;
use crate::service::action::update_tasks;
use crate::service::effects::callbacks::ServiceCallbacks;
use crate::shutdown::Shutdown;
use crate::util::io::delete_file;
use crate::util::lshw::LshwDevice;
use crate::util::sync::SyncMutex;
use crate::util::sync::{SyncMutex, SyncRwLock, Watch};
pub struct RpcContextSeed {
is_closed: AtomicBool,
@@ -51,29 +58,28 @@ pub struct RpcContextSeed {
pub ephemeral_sessions: SyncMutex<Sessions>,
pub db: TypedPatchDb<Database>,
pub sync_db: watch::Sender<u64>,
pub account: RwLock<AccountInfo>,
pub account: SyncRwLock<AccountInfo>,
pub net_controller: Arc<NetController>,
pub os_net_service: NetService,
pub s9pk_arch: Option<&'static str>,
pub services: ServiceMap,
pub metrics_cache: RwLock<Option<crate::system::Metrics>>,
pub cancellable_installs: SyncMutex<BTreeMap<PackageId, oneshot::Sender<()>>>,
pub metrics_cache: Watch<Option<crate::system::Metrics>>,
pub shutdown: broadcast::Sender<Option<Shutdown>>,
pub tor_socks: SocketAddr,
pub lxc_manager: Arc<LxcManager>,
pub open_authed_continuations: OpenAuthedContinuations<Option<InternedString>>,
pub rpc_continuations: RpcContinuations,
pub callbacks: Arc<ServiceCallbacks>,
pub wifi_manager: Option<Arc<RwLock<WpaCli>>>,
pub wifi_manager: Arc<RwLock<Option<WpaCli>>>,
pub current_secret: Arc<Jwk>,
pub client: Client,
pub start_time: Instant,
pub crons: SyncMutex<BTreeMap<Guid, NonDetachingJoinHandle<()>>>,
// #[cfg(feature = "dev")]
pub dev: Dev,
}
pub struct Dev {
pub lxc: Mutex<BTreeMap<ContainerId, LxcContainer>>,
impl Drop for RpcContextSeed {
fn drop(&mut self) {
tracing::info!("RpcContext is dropped");
}
}
pub struct Hardware {
@@ -101,14 +107,16 @@ impl InitRpcContextPhases {
pub struct CleanupInitPhases {
cleanup_sessions: PhaseProgressTrackerHandle,
init_services: PhaseProgressTrackerHandle,
check_requested_actions: PhaseProgressTrackerHandle,
prune_s9pks: PhaseProgressTrackerHandle,
check_tasks: PhaseProgressTrackerHandle,
}
impl CleanupInitPhases {
pub fn new(handle: &FullProgressTracker) -> Self {
Self {
cleanup_sessions: handle.add_phase("Cleaning up sessions".into(), Some(1)),
init_services: handle.add_phase("Initializing services".into(), Some(10)),
check_requested_actions: handle.add_phase("Checking action requests".into(), Some(1)),
prune_s9pks: handle.add_phase("Pruning S9PKs".into(), Some(1)),
check_tasks: handle.add_phase("Checking action requests".into(), Some(1)),
}
}
}
@@ -129,10 +137,7 @@ impl RpcContext {
run_migrations,
}: InitRpcContextPhases,
) -> Result<Self, Error> {
let tor_proxy = config.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
Ipv4Addr::new(127, 0, 0, 1),
9050,
)));
let socks_proxy = config.socks_listen.unwrap_or(DEFAULT_SOCKS_LISTEN);
let (shutdown, _) = tokio::sync::broadcast::channel(1);
load_db.start();
@@ -154,18 +159,9 @@ impl RpcContext {
{
(net_ctrl, os_net_service)
} else {
let net_ctrl = Arc::new(
NetController::init(
db.clone(),
config
.tor_control
.unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
tor_proxy,
&account.hostname,
)
.await?,
);
webserver.try_upgrade(|a| net_ctrl.net_iface.upgrade_listener(a))?;
let net_ctrl =
Arc::new(NetController::init(db.clone(), &account.hostname, socks_proxy).await?);
webserver.try_upgrade(|a| net_ctrl.net_iface.watcher.upgrade_listener(a))?;
let os_net_service = net_ctrl.os_bindings().await?;
(net_ctrl, os_net_service)
};
@@ -173,8 +169,8 @@ impl RpcContext {
tracing::info!("Initialized Net Controller");
let services = ServiceMap::default();
let metrics_cache = RwLock::<Option<crate::system::Metrics>>::new(None);
let tor_proxy_url = format!("socks5h://{tor_proxy}");
let metrics_cache = Watch::<Option<crate::system::Metrics>>::new(None);
let socks_proxy_url = format!("socks5h://{socks_proxy}");
let crons = SyncMutex::new(BTreeMap::new());
@@ -229,7 +225,7 @@ impl RpcContext {
ephemeral_sessions: SyncMutex::new(Sessions::new()),
sync_db: watch::Sender::new(db.sequence().await),
db,
account: RwLock::new(account),
account: SyncRwLock::new(account),
callbacks: net_controller.callbacks.clone(),
net_controller,
os_net_service,
@@ -239,15 +235,13 @@ impl RpcContext {
Some(crate::ARCH)
},
services,
cancellable_installs: SyncMutex::new(BTreeMap::new()),
metrics_cache,
shutdown,
tor_socks: tor_proxy,
lxc_manager: Arc::new(LxcManager::new()),
open_authed_continuations: OpenAuthedContinuations::new(),
rpc_continuations: RpcContinuations::new(),
wifi_manager: wifi_interface
.clone()
.map(|i| Arc::new(RwLock::new(WpaCli::init(i)))),
wifi_manager: Arc::new(RwLock::new(wifi_interface.clone().map(|i| WpaCli::init(i)))),
current_secret: Arc::new(
Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {
tracing::debug!("{:?}", e);
@@ -259,21 +253,11 @@ impl RpcContext {
})?,
),
client: Client::builder()
.proxy(Proxy::custom(move |url| {
if url.host_str().map_or(false, |h| h.ends_with(".onion")) {
Some(tor_proxy_url.clone())
} else {
None
}
}))
.proxy(Proxy::all(socks_proxy_url)?)
.build()
.with_kind(crate::ErrorKind::ParseUrl)?,
start_time: Instant::now(),
crons,
// #[cfg(feature = "dev")]
dev: Dev {
lxc: Mutex::new(BTreeMap::new()),
},
});
let res = Self(seed.clone());
@@ -290,7 +274,7 @@ impl RpcContext {
self.crons.mutate(|c| std::mem::take(c));
self.services.shutdown_all().await?;
self.is_closed.store(true, Ordering::SeqCst);
tracing::info!("RPC Context is shutdown");
tracing::info!("RpcContext is shutdown");
Ok(())
}
@@ -306,8 +290,9 @@ impl RpcContext {
&self,
CleanupInitPhases {
mut cleanup_sessions,
init_services,
mut check_requested_actions,
mut init_services,
mut prune_s9pks,
mut check_tasks,
}: CleanupInitPhases,
) -> Result<(), Error> {
cleanup_sessions.start();
@@ -365,39 +350,63 @@ impl RpcContext {
         });
         cleanup_sessions.complete();
-        self.services.init(&self, init_services).await?;
-        tracing::info!("Initialized Services");
+        init_services.start();
+        self.services.init(&self).await?;
+        init_services.complete();
-        // TODO
-        check_requested_actions.start();
+        prune_s9pks.start();
         let peek = self.db.peek().await;
+        let keep = peek
+            .as_public()
+            .as_package_data()
+            .as_entries()?
+            .into_iter()
+            .map(|(_, pde)| pde.as_s9pk().de())
+            .collect::<Result<BTreeSet<PathBuf>, Error>>()?;
+        let installed_dir = &Path::new(DATA_DIR).join(PKG_ARCHIVE_DIR).join("installed");
+        if tokio::fs::metadata(&installed_dir).await.is_ok() {
+            let mut dir = tokio::fs::read_dir(&installed_dir)
+                .await
+                .with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("dir {installed_dir:?}")))?;
+            while let Some(file) = dir
+                .next_entry()
+                .await
+                .with_ctx(|_| (ErrorKind::Filesystem, lazy_format!("dir {installed_dir:?}")))?
+            {
+                let path = file.path();
+                if path.extension() == Some(OsStr::new("s9pk")) && !keep.contains(&path) {
+                    delete_file(path).await?;
+                }
+            }
+        }
+        prune_s9pks.complete();
+        check_tasks.start();
         let mut action_input: OrdMap<PackageId, BTreeMap<ActionId, Value>> = OrdMap::new();
-        let requested_actions: BTreeSet<_> = peek
+        let tasks: BTreeSet<_> = peek
             .as_public()
             .as_package_data()
             .as_entries()?
             .into_iter()
             .map(|(_, pde)| {
-                Ok(pde
-                    .as_requested_actions()
-                    .as_entries()?
-                    .into_iter()
-                    .map(|(_, r)| {
-                        Ok::<_, Error>((
-                            r.as_request().as_package_id().de()?,
-                            r.as_request().as_action_id().de()?,
-                        ))
-                    }))
+                Ok(pde.as_tasks().as_entries()?.into_iter().map(|(_, r)| {
+                    Ok::<_, Error>((
+                        r.as_task().as_package_id().de()?,
+                        r.as_task().as_action_id().de()?,
+                    ))
+                }))
             })
             .flatten_ok()
             .map(|a| a.and_then(|a| a))
             .try_collect()?;
         let procedure_id = Guid::new();
-        for (package_id, action_id) in requested_actions {
+        for (package_id, action_id) in tasks {
             if let Some(service) = self.services.get(&package_id).await.as_ref() {
                 if let Some(input) = service
                     .get_action_input(procedure_id.clone(), action_id.clone())
-                    .await?
+                    .await
+                    .log_err()
+                    .flatten()
                     .and_then(|i| i.value)
                 {
                     action_input
@@ -407,28 +416,47 @@ impl RpcContext {
                 }
             }
         }
-        self.db
-            .mutate(|db| {
-                for (package_id, action_input) in &action_input {
-                    for (action_id, input) in action_input {
-                        for (_, pde) in db.as_public_mut().as_package_data_mut().as_entries_mut()? {
-                            pde.as_requested_actions_mut().mutate(|requested_actions| {
-                                Ok(update_requested_actions(
-                                    requested_actions,
-                                    package_id,
-                                    action_id,
-                                    input,
-                                    false,
-                                ))
-                            })?;
+        for id in
+            self.db
+                .mutate::<Vec<PackageId>>(|db| {
+                    for (package_id, action_input) in &action_input {
+                        for (action_id, input) in action_input {
+                            for (_, pde) in
+                                db.as_public_mut().as_package_data_mut().as_entries_mut()?
+                            {
+                                pde.as_tasks_mut().mutate(|tasks| {
+                                    Ok(update_tasks(tasks, package_id, action_id, input, false))
+                                })?;
+                            }
                         }
                     }
-                }
-                Ok(())
-            })
-            .await
-            .result?;
-        check_requested_actions.complete();
+                    db.as_public()
+                        .as_package_data()
+                        .as_entries()?
+                        .into_iter()
+                        .filter_map(|(id, pkg)| {
+                            (|| {
+                                if pkg.as_tasks().de()?.into_iter().any(|(_, t)| {
+                                    t.active && t.task.severity == TaskSeverity::Critical
+                                }) {
+                                    Ok(Some(id))
+                                } else {
+                                    Ok(None)
+                                }
+                            })()
+                            .transpose()
+                        })
+                        .collect()
+                })
+                .await
+                .result?
+        {
+            let svc = self.services.get(&id).await;
+            if let Some(svc) = &*svc {
+                svc.stop(procedure_id.clone(), false).await?;
+            }
+        }
+        check_tasks.complete();
         Ok(())
     }
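The `filter_map(|(id, pkg)| (|| { ... })().transpose())` construction in the hunk above is a common idiom: an immediately-invoked closure lets `?` be used inside `filter_map`, and `transpose` turns the closure's `Result<Option<T>, E>` into the `Option<Result<T, E>>` that `filter_map`/`collect` need to both filter and propagate errors. A std-only sketch with illustrative names:

```rust
/// Collect the ids whose fallible predicate returns true, propagating any
/// error, mirroring the `(|| { ... })().transpose()` idiom in the diff.
fn failing_ids(entries: &[(u32, Result<bool, String>)]) -> Result<Vec<u32>, String> {
    entries
        .iter()
        .filter_map(|(id, check)| {
            // Immediately-invoked closure: gives us `?` inside filter_map.
            (|| {
                if *check.as_ref().map_err(Clone::clone)? {
                    Ok(Some(*id))
                } else {
                    Ok(None)
                }
            })()
            // Result<Option<u32>, String> -> Option<Result<u32, String>>
            .transpose()
        })
        // Option<Result<..>> items collect into Result<Vec<..>, ..>,
        // stopping at the first error.
        .collect()
}
```

Collecting into `Result<Vec<_>, _>` short-circuits on the first `Err`, which is exactly what the diff relies on to abort the mutate closure.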
@@ -455,6 +483,11 @@ impl RpcContext {
         <Self as CallRemote<RemoteContext, T>>::call_remote(&self, method, params, extra).await
     }
 }
+impl AsRef<Client> for RpcContext {
+    fn as_ref(&self) -> &Client {
+        &self.client
+    }
+}
 impl AsRef<Jwk> for RpcContext {
     fn as_ref(&self) -> &Jwk {
         &CURRENT_SECRET

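The added `impl AsRef<Client> for RpcContext` above follows a familiar pattern: helpers bound on `AsRef<T>` can borrow exactly the piece of context they need without taking the whole context type. A std-only sketch using simplified stand-ins (these are not the real StartOS `Client`/`RpcContext` types):

```rust
/// Simplified stand-in for an HTTP client held by the context.
struct Client {
    user_agent: String,
}

/// Simplified stand-in for the RPC context in the diff.
struct RpcContext {
    client: Client,
}

// Expose the client without handing out the whole context, mirroring
// the `impl AsRef<Client> for RpcContext` added in the diff.
impl AsRef<Client> for RpcContext {
    fn as_ref(&self) -> &Client {
        &self.client
    }
}

/// A helper that only needs a `Client` accepts any context that can
/// lend one via `AsRef`.
fn user_agent_of(ctx: &impl AsRef<Client>) -> &str {
    &ctx.as_ref().user_agent
}
```

Stacking several such impls (`AsRef<Client>`, `AsRef<Jwk>`, …) on one context type lets each generic helper state precisely which capability it depends on.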

@@ -10,23 +10,25 @@ use josekit::jwk::Jwk;
 use patch_db::PatchDb;
 use rpc_toolkit::Context;
 use serde::{Deserialize, Serialize};
-use tokio::sync::broadcast::Sender;
 use tokio::sync::OnceCell;
+use tokio::sync::broadcast::Sender;
 use tracing::instrument;
 use ts_rs::TS;

+use crate::MAIN_DATA;
 use crate::account::AccountInfo;
+use crate::context::config::ServerConfig;
 use crate::context::RpcContext;
-use crate::context::config::ServerConfig;
 use crate::disk::OsPartitionInfo;
 use crate::hostname::Hostname;
-use crate::net::web_server::{UpgradableListener, WebServer, WebServerAcceptorSetter};
+use crate::net::gateway::UpgradableListener;
+use crate::net::web_server::{WebServer, WebServerAcceptorSetter};
 use crate::prelude::*;
 use crate::progress::FullProgressTracker;
 use crate::rpc_continuations::{Guid, RpcContinuation, RpcContinuations};
 use crate::setup::SetupProgress;
 use crate::shutdown::Shutdown;
 use crate::util::net::WebSocketExt;
-use crate::MAIN_DATA;

 lazy_static::lazy_static! {
     pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {
@@ -38,7 +40,6 @@ lazy_static::lazy_static! {
 #[derive(Debug, Clone, Deserialize, Serialize, TS)]
 #[serde(rename_all = "camelCase")]
 #[ts(export)]
 pub struct SetupResult {
     pub tor_addresses: Vec<String>,
     #[ts(type = "string")]
@@ -54,7 +55,7 @@ impl TryFrom<&AccountInfo> for SetupResult {
             tor_addresses: value
                 .tor_keys
                 .iter()
-                .map(|tor_key| format!("https://{}", tor_key.public().get_onion_address()))
+                .map(|tor_key| format!("https://{}", tor_key.onion_address()))
                 .collect(),
             hostname: value.hostname.clone(),
             lan_address: value.hostname.lan_address(),
@@ -71,7 +72,8 @@ pub struct SetupContextSeed {
     pub progress: FullProgressTracker,
     pub task: OnceCell<NonDetachingJoinHandle<()>>,
     pub result: OnceCell<Result<(SetupResult, RpcContext), Error>>,
-    pub shutdown: Sender<()>,
+    pub disk_guid: OnceCell<Arc<String>>,
+    pub shutdown: Sender<Option<Shutdown>>,
     pub rpc_continuations: RpcContinuations,
 }
@@ -84,6 +86,8 @@ impl SetupContext {
         config: &ServerConfig,
     ) -> Result<Self, Error> {
         let (shutdown, _) = tokio::sync::broadcast::channel(1);
+        let mut progress = FullProgressTracker::new();
+        progress.enable_logging(true);
         Ok(Self(Arc::new(SetupContextSeed {
             webserver: webserver.acceptor_setter(),
             config: config.clone(),
@@ -94,9 +98,10 @@ impl SetupContext {
                 )
             })?,
             disable_encryption: config.disable_encryption.unwrap_or(false),
-            progress: FullProgressTracker::new(),
+            progress,
             task: OnceCell::new(),
             result: OnceCell::new(),
+            disk_guid: OnceCell::new(),
             shutdown,
             rpc_continuations: RpcContinuations::new(),
         })))
@@ -172,7 +177,8 @@ impl SetupContext {
                 if let Some(progress) = progress {
                     ws.send(ws::Message::Text(
                         serde_json::to_string(&progress)
-                            .with_kind(ErrorKind::Serialization)?,
+                            .with_kind(ErrorKind::Serialization)?
+                            .into(),
                     ))
                     .await
                     .with_kind(ErrorKind::Network)?;

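The final hunk's only change is the added `.into()` before sending the serialized progress: newer tungstenite releases wrap `Message::Text` payloads in a dedicated UTF-8 bytes type rather than a plain `String`, so the `String` produced by `serde_json::to_string` must be converted. A std-only sketch of the shape of that change, using hypothetical stand-in types (not the real tungstenite API):

```rust
/// Hypothetical stand-in for tungstenite's UTF-8 payload type.
#[derive(Debug, PartialEq)]
struct Utf8Payload(String);

// The From impl is what makes the diff's bare `.into()` compile.
impl From<String> for Utf8Payload {
    fn from(s: String) -> Self {
        Utf8Payload(s)
    }
}

/// Hypothetical stand-in for the websocket message enum.
#[derive(Debug, PartialEq)]
enum Message {
    Text(Utf8Payload),
}

/// Build a text frame from an owned JSON string; the `.into()` mirrors
/// the conversion added in the diff.
fn text_frame(json: String) -> Message {
    Message::Text(json.into())
}
```

Because the conversion is a `From` impl on the payload type, call sites only grow a `.into()` rather than naming the new type explicitly.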