Compare commits


255 Commits

Author SHA1 Message Date
Aiden McClelland
0c3d0dd525 disable concurrency and delete tmpdir before retry (#1846)
* disable concurrency and delete tmpdir before retry

* undo retry

* really limit usage of pgloader

* configurable

* no migrate notifications
2022-10-04 15:14:15 -06:00
Mariusz Kogen
1388632562 📸 screenshots update (#1853)
* images update

* proper file rename
2022-10-04 09:36:01 -06:00
Matt Hill
771ecaf3e5 update patch db (#1852) 2022-10-03 12:17:12 -06:00
Mariusz Kogen
2000a8f3ed 💫 Rebranding to embassyOS (#1851)
* name change to embassyOS

* filename change

* motd ASCII update

* some leftovers
2022-10-03 12:13:48 -06:00
Matt Hill
719cd5512c show connection bar right away (#1849) 2022-10-03 09:59:49 -06:00
Aiden McClelland
afb4536247 retry pgloader up to 5x (#1845) 2022-09-29 13:45:28 -06:00
Aiden McClelland
71b19e6582 handle multiple image tags having the same hash and increase timeout (#1844) 2022-09-29 09:52:04 -06:00
Lucy C
f37cfda365 update ts matches to fix properties ordering bug (#1843) 2022-09-29 08:42:26 -06:00
Aiden McClelland
f63a841cb5 reduce patch-db log level to warn (#1840) 2022-09-28 10:12:00 -06:00
Aiden McClelland
d469e802ad update patch db and enable logging (#1837) 2022-09-27 15:36:42 -06:00
Lucy C
1702c07481 Seed patchdb UI data (#1835)
* adjust types for patchdb ui data and create seed file

* feat: For init and the migration use defaults

* fix update path

* update build for ui seed file

* fix accidential revert

* chore: Convert to do during the init

* chore: Update the commit message

Co-authored-by: BluJ <mogulslayer@gmail.com>
2022-09-26 17:36:37 -06:00
Aiden McClelland
31c5aebe90 play song during update (#1832)
* play song

* change song
2022-09-26 16:52:12 -06:00
Matt Hill
8cf84a6cf2 give name to logs file (#1833)
* give name to logs file
2022-09-26 15:40:38 -06:00
Aiden McClelland
18336e4d0a update patch-db (#1831) 2022-09-26 14:45:16 -06:00
Thomas Moerkerken
abf297d095 Bugfix/correctly package backend job (#1826)
* use makefile to create backend tar in pipeline

* correct for multiple architecture builds

* move duplication to shared workflow
2022-09-26 13:59:14 -06:00
Matt Hill
061a350cc6 Multiple (#1823)
* display preference for auto check and better messaging on properties page

* improve logs by a lot

* clean up

* fix searchbar and url in marketplace
2022-09-23 14:51:28 -06:00
Aiden McClelland
c85491cc71 ignore file not found error for delete (#1822) 2022-09-23 10:08:33 -06:00
Aiden McClelland
8b794c2299 perform system rebuild after updating (#1820)
* perform system rebuild after updating

* cleanup
2022-09-22 12:05:25 -06:00
Matt Hill
11b11375fd update license (#1819)
* update license

* update date
2022-09-22 12:03:44 -06:00
Aiden McClelland
c728f1a694 restructure initialization (#1816)
* reorder enabling of systemd-resolved

* set dns at end

* don't disable interfaces

* let networkmanager manage ifupdown

* restructure initialization

* use pigz when available

* cleanup

* fetch key before adding registry

* fix build

* update patch-db

* fix build

* fix build

* wait for network reinit

* add dynamic wait for up to 60s for network to reinit
2022-09-22 11:40:36 -06:00
Lucy C
28f9fa35e5 Fix/encryption (#1811)
* change encryption to use pubkey and only encrypt specific fields

* adjust script names for convenience

* remove unused fn

* fix build script name

* augment mocks

* remove log

* fix prod build

* feat: backend keys

* fix: Using the correct name with the public key

* chore: Fix the type for the encrypted

* chore: Add some tracing

* remove aes-js from package lock file

Co-authored-by: BluJ <mogulslayer@gmail.com>
2022-09-21 14:03:05 -06:00
Lucy C
f8ea2ebf62 add descriptions to marketplace list page (#1812)
* add descriptions to marketplace list page

* clean up unused styling

* rip descriptions from registry marketplace, use binary choice custom default and alternative messages

* cleanup

* fix selected type and remove unneeded conditional

* conditional color

* cleanup

* better comparison of marketplace url duplicates

* add logic to handle marketplace description display based on url

* decrease font size

* abstract helper fn to get url hostname; add error toast when adding duplicate marketplace

* move helper function to more appropriate file location

* rework marketplace list and don't worry about patch db firing before bootstrapped

* remove aes-js

* reinstall aes just to please things for now

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-09-21 12:41:19 -06:00
Mariusz Kogen
7575e8c1de 🏦 Start as embassy hostname from the beginning (#1814)
* 🏦 start as embassy hostname from the beginning

This limits the number of hostname changes during the initial Embassy startup from 3 to 2.

* Hostname change no longer needed
2022-09-21 10:56:52 -06:00
Mariusz Kogen
395db5f1cf 🧅 replace instead of adding tor repository (#1813)
I also removed the src line because we won't use the tor source repository anytime soon, if ever.
2022-09-21 10:56:01 -06:00
J M
ee1acda7aa fix: Minor fix that matt wanted (#1808)
* fix: Minor fix that matt wanted

* Update backend/src/hostname.rs
2022-09-19 13:05:38 -06:00
Matt Hill
1150f4c438 clean up code and logs (#1809)
abstract base64 functions and clean up console logs
2022-09-19 12:59:41 -06:00
Matt Hill
f04b90d9c6 fix marketplace switching (#1810) 2022-09-19 12:59:29 -06:00
Lucy C
53463077df Bugfix/marketplace add (#1805)
* remove falsey check when getting marketplaces, as no alts could exist

* filter boolean but start with object

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-09-15 13:30:49 -06:00
Matt Hill
e326c5be4a better RPC error message (#1803) 2022-09-15 13:30:27 -06:00
Aiden McClelland
e199dbc37b prevent cfg str generation from running forever (#1804) 2022-09-15 13:29:46 -06:00
J M
2e8bfcc74d fix: Deep is_parent was wrong and could be escaped (#1801)
* fix: Deep is_parent was wrong and could be escaped

* Update lib.rs
2022-09-15 12:53:56 -06:00
Aiden McClelland
ca53793e32 stop leaking avahi clients (#1802)
* stop leaking avahi clients

* separate avahi to its own binary
2022-09-15 12:53:46 -06:00
Mariusz Kogen
a5f31fbf4e 🎚️ reclaiming that precious RAM memory (#1799)
* 🎚️ reclaiming that precious RAM memory

This adds 60MB of usable RAM

* 🎚️ reclaiming that precious RAM memory - sudo

* Update build/write-image.sh

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2022-09-14 11:24:43 -06:00
Matt Hill
40d47c9f44 fix duplicate patch updates, add scroll button to setup success (#1800)
* fix duplicate patch updates, add scroll button to setup success

* update path

* update patch

* update patch
2022-09-14 11:24:22 -06:00
J M
67743b37bb fix: Bad cert of *.local.local is now fixed to correct. (#1798) 2022-09-12 16:27:46 -06:00
Aiden McClelland
36911d7ed6 use base64 for HTTP headers (#1795)
* use base64 for HTTP headers

* fe for base64 headers

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-09-12 13:20:49 -06:00
Aiden McClelland
5564154da2 update backend dependencies (#1796)
* update backend dependencies

* update compat
2022-09-12 13:20:40 -06:00
Matt Hill
27f9869b38 fix search to return more accurate results (#1792) 2022-09-10 13:31:39 -06:00
Aiden McClelland
f274747af3 fix init to exit on failure (#1788) 2022-09-10 13:31:03 -06:00
Matt Hill
05832b8b4b expect ui marketplace to be undefined (#1787) 2022-09-09 12:24:30 -06:00
Aiden McClelland
b9ce2bf2dc 0.3.2 final cleanup (#1782)
* bump version with stubbed release notes

* increase BE timeout

* 032 release notes

* hide developer menu for now

* remove unused sub/import

* remove reconnect from disks res in setup wiz

* remove quirks

* flatten drives response

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-09-08 16:14:42 -06:00
J M
5442459b2d fix: Js deep dir (#1784)
* fix: Js deep dir

* Delete broken.log

* Delete test.log

* fix: Remove the == parent
2022-09-08 13:57:13 -06:00
Stephen Chavez
f0466aaa56 pinning cargo dep versions for CLI (#1775)
* pinning cargo dep versions for CLI

* add --locked to the workflow

Co-authored-by: Stephen Chavez <stephen@start9labs.com>
2022-09-07 09:25:14 -06:00
Matt Hill
50111e37da remove product key from setup flow (#1750)
* remove product key flow from setup

* feat: backend turned off encryption + new Id + no package id

* implement new encryption scheme in FE

* decode response string

* crypto not working

* update setup wizard closes #1762

* feat: Get the encryption key

* fix: Get to recovery

* remove old code

* fix build

* fix: Install works for now

* fix bug in config for adding new list items

* dismiss action modal on success

* clear button in config

* wip: Currently broken in avahi mdns

* include headers with req/res and refactor patchDB init and usage

* fix: Can now run in the main

* flatline on failed init

* update patch DB

* add last-wifi-region to data model even though not used by FE

* chore: Fix the start.

* wip: Fix wrong order for getting hostname before sql has been
created

* fix edge case where union keys displayed as new when not new

* fix: Can start

* last backup color, markdown links always new tab, fix bug with login

* refactor to remove WithRevision

* resolve circular dep issue

* update submodule

* fix patch-db

* update patchDB

* update patch again

* escape error

* decodeuricomponent

* increase proxy buffer size

* increase proxy buffer size

* fix nginx

Co-authored-by: BluJ <mogulslayer@gmail.com>
Co-authored-by: BluJ <dragondef@gmail.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
2022-09-07 09:25:01 -06:00
Aiden McClelland
76682ebef0 switch to postgresql (#1763)
switch sqlx to postgresql
2022-09-01 10:32:01 -06:00
Matt Hill
705653465a use hostname from patchDB as default server name (#1758)
* replace offline toast with global indicator

* use hostname from patchDB as default server name

* add alert to marketplace delete and reword logout alert
2022-08-29 14:59:09 -06:00
Lucy C
8cd2fac9b9 Fix/empty properties (#1764)
* refactor properties utilities with updated ts-matches and allow for a null file

* update packages
2022-08-29 09:58:49 -06:00
Aiden McClelland
b2d7f4f606 [feat]: resumable downloads (#1746)
* optimize tests

* add test

* resumable downloads
2022-08-24 15:22:49 -06:00
Stephen Chavez
2dd31fa93f Disable bluetooth properly #862 (#1745)
add disable cmd for uart

Co-authored-by: Stephen Chavez <stephen@start9labs.com>
2022-08-22 14:16:11 -06:00
Thomas Moerkerken
df20d4f100 Set pipeline job timeouts and add ca-certificates to test container (#1753)
* set job timeout

* add ca-cert package to test container

* enable unittest

* include apt update

* set yes

* skip logs::test_logs unittest
2022-08-22 13:43:29 -06:00
Matt Hill
3ddeb5fa94 [Fix] websocket connecting and patchDB connection monitoring (#1738)
* refactor how we handle rpc responses and patchdb connection monitoring

* websockets only

* remove unused global error handlers

* chore: clear storage inside auth service

* feat: convert all global toasts to declarative approach (#1754)

* no more reference to serverID

Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: waterplea <alexander@inkin.ru>
2022-08-22 10:53:52 -06:00
Thomas Moerkerken
70baed88f4 add x86 build and run unittests to backend pipeline (#1682)
* add build backend for x86_64

* remove unnecessary build steps

* install backend dependency

* build and run backend unittest

* fix cache key

* buildx is required, qemu is not.

* Add missing steps from Makefile

* fix build_command var is unused

* select more common test env

* update pipeline names and docs

* use variable for node version

* use correct artifact name

* update pipeline references

* skip unittest that needs ca-cert installed

* correct pipeline name

* use nextest pre-built binary; add term color

* fix cache permissions warning

* update documentation
2022-08-18 10:24:17 -06:00
Aiden McClelland
5ba0d594a2 Bugfix/dns (#1741)
* keep openresolv

* fix init
2022-08-18 10:23:04 -06:00
Stephen Chavez
6505c4054f Feat: HttpReader (#1733)
* The start of implementing http_reader, issue #1644

* structure done

* v1 of http_reader

* first http_header poc

* Add AsyncSeek trait

* remove import

* move import

* fix typo

* add error checking for async seek

* fix handling of positions

* rename variables

* Update backend/src/util/http_reader.rs

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* clean up work

* improve error handling, and change if statement.

Co-authored-by: Stephen Chavez <stephen@start9labs.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2022-08-18 10:22:51 -06:00
Matt Hill
e1c30a918b highlight instructions if not viewed (#1731) 2022-08-15 11:03:11 -06:00
Chris Guida
f812e208fa fix cli install (#1720) 2022-08-10 18:03:20 -05:00
Lucy C
9e7526c191 fix build for patch-db client for consistency (#1722) 2022-08-10 16:48:44 -06:00
Aiden McClelland
07194e52cd Update README.md (#1728) 2022-08-10 16:35:53 -06:00
Chris Guida
2f8d825970 [Feat] follow logs (#1714)
* tail logs

* add cli

* add FE

* abstract http to shared

* batch new logs

* file download for logs

* fix modal error when no config

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: BluJ <mogulslayer@gmail.com>
2022-08-03 12:06:25 -06:00
J M
c44eb3a2c3 fix: Add modification to the max_user_watches (#1695)
* fix: Add modification to the max_user_watches

* chore: Move to initialization
2022-08-02 10:44:49 -06:00
Aiden McClelland
8207770369 write image to sparse-aware archive format (#1709) 2022-08-01 15:39:36 -06:00
Chris Guida
365952bbe9 Add build-essential to README.md (#1716)
Update README.md
2022-08-01 15:38:05 -06:00
Matt Hill
5404ebce1c observe response not events in http reqs to effectively use firstValueFrom 2022-07-29 17:43:23 -06:00
Matt Hill
13411f1830 closes #1710 2022-07-29 12:45:21 -06:00
Matt Hill
43090c9873 closes #1706 2022-07-29 12:45:21 -06:00
Matt Hill
34000fb9f0 closes #1711 2022-07-29 12:45:21 -06:00
Matt Hill
c2f9c6a38d display logs in local time while retaining iso format 2022-07-29 12:45:21 -06:00
Alex Inkin
a5c97d4c24 feat: migrate to Angular 14 and RxJS 7 (#1681)
* feat: migrate to Angular 14 and RxJS 7

* chore: update ng-qrcode

* chore: update patch-db

* chore: remove unnecessary generics
2022-07-27 21:31:46 -06:00
Aiden McClelland
9514b97ca0 Update README.md (#1703) 2022-07-27 18:10:52 -06:00
kn0wmad
22e84cc922 Update README.md (#1705)
* Update README.md

Updated release to 0.3.1

* use tag called latest

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2022-07-27 18:10:06 -06:00
Aiden McClelland
13b97296f5 formatting (#1698) 2022-07-27 18:00:48 -06:00
Aiden McClelland
d5f7e15dfb fix typo (#1702) 2022-07-26 17:32:37 -06:00
Aiden McClelland
7bf7b1e71e NO_KEY for CI images (#1700) 2022-07-26 17:32:28 -06:00
Matt Hill
7b17498722 set Matt as default assignee (#1697) 2022-07-26 15:12:31 -06:00
Aiden McClelland
3473633e43 sync blockdev after update (#1694) 2022-07-26 10:34:23 -06:00
Aiden McClelland
f455b8a007 ask for sudo password immediately during make (#1693) 2022-07-26 10:32:32 -06:00
Aiden McClelland
daabba12d3 honor shutdown from diagnostic ui (#1692) 2022-07-25 20:21:15 -06:00
Matt Hill
61864d082f messaging for restart, shutdown, rebuild (#1691)
* messaging for restart, shutdown, rebuild

* fix typo

* better messaging
2022-07-25 15:28:53 -06:00
Aiden McClelland
a7cd1e0ce6 sync data to fs before shutdown (#1690) 2022-07-25 15:23:40 -06:00
Matt Hill
0dd6d3a500 marketplace published at for service (#1689)
* add published timestamp to marketplace package show

* add todo
2022-07-25 12:47:50 -06:00
Aiden McClelland
bdb906bf26 add marketplace_url to backup metadata for service (#1688) 2022-07-25 12:43:40 -06:00
Aiden McClelland
61da050fe8 only validate mounts for inject if eos >=0.3.1.1 (#1686)
only validate mounts for inject if `>=0.3.1.1`
2022-07-25 12:20:24 -06:00
Matt Hill
83fe391796 replace bang with question mark in html (#1683) 2022-07-25 12:05:44 -06:00
Aiden McClelland
37657fa6ad issue notification when individual package restore fails (#1685) 2022-07-25 12:02:41 -06:00
Aiden McClelland
908a945b95 allow falsey rpc response (#1680)
* allow falsey rpc response

* better check for rpc error and remove extra function

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-07-25 10:16:04 -06:00
Aiden McClelland
36c720227f allow server.update to update to current version (#1679) 2022-07-22 14:10:09 -06:00
J M
c22c80d3b0 feat: atomic writing (#1673)
* feat: atomic writing

* Apply suggestions from code review

* clean up temp files on error

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2022-07-22 14:08:49 -06:00
Aiden McClelland
15af827cbc add standby mode (#1671)
* add standby mode

* fix standby mode to go before drive mount
2022-07-22 13:13:49 -06:00
Matt Hill
4a54c7ca87 draft releases notes for 0311 (#1677) 2022-07-22 11:17:18 -06:00
Alex Inkin
7b8a0eadf3 chore: enable strict mode (#1569)
* chore: enable strict mode

* refactor: remove sync data access from PatchDbService

* launchable even when no LAN url

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-07-22 09:51:08 -06:00
Aiden McClelland
9a01a0df8e refactor build process (#1675)
* add nc-broadcast to view initialization.sh logs

* include stderr

* refactor build

* add frontend/config.json as frontend dependency

* fix nc-broadcast

* always run all workflows

* address dependabot alerts

* fix build caching

* remove registries.json

* more efficient build
2022-07-21 15:18:44 -06:00
Aiden McClelland
ea2d77f536 lower log level for docker deser fallback message (#1672) 2022-07-21 12:13:46 -06:00
Aiden McClelland
e29003539b trust local ca (#1670) 2022-07-21 12:11:47 -06:00
Lucy C
97bdb2dd64 run build checks only when relevant FE changes (#1664) 2022-07-20 20:22:26 -06:00
Aiden McClelland
40d446ba32 fix migration, add logging (#1674)
* fix migration, add logging

* change stack overflow to runtime error
2022-07-20 16:00:25 -06:00
J M
5fa743755d feat: Make the rename effect (#1669)
* feat: Make the rename effect

* chore: Change to dst and src

* chore: update the remove file to use dst src
2022-07-20 13:42:54 -06:00
J M
0f027fefb8 chore: Update to have the new version 0.3.1.1 (#1668) 2022-07-19 10:03:14 -06:00
Matt Hill
56acb3f281 Mask chars beyond 16 (#1666)
fixes #1662
2022-07-18 18:31:53 -06:00
Lucy C
5268185604 add readme to system-images folder (#1665) 2022-07-18 15:48:42 -06:00
J M
635c3627c9 feat: Variable args (#1667)
* feat: Variable args

* chore: Make the assert error message not wrong
2022-07-18 15:46:32 -06:00
Chris Guida
009f7ddf84 sdk: don't allow mounts in inject actions (#1653) 2022-07-18 12:26:00 -06:00
J M
4526618c32 fix: Resolve fighting with NM (#1660) 2022-07-18 09:44:33 -06:00
Matt Hill
6dfd46197d handle case where selected union enum is invalid after migration (#1658)
* handle case where selected union enum is invalid after migration

* revert necessary ternary and fix types

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-07-17 13:42:36 -06:00
Aiden McClelland
778471d3cc Update product.yaml (#1638) 2022-07-13 11:15:45 -06:00
Aiden McClelland
bbcf2990f6 fix build (#1639) 2022-07-13 11:14:50 -06:00
Aiden McClelland
ac30ab223b return correct error on failed os download (#1636) 2022-07-12 12:48:18 -06:00
J M
50e7b479b5 Fix/receipts health (#1616)
* release lock on update progress (#1614)

* chore: remove the receipt

* chore: Remove the receipt

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2022-07-12 12:48:01 -06:00
Aiden McClelland
1367428499 update backend dependencies (#1637) 2022-07-12 12:18:12 -06:00
Mariusz Kogen
e5de91cbe5 🐋 docker stats fix (#1630)
* entry obsolete...

* 🐋 docker stats fix
2022-07-11 18:18:22 -06:00
J M
244260e34a chore: updating lock just from make clean; make (#1631) 2022-07-08 16:07:34 -06:00
Lucy C
575ed06225 fix display of comma between breakages (#1628) 2022-07-07 17:08:06 -06:00
Mariusz Kogen
b6fdc57888 Tor repository fix for arm64 (#1623)
When doing `sudo apt update` you get this:
`N: Skipping acquire of configured file 'main/binary-armhf/Packages' as repository 'https://deb.torproject.org/torproject.org bullseye InRelease' doesn't support architecture 'armhf'`
2022-07-07 15:34:54 -06:00
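The usual remedy for this warning, and presumably what the commit does (the exact change is an assumption), is to pin the repository entry to the native architecture so apt stops requesting armhf indexes:

```shell
# Hypothetical sources.list entry pinning the architecture; the real file
# name and suite are assumptions based on the warning quoted above.
entry='deb [arch=arm64] https://deb.torproject.org/torproject.org bullseye main'
echo "$entry" > tor.list.sample   # on a device: /etc/apt/sources.list.d/tor.list
cat tor.list.sample
```

With `[arch=arm64]` set, `apt update` only fetches package indexes for that architecture and the "Skipping acquire" notice goes away.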
Aiden McClelland
758d7d89c2 keep status if package has no config (#1626) 2022-07-07 15:34:06 -06:00
Aiden McClelland
2db31b54e8 fix icons for sideloaded packages (#1624) 2022-07-07 15:33:53 -06:00
Thomas Moerkerken
99d16a37d5 Fix/adjust pipeline (#1619)
* use the correct frontend make target

* allow interactive tty if available

* fix syntax on pipeline trigger paths
2022-07-06 17:10:35 -06:00
Lucy C
449968bc4e Fix/misc UI (#1622)
* show available marketplace updates in menu

* complete feature

* delete unused class

* update tsmatches to remove console log

* fix merge conflict

* change config header font size

* fix new options emission for config elements

* delete unnecessary import

* add custom modal for service marketplace conflict action

* cleanup

* remove unnecessary imports

* pr cleanup of unused imports and classes

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-07-06 17:10:22 -06:00
Aiden McClelland
b0a55593c1 handle current-dependents properly during update (#1621) 2022-07-06 10:34:03 -06:00
Lucy C
17ef97c375 remove beta flag actions from UI config build (#1617)
* remove beta flag

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2022-07-05 13:49:53 -06:00
Thomas Moerkerken
36e0ba0f06 Add basic GitHub workflows builds (#1578)
* add easy target for backend build

* add reusable backend build workflow

* add reusable frontend build workflow

* add full build workflow

* add some comments
2022-07-05 13:46:06 -06:00
gStart9
b365a60c00 Default to https:// urls for repositories, remove apt-transport-https (#1610)
As of apt 1.5 (released 2017), the package apt-transport-https is no longer required because https:// is supported out of the box.
Reference: https://packages.debian.org/bullseye/apt-transport-https "This is a dummy transitional package - https support has been moved into the apt package in 1.5. It can be safely removed." Apt is currently at 2.2.4.

Use a sed one-liner to convert all repos in /etc/apt/sources.list and /etc/apt/sources.list.d/*.list that are http:// to https:// (https:// is available for all http:// URLs currently referenced in EmbassyOS).
2022-07-05 13:45:10 -06:00
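A minimal sketch of such a sed one-liner, shown here on a sample file since the commit itself targets /etc/apt/sources.list and /etc/apt/sources.list.d/*.list (the exact expression used in the commit may differ):

```shell
# Rewrite http:// repository URLs to https:// in an apt sources file.
# Demonstrated on a sample file; the commit runs the equivalent against
# the real sources.list files during image build.
printf 'deb http://deb.debian.org/debian bullseye main\n' > sources.list.sample
sed -i 's|^deb http://|deb https://|' sources.list.sample
cat sources.list.sample   # deb https://deb.debian.org/debian bullseye main
```

Using `|` as the s-command delimiter avoids having to escape the slashes in the URLs.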
Lucy C
88afb756f5 show available marketplace updates in menu (#1613)
* show service updates in menu
2022-07-05 13:02:32 -06:00
Aiden McClelland
e2d58c2959 release lock on update progress (#1614) 2022-07-05 12:01:54 -06:00
Lucy C
3cfc333512 fix conditional display state (#1612)
* fix conditional display state

* fix footer

* fix empty case

* remove select all from backup restore

* fix styling and add warning message to service restore

* update copy
2022-07-04 15:06:23 -06:00
J M
89da50dd37 fix: Closed file (#1608) 2022-07-04 14:17:43 -06:00
Matt Hill
9319314672 display bottom item in backup list and refactor for cleanliness (#1609)
* display bottom item in backup list and refactor for cleanliness

* fix spelling mistake

* display initial toggle to deselect all, as all are selected by default

* add select/deselect all to backup restore and handle backup case when no services installed

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-07-04 14:16:18 -06:00
Lucy C
6d805ae941 Fix/UI misc (#1606)
* stop expansion when description icon clicked

* add test for ensuring string sanitization

* rename log out to terminate in sessions component and remove sanitization bypass as unneeded

* remove unecessary instances of safe string
2022-07-01 18:25:45 -06:00
J M
8ba932aa36 feat: fetch effect (#1605)
* feat: fetch effect

* fix: Make not available in sandbox

* chore: Update to use text(), and to use headers

* chore: use the postman echo for testing

* chore: add json

* chore: Testing the json

* chore: Make the json lazy
2022-07-01 17:05:01 -06:00
Aiden McClelland
b580f549a6 actually purge old current-dependents 2022-07-01 16:06:01 -06:00
Lucy C
cb9c01d94b strip html from colors from logs (#1604) 2022-07-01 15:41:09 -06:00
Aiden McClelland
f9b0f6ae35 don't crash service if io-format is set for main (#1599) 2022-07-01 09:29:11 -06:00
Lucy C
1b1ff05c81 fix html parsing in logs (#1598) 2022-06-30 16:41:55 -06:00
Matt Hill
7b465ce10b nest new entries and message updates better (#1595)
* nest new entries and message updates better

* pass has new upward

* fix bulb display to make everyone happy
2022-06-30 15:32:54 -06:00
J M
ee66395dfe chore: commit the snapshots (#1592) 2022-06-30 12:39:51 -06:00
J M
31af6eeb76 fix: Stop the buffer from being dropped prematurely (#1591) 2022-06-30 12:14:57 -06:00
Lucy C
e9a2d81bbe add select/deselect all to backups and enum lists (#1590) 2022-06-30 12:02:16 -06:00
Aiden McClelland
7d7f03da4f filter package ids when backing up (#1589) 2022-06-30 11:23:01 -06:00
Lucy C
8966b62ec7 update patchdb for array patch fix (#1588) 2022-06-29 17:51:20 -06:00
Aiden McClelland
ec8d9b0da8 switch to utc 2022-06-29 15:43:01 -06:00
Matt Hill
38ba1251ef turn chevron red in config if error (#1586) 2022-06-29 14:55:39 -06:00
Matt Hill
005c46cb06 preload redacted and visibility hidden (#1584)
* preload redacted and visibility hidden

* remove comment

* update patchdb

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-06-29 09:13:52 -06:00
Aiden McClelland
4b0ff07d70 Bugfix/backup lock order (#1583)
* different fix

* Revert "fix backup lock ordering (#1582)"

This reverts commit f1e065a448.
2022-06-29 09:03:10 -06:00
Aiden McClelland
f1e065a448 fix backup lock ordering (#1582) 2022-06-28 15:10:26 -06:00
J M
c82c6eaf34 fix: Properties had a null description (#1581)
* fix: Properties had a null description

* Update frontend/projects/ui/src/app/util/properties.util.ts
2022-06-28 13:57:51 -06:00
Matt Hill
b8f3759739 update welcome notes for 031 (#1580) 2022-06-28 13:41:05 -06:00
kn0wmad
70aba1605c Feat/use modern tor (#1575)
* Add guardian project repo and install latest stable tor

* Apt appendage
2022-06-28 12:28:08 -06:00
Matt Hill
2c5aa84fe7 selective backups and better drive selection interface (#1576)
* selective backups and better drive selection interface

* fix disabled checkbox and backup drives menu styling

* feat: package-ids

* only show services that are backing up on backup page

* refactor for performance and cleanliness

Co-authored-by: Matt Hill <matthill@Matt-M1.start9.dev>
Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
Co-authored-by: J M <mogulslayer@gmail.com>
2022-06-28 12:14:26 -06:00
Aiden McClelland
753f395b8d add avahi conditional compilation flags to dns (#1579) 2022-06-28 10:42:54 -06:00
Lucy C
f22f11eb58 Fix/sideload icon type (#1577)
* add content type to icon dataURL

* better handling of blob reading; remove verifying loader and reorganize html

* clean up PR feedback and create validation fn instead of boolean

* group upload state into one type

* better organize validation

* add server id to eos check for updates req

* fix patchdb to latest

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
2022-06-27 15:25:42 -06:00
Lucy C
123f71cb86 Fix/mask generic inputs (#1570)
* add masking to generic input component

* clear inputs after submission

* adjust convenience FE make steps

* cleaner masking

* remove mask pipe from module

* switch to redacted font
2022-06-27 13:51:35 -06:00
Aiden McClelland
22af45fb6e add dns server to embassy-os (#1572)
* add dns server to embassy-os

* fix initialization

* multiple ip addresses
2022-06-27 10:53:06 -06:00
Matt Hill
0849df524a fix connection failure display monitoring and other style changes (#1573)
* fix connection failure display monitoring and other style changes

* display updates more clearly in marketplace

* remove scrolling from release notes and long description

* remove unnecessary bangs

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
Co-authored-by: Matt Hill <matthill@Matt-M1.start9.dev>
2022-06-27 10:44:12 -06:00
Lucy C
31952afe1e adjust service marketplace button for installation source relevance (#1571)
* adjust service marketplace button for installation source relevance

* cleanup

* show marketplace name instead of url; cleanup from PR feedback

* fix spacing

* further cleanup
2022-06-27 10:09:27 -06:00
Matt Hill
83755e93dc kill all sessions and remove ripple effect (#1567)
* button to kill all sessions, session sorting, remove ripple effect from buttons

* pr cleanup

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-06-23 16:59:13 -06:00
J M
0fbcc11f99 fix: Make it so we only need the password on the backup (#1566)
* fix: Make it so we only need the password on the backup

* chore: Remove the rewrite of password
2022-06-23 10:25:47 -06:00
Matt Hill
d431fac7de fix bugs with config and clean up dev options (#1558)
* fix bugs with config and clean up dev options

* don't show down arrow in logs prematurely

* change config error border to match error text red color

* change restart button color

* fix error when sideloading and update copy

* adds back in param cloning as this bug crept up again

* make restarting text match button color

* fix version comparison for updates category

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-06-22 18:26:10 -06:00
Alex Inkin
53ca9b0420 refactor(patch-db): use PatchDB class declaratively (#1562)
* refactor(patch-db): use PatchDB class declaratively

* chore: remove initial source before init

* chore: show spinner

* fix: show Connecting to Embassy spinner until first connection

* fix: switching marketplaces

* allow for subscription to end with take when installing a package

* update patchdb

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-06-22 16:09:14 -06:00
BitcoinMechanic
a8749f574a fixed sentence that didn't make sense (#1565) 2022-06-22 09:57:10 -07:00
J M
a9d839fd8f fix: Missing a feature flag cfg (#1563) 2022-06-21 10:34:03 -06:00
J M
477d37f87d feat: Make sdk (#1564) 2022-06-21 10:25:54 -06:00
Matt Hill
d2195411a6 Reset password through setup wizard (#1490)
* closes FE portion of  #1470

* remove accidental commit of local script

* add reset password option (#1560)

* fix error code for incorrect password and clarify codes with comments

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2022-06-20 16:48:32 -06:00
J M
1f5e6dbff6 chore: Add tracing for debugging the js procedure slowness (#1552)
* chore: Add tracing for debugging the js procedure slowness

* chore: Make the display to reduce vertical clutter
2022-06-20 16:22:19 -06:00
Lucy C
09c0448186 update should send version not version spec (#1559) 2022-06-20 14:18:20 -06:00
Lucy C
b318bf64f4 fix backend builds for safe git config (#1549) 2022-06-19 14:25:39 -06:00
Lucy Cifferello
af1d2c1603 update welcome message 2022-06-19 13:46:09 -06:00
Lucy Cifferello
1c11d3d08f update patch db 2022-06-19 13:46:09 -06:00
Lucy C
a4a8f33df0 Feature/restart service (#1554)
* add restart button to service show page and restart rpc api

* Feature/restart rpc (#1555)

* add restart rpc and status

* wire up rpc

* add restarting bool

Co-authored-by: Aiden McClelland <me@drbonez.dev>

* check if service is restarting

* filter package when restarting to avoid glitch

Co-authored-by: Aiden McClelland <me@drbonez.dev>
2022-06-19 13:46:09 -06:00
Lucy Cifferello
889cf03c1c fix circular images in instructions markdown modal 2022-06-19 13:46:09 -06:00
Matt Hill
0ac5b34f2d Remove app wiz and dry calls (#1541)
* no more app wiz or dry calls

* change spinner type

* better display for update available

* reintroduce dep breakages for update/downgrade and style alerts everywhere

* only show install alert on first install

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
Co-authored-by: Matt Hill <matthill@Matt-M1.start9.dev>
2022-06-19 13:46:09 -06:00
Lucy Cifferello
37304a9d92 chore: cleanup and small misc fixes
display success alert if on latest EOS after check for update

fix bug with loader dismiss after alert present

fix restart button on update complete alert and fix mocks to account for this state

fix make clean and adjust default registry names
2022-06-19 13:46:09 -06:00
Matt Hill
4ad9886517 refactor app wizards completely (#1537)
* refactor app wizards completely

* display new and new options in config

Co-authored-by: Matt Hill <matthill@Matt-M1.start9.dev>
2022-06-19 13:46:09 -06:00
Matt Hill
8e9d2b5314 chore: cleanup - show spinner on service list when transitioning
config add new list items to end and auto scroll

remove js engine artifacts

fix view button in notification toast
2022-06-19 13:46:09 -06:00
Lucy C
7916a2352f Feature/sideload (#1520)
* base styling and action placeholders for package sideload

* apparently didnt add new folder

* wip

* parse manifest and icon from s9pk to upload

* wip handle s9pk upload

* adjust types, finalize actions, cleanup

* clean up and fix data clearing and response

* include rest rpc in proxy conf sample

* address feedback to use shorthand falsy coercion

* update copy and invalid package file ux

* do not wait package upload, instead show install progress

* fix proxy for rest rpc

rename sideload package page titles
2022-06-19 13:46:09 -06:00
Matt Hill
2b92d0f119 Swtich certain inputs and displays to courier for readability (#1521)
swtich certain inputs and displays to courier for readability

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
2022-06-19 13:46:09 -06:00
Matt Hill
961a9342fa display QR code for interfaces (#1507)
* display QR code for interfaces

* add play-outline to preloader

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
2022-06-19 13:46:09 -06:00
Lucy C
3cde39c7ed Feature/copy logs (#1491)
* make text selectable on mobile

* make logs copyable and adjust copy format

* fix linting

* fix linting further

* linting

* add formatting to copied logs

* fix copy abstraction and add formatting for server log copy
2022-06-19 13:46:09 -06:00
Matt Hill
09922c8dfa Rework install progress types and pipes for clarity (#1513)
* rework install progress types and pipes for clarity

* rework marketplace show display

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
2022-06-19 13:46:09 -06:00
waterplea
0390954a85 feat: enable strictNullChecks
feat: enable `noImplicitAny`

chore: remove sync data access

fix loading package data for affected dependencies

chore: properly get alt marketplace data

update patchdb client to allow for emit on undefined values
2022-06-19 13:46:09 -06:00
J M
948fb795f2 feat: uid/gid/mode added to metadata (#1551) 2022-06-17 12:16:04 -06:00
J M
452c8ea2d9 Feat/js metadata (#1548)
* feat: metadata effect

* feat: Metadata for effects

* chore: Add in the new types
2022-06-16 15:58:48 -06:00
Aiden McClelland
9c41090a7a add textarea to ValueSpecString (#1534) 2022-06-16 13:14:18 -06:00
Aiden McClelland
59eee33767 fix dependency/dependent id issue (#1546) 2022-06-16 13:14:05 -06:00
Aiden McClelland
cc5e60ed90 fix incorrect error message for deserialization in ValueSpecString (#1547) 2022-06-16 13:13:50 -06:00
J M
27bc493884 Feat: Make the js check for health (#1543)
* Feat: Make the js check for health

* chore: Add in the migration types

* feat: type up the migration
2022-06-16 11:58:55 -06:00
J M
75a2b2d2ab chore: Update the lite types to include the union and enum (#1542) 2022-06-15 19:31:47 -06:00
J M
0b7d8b4db0 fix: found a unsaturaded args fix 2022-06-15 14:40:42 -06:00
J M
d05cd7de0d chore: Update types to match embassyd (#1539)
* chore: Update types to match embassyd

* chore: Undo the optional
2022-06-15 14:39:20 -06:00
Aiden McClelland
b0068a333b disable unnecessary services 2022-06-14 12:43:27 -06:00
Aiden McClelland
d947c2db13 fixes #1169 (#1533)
* fixes #1169

* consolidate trim

Co-authored-by: J M <2364004+Blu-J@users.noreply.github.com>

Co-authored-by: J M <2364004+Blu-J@users.noreply.github.com>
2022-06-14 11:47:04 -06:00
Aiden McClelland
90e09c8c25 add "error_for_status" to static file downloads 2022-06-14 11:42:31 -06:00
J M
dbf59a7853 fix: restart/ uninstall sometimes didn't work (#1527)
* fix: restart/ uninstall sometimes didn't work

* Fix: Match the original lock types
2022-06-13 14:18:41 -06:00
Aiden McClelland
4d89e3beba fixes a bug where nginx will crash if eos goes into diagnostic mode a… (#1506)
fixes a bug where nginx will crash if eos goes into diagnostic mode after service init has completed
2022-06-13 12:43:12 -06:00
J M
5a88f41718 Feat/js known errors (#1514)
* feat: known errors for js

* chore: add expected exports

* Update js_scripts.rs

* chore: Use agreed upon shape

* chore: add updates to d.ts

* feat: error case

* chore: Add expectedExports as a NameSpace`

* chore: add more documentation to the types.d.ts
2022-06-10 13:04:52 -06:00
Aiden McClelland
435956a272 fix "missing proxy" error in embassy-cli (#1516)
* fix "missing proxy" error in embassy-cli

* fix: Add test and other fix for SetResult

Co-authored-by: J M <mogulslayer@gmail.com>
2022-06-10 12:58:58 -06:00
Aiden McClelland
7854885465 allow interactive TTY if available 2022-06-08 09:29:24 -06:00
Keagan McClelland
901ea6203e fixes serialization of regex pattern + description 2022-06-07 17:32:47 -06:00
J M
9217d00528 Fix/memory leak docker (#1505)
* fix: potential fix for the docker leaking the errors and such

* chore: Make sure that the buffer during reading the output will not exceed 10mb ish

* Chore: Add testing

* fix: Docker buffer reading to lines now works

* chore: fixing the broken responses
2022-06-07 12:58:12 -06:00
J M
f234f894af fix: potential fix for the docker leaking the errors and such (#1496)
* fix: potential fix for the docker leaking the errors and such

* chore: Make sure that the buffer during reading the output will not exceed 10mb ish

* Chore: Add testing
2022-06-07 11:11:43 -06:00
Aiden McClelland
4286edd78f allow embassy-cli not as root (#1501)
* allow embassy-cli not as root
* clean up merge
2022-06-07 11:11:01 -06:00
Keagan McClelland
334437f677 generate unique ca names based off of server id 2022-06-06 18:56:27 -06:00
Keagan McClelland
183c5cda14 refactors error handling for less redundancy 2022-06-06 18:43:32 -06:00
Keagan McClelland
45265453cb control experiment for correct configs 2022-06-06 18:43:32 -06:00
Keagan McClelland
80a06272cc fixes regex black hole 2022-06-06 18:23:28 -06:00
J M
473213d14b chore: fix the master (#1495)
* chore: fix the master

* chore: commit all the things swc
2022-06-06 15:02:44 -06:00
Matt Hill
d53e295569 UI cosmetic improvements (#1486)
* resize alerts and modals

* fix log color

* closes #1404

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
2022-06-06 11:31:45 -06:00
Thomas Moerkerken
18e2c610bc add quotes to handles spaces in working dir 2022-06-06 10:10:51 -06:00
J M
e0c68c1911 Fix/patch db unwrap remove (#1481)
* fix: Change the git to always give a maybe, then create the error in the failure cases

* fix: No wifi last

* chore: Revert to older api

* fix: build for sdk

* fix: build for sdk

* chore: update patch db

* chore: use the master patch db
2022-06-06 09:52:19 -06:00
Mariusz Kogen
34729c4509 Enable Control Groups for Docker containers (#1468)
Enabling Control Groups give Docker containers the ability to track and expose missing memory metrics. Try `docker stats`
2022-06-06 08:32:33 -06:00
Matt Hill
ca778b327b Clean up config (#1484)
* better formatting for union list

* cleaner config

Co-authored-by: Matt Hill <matthill@Matt-M1.local>
2022-06-03 12:35:56 -06:00
Benjamin B
bde6169746 Update contribution and frontend readme (#1467)
* docs: small updates to contributing and frontend readme

* chore: change build:deps script to use npm ci
2022-06-03 12:26:45 -06:00
Lucy C
3dfbf2fffd UI version updates and welcome message for 0.3.1 (#1479) 2022-06-03 12:23:32 -06:00
Benjamin B
34068ef633 Link to tor address on LAN setup page (#1277) (#1466)
* style: format lan page component

* Link to tor address on LAN setup page (#1277)
2022-06-01 15:43:57 -06:00
Benjamin B
e11729013f Disable view in marketplace button when side-loaded (#1471)
Disble view in marketplace button when side-loaded
2022-06-01 15:20:45 -06:00
Thomas Moerkerken
cceef054ac remove interactive TTY requirement from cmds 2022-06-01 14:37:12 -06:00
J M
b8751e7add Chore/version 0 3 1 0 (#1475)
* feat: move over to workspaces

* chore: Move to libs

* chore:fix(build): Compat

* chore: fixing pr
2022-06-01 10:22:00 -06:00
Keagan McClelland
37344f99a7 cleanup after rebase 2022-05-27 13:20:33 -06:00
Keagan McClelland
61bcd8720d warn if script is present but manifest does not require one 2022-05-27 13:20:33 -06:00
Keagan McClelland
6801ff996e require script is present during pack step iff any pkg procs are type script 2022-05-27 13:20:33 -06:00
J M
c8fc9a98bf fix: Change the source + add input 2022-05-27 13:20:33 -06:00
J M
52de5426ad Feat: js action
wip: Getting async js

feat: Have execute get action config

feat: Read + Write

chore: Add typing for globals

chore: Change the default path, include error on missing function, and add json File Read Write

chore: Change the default path, include error on missing function, and add json File Read Write

wip: Fix the unit test

wip: Fix the unit test

feat: module loading
2022-05-27 13:20:33 -06:00
Benjamin B
e7d0a81bfe Fix links in CONTRIBUTING.md, update ToC 2022-05-27 11:45:57 -06:00
Alex Inkin
4f3223d3ad refactor: isolate network toast and login redirect to separate services (#1412)
* refactor: isolate network toast and login redirect to separate services

* chore: remove accidentally committed sketch of a service

* chore: tidying things up

* feat: add `GlobalModule` encapsulating all global subscription services

* remove angular build cache when building deps

* chore: fix more issues found while testing

* chore: fix issues reported by testing

* chore: fix template error

* chore: fix server-info

* chore: fix server-info

* fix: switch to Observable to fix race conditions

* fix embassy name display on load

* update patchdb

* clean up patch data watch

Co-authored-by: Lucy Cifferello <12953208+elvece@users.noreply.github.com>
2022-05-26 16:56:47 -06:00
J M
4829637b46 fix: Dependency vs dependents (#1462)
* fix: Dependency vs dependents

* chore: Remove the debugging
2022-05-26 15:39:46 -06:00
J M
7f2494a26b Fix/making js work (#1456)
* Feat: js action

wip: Getting async js

feat: Have execute get action config

feat: Read + Write

chore: Add typing for globals

chore: Change the default path, include error on missing function, and add json File Read Write

chore: Change the default path, include error on missing function, and add json File Read Write

wip: Fix the unit test

wip: Fix the unit test

feat: module loading

* fix: Change the source + add input

* fix: Change the source + add input

wip: Fix missing js files during running

fix: Change the source + add input

wip: Fix missing js files during running

* fix: other paths

* feat: Build the arm js snapshot

* fix: test with more

* chore: Make the is_subset a result
2022-05-25 12:19:40 -06:00
J M
f7b5fb55d7 Feat/js action (#1437)
* Feat: js action

wip: Getting async js

feat: Have execute get action config

feat: Read + Write

chore: Add typing for globals

chore: Change the default path, include error on missing function, and add json File Read Write

chore: Change the default path, include error on missing function, and add json File Read Write

wip: Fix the unit test

wip: Fix the unit test

feat: module loading

* fix: Change the source + add input

* fix: single thread runtime

* fix: Smaller fixes

* Apply suggestions from code review

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>

* fix: pr

Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2022-05-19 18:02:50 -06:00
Lucy C
2b6e54da1e Proxy local frontend to remote backend (#1452)
* add server proxy configurations

* change address to host due to compliation warning

* adjust config sample to more accurately reflect production version
2022-05-19 15:58:32 -06:00
Jonathan Zernik
1023916390 Add nginx config for proxy redirect (#1421)
* Add nginx config for proxy redirect protocol prefix

* Update proxy_redirect config to use scheme variable

* Only include proxy redirect directive when ssl is true
2022-05-19 14:56:53 -06:00
Keagan McClelland
6a0e9d5c0a refactor packer to async 2022-05-19 11:08:22 -06:00
Aiden McClelland
7b4d657a2d actually address warning instead of muting it like a sociopath 2022-05-16 16:46:59 -06:00
Keagan McClelland
b7e86bf556 cleanse warnings 2022-05-16 16:46:59 -06:00
Aiden McClelland
fa777bbd63 fix remaining rename 2022-05-16 16:23:46 -06:00
Aiden McClelland
2e7b2c15bc rename ActionImplementation to PackageProcedure 2022-05-16 16:23:46 -06:00
Keagan McClelland
9bc0fc8f05 formatting 2022-05-16 16:11:45 -06:00
Keagan McClelland
b354d30fe9 Update backend/src/s9pk/header.rs
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
2022-05-16 16:11:45 -06:00
Keagan McClelland
a253e95b5a make scripts optional 2022-05-16 16:11:45 -06:00
Keagan McClelland
7e4c0d660a fix paths 2022-05-16 16:11:45 -06:00
Keagan McClelland
6a8bf2b074 s/open/create 2022-05-16 16:11:45 -06:00
Keagan McClelland
16729ebffc change script path 2022-05-16 16:11:45 -06:00
Keagan McClelland
f44d432b6a no tar for scripts 2022-05-16 16:11:45 -06:00
Keagan McClelland
93ee418f65 redundant imports 2022-05-16 16:11:45 -06:00
Keagan McClelland
cd6bda2113 optional script unpacking 2022-05-16 16:11:45 -06:00
Keagan McClelland
4a007cea78 cleanup assets and scripts on uninstall 2022-05-16 16:11:45 -06:00
Keagan McClelland
ab532b4432 fix script name 2022-05-16 16:11:45 -06:00
Keagan McClelland
ee98b91a29 remove scripts volume 2022-05-16 16:11:45 -06:00
Keagan McClelland
0294143b22 create script dirs on install 2022-05-16 16:11:45 -06:00
Keagan McClelland
2890798342 pack scripts into s9pk 2022-05-16 16:11:45 -06:00
Keagan McClelland
2d44852ec4 iterators can be played now 2022-05-16 11:24:14 -06:00
Keagan McClelland
b9de5755d1 fix error with circle of fifths type 2022-05-16 11:24:14 -06:00
Mariusz Kogen
84463673e2 ☯️ For the peace of mind ☯️ (#1444)
Simplifying and clarifying for first time builders
2022-05-16 11:09:11 -06:00
Dread
56efe9811d Update README.md to include yq (#1385)
Update README.md to include yq install instructions for Linux
2022-05-11 16:23:10 -06:00
Keagan McClelland
a6234e4507 adds product key to error message in setup flow when there is mismatch 2022-05-11 16:19:24 -06:00
Keagan McClelland
e41b2f6ca9 make nicer update sound 2022-05-11 16:15:56 -06:00
Lucy C
8cf000198f Fix/id params (#1414)
* watch config.json for changes when just building frontend

* fix version for data consistency

* clone param ids so not recursively stringified; add global type for stringified instances

* ensure only most recent data source grabbed to fix issue with service auto update on marketplace switch

* use take instead of shallow cloning data
2022-05-10 12:20:32 -06:00
J M
cc6cbbfb07 chore: Convert from ajv to ts-matches (#1415) 2022-05-10 11:00:56 -06:00
Mariusz Kogen
10d7a3d585 Switching SSH keys to start9 user (#1321)
* Update ssh.rs for start9 user

* .ssh directory for uid 1000 user

* Update init.rs for start9 user

* “His name is Robert Paulson”

* typo

* just cleaning up ...
2022-05-09 15:16:24 -06:00
J M
864555bcf0 Feat bulk locking (#1422)
* Feat: Multi-lock capabilities add to config

* wip: RPC.rs fixes, new combinatoric

* wip: changes

* chore: More things that are bulk

* fix: Saving

* chore: Remove a dyn object

* chore: Add tests + remove unused

* Fix/feat  bulk locking (#1427)

* fix: health check

* fix: start/stop service

* fix: install/uninstall services

* chore: Fix the notifications

* fix: Version

* fix: Version as serde

* chore: Update to latest patch db

* chore: Change the htLock to something that makes more sense

* chore: Fix the rest of the ht

* "chore: More ht_lock":
2022-05-09 14:53:39 -06:00
605 changed files with 44859 additions and 19940 deletions


@@ -1,21 +1,21 @@
 name: 🐛 Bug Report
-description: Create a report to help us improve EmbassyOS
+description: Create a report to help us improve embassyOS
 title: '[bug]: '
 labels: [Bug, Needs Triage]
 assignees:
-  - dr-bonez
+  - MattDHill
 body:
   - type: checkboxes
     attributes:
       label: Prerequisites
       description: Please confirm you have completed the following.
       options:
-        - label: I have searched for [existing issues](https://github.com/start9labs/embassy-os/issues) that already report this problem, without success.
+        - label: I have searched for [existing issues](https://github.com/start9labs/embassy-os/issues) that already report this problem.
           required: true
   - type: input
     attributes:
-      label: EmbassyOS Version
-      description: What version of EmbassyOS are you running?
+      label: embassyOS Version
+      description: What version of embassyOS are you running?
       placeholder: e.g. 0.3.0
     validations:
       required: true


@@ -1,16 +1,16 @@
 name: 💡 Feature Request
-description: Suggest an idea for EmbassyOS
+description: Suggest an idea for embassyOS
 title: '[feat]: '
 labels: [Enhancement]
 assignees:
-  - dr-bonez
+  - MattDHill
 body:
   - type: checkboxes
     attributes:
       label: Prerequisites
       description: Please confirm you have completed the following.
       options:
-        - label: I have searched for [existing issues](https://github.com/start9labs/embassy-os/issues) that already suggest this feature, without success.
+        - label: I have searched for [existing issues](https://github.com/start9labs/embassy-os/issues) that already suggest this feature.
           required: true
   - type: textarea
     attributes:
@@ -27,7 +27,7 @@ body:
   - type: textarea
     attributes:
       label: Describe Preferred Solution
-      description: How you want this feature added to EmbassyOS?
+      description: How you want this feature added to embassyOS?
   - type: textarea
     attributes:
       label: Describe Alternatives

.github/workflows/README.md vendored Normal file

@@ -0,0 +1,29 @@
# This folder contains GitHub Actions workflows for building the project
## backend
Runs: manually (on: workflow_dispatch) or called by product-pipeline (on: workflow_call)
This workflow uses the action docker/setup-buildx-action@v1 to prepare the environment for aarch64 cross-compilation using Docker Buildx.
When execution of aarch64 containers is required, the action docker/setup-qemu-action@v1 is added as well.
A matrix strategy is used to build for both the x86_64 and aarch64 platforms in parallel.
### Running unit tests
Unit tests are run using [cargo-nextest](https://nexte.st/). The sources are first (cross-)compiled and archived; the archive is then run on the matching platform.
## frontend
Runs: manually (on: workflow_dispatch) or called by product-pipeline (on: workflow_call)
This workflow builds the frontends.
## product
Runs: when a pull request targets the master or next branch, or when a change is pushed to either of those branches
This workflow builds everything, re-using the backend and frontend workflows.
The download and extraction order of artifacts is relevant to `make`, as it compares file timestamps to decide which targets need to be rebuilt.
Result: eos.img
## a note on uploading artifacts
Artifacts are used to share data between jobs. File permissions are not maintained during artifact upload. Where file permissions are relevant, a tar-based workaround is used; see [here](https://github.com/actions/upload-artifact#maintaining-file-permissions-and-case-sensitive-files).
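As a minimal sketch of that workaround (the file names here are illustrative, not from the repository), packing files into a tar archive before upload preserves the execute bit that a plain artifact upload would drop:

```shell
# Create a directory containing an executable script.
mkdir -p demo
printf '#!/bin/sh\necho ok\n' > demo/run.sh
chmod +x demo/run.sh

# Pack with permissions recorded in the archive, then restore
# into a fresh directory (stand-in for upload/download of the tar).
tar -cf demo.tar demo
mkdir -p restored
tar -xf demo.tar -C restored

# The execute bit survives the round trip.
test -x restored/demo/run.sh && echo "executable preserved"
# prints: executable preserved
```

In the workflows below, this corresponds to the 'Tar files to preserve file permissions' steps before actions/upload-artifact and the `tar -mxvf` extraction after actions/download-artifact.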

.github/workflows/backend.yaml vendored Normal file

@@ -0,0 +1,240 @@
name: Backend
on:
workflow_call:
workflow_dispatch:
env:
RUST_VERSION: "1.62.1"
jobs:
build_libs:
name: Build libs
strategy:
fail-fast: false
matrix:
target: [x86_64, aarch64]
include:
- target: x86_64
snapshot_command: ./build-v8-snapshot.sh
artifact_name: js_snapshot
artifact_path: libs/js_engine/src/artifacts/JS_SNAPSHOT.bin
- target: aarch64
snapshot_command: ./build-arm-v8-snapshot.sh
artifact_name: arm_js_snapshot
artifact_path: libs/js_engine/src/artifacts/ARM_JS_SNAPSHOT.bin
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
if: ${{ matrix.target == 'aarch64' }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
if: ${{ matrix.target == 'aarch64' }}
- uses: actions-rs/toolchain@v1
with:
toolchain: ${{ env.RUST_VERSION }}
override: true
if: ${{ matrix.target == 'x86_64' }}
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
libs/target/
key: ${{ runner.os }}-cargo-libs-${{ matrix.target }}-${{ hashFiles('libs/Cargo.lock') }}
- name: Build v8 snapshot
run: ${{ matrix.snapshot_command }}
working-directory: libs
- uses: actions/upload-artifact@v3
with:
name: ${{ matrix.artifact_name }}
path: ${{ matrix.artifact_path }}
build_backend:
name: Build backend
strategy:
fail-fast: false
matrix:
target: [x86_64, aarch64]
include:
- target: x86_64
snapshot_download: js_snapshot
- target: aarch64
snapshot_download: arm_js_snapshot
runs-on: ubuntu-latest
timeout-minutes: 120
needs: build_libs
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Download ${{ matrix.snapshot_download }} artifact
uses: actions/download-artifact@v3
with:
name: ${{ matrix.snapshot_download }}
path: libs/js_engine/src/artifacts/
- uses: actions-rs/toolchain@v1
with:
toolchain: ${{ env.RUST_VERSION }}
override: true
if: ${{ matrix.target == 'x86_64' }}
- uses: actions/cache@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
backend/target/
key: ${{ runner.os }}-cargo-backend-${{ matrix.target }}-${{ hashFiles('backend/Cargo.lock') }}
- name: Install dependencies
run: sudo apt-get install libavahi-client-dev
if: ${{ matrix.target == 'x86_64' }}
- name: Check Git Hash
run: ./check-git-hash.sh
- name: Check Environment
run: ./check-environment.sh
- name: Build backend
run: cargo build --release --target x86_64-unknown-linux-gnu --locked
working-directory: backend
if: ${{ matrix.target == 'x86_64' }}
- name: Build backend
run: |
docker run --rm \
-v "/home/runner/.cargo/registry":/root/.cargo/registry \
-v "$(pwd)":/home/rust/src \
-P start9/rust-arm-cross:aarch64 \
sh -c 'cd /home/rust/src/backend &&
rustup install ${{ env.RUST_VERSION }} &&
rustup override set ${{ env.RUST_VERSION }} &&
rustup target add aarch64-unknown-linux-gnu &&
cargo build --release --target ${{ matrix.target }}-unknown-linux-gnu --locked'
if: ${{ matrix.target == 'aarch64' }}
- name: 'Tar files to preserve file permissions'
run: make ARCH=${{ matrix.target }} backend-${{ matrix.target }}.tar
- uses: actions/upload-artifact@v3
with:
name: backend-${{ matrix.target }}
path: backend-${{ matrix.target }}.tar
- name: Install nextest
uses: taiki-e/install-action@nextest
- name: Build and archive tests
run: cargo nextest archive --archive-file nextest-archive-${{ matrix.target }}.tar.zst --target ${{ matrix.target }}-unknown-linux-gnu
working-directory: backend
if: ${{ matrix.target == 'x86_64' }}
- name: Build and archive tests
run: |
docker run --rm \
-v "$HOME/.cargo/registry":/root/.cargo/registry \
-v "$(pwd)":/home/rust/src \
-P start9/rust-arm-cross:aarch64 \
sh -c 'cd /home/rust/src/backend &&
rustup install ${{ env.RUST_VERSION }} &&
rustup override set ${{ env.RUST_VERSION }} &&
rustup target add aarch64-unknown-linux-gnu &&
curl -LsSf https://get.nexte.st/latest/linux | tar zxf - -C ${CARGO_HOME:-~/.cargo}/bin &&
cargo nextest archive --archive-file nextest-archive-${{ matrix.target }}.tar.zst --target ${{ matrix.target }}-unknown-linux-gnu'
if: ${{ matrix.target == 'aarch64' }}
- name: Reset permissions
run: sudo chown -R $USER target
working-directory: backend
if: ${{ matrix.target == 'aarch64' }}
- name: Upload archive to workflow
uses: actions/upload-artifact@v3
with:
name: nextest-archive-${{ matrix.target }}
path: backend/nextest-archive-${{ matrix.target }}.tar.zst
run_tests_backend:
name: Test backend
strategy:
fail-fast: false
matrix:
target: [x86_64, aarch64]
include:
- target: x86_64
- target: aarch64
runs-on: ubuntu-latest
timeout-minutes: 60
needs: build_backend
env:
CARGO_TERM_COLOR: always
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
if: ${{ matrix.target == 'aarch64' }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
if: ${{ matrix.target == 'aarch64' }}
- run: mkdir -p ~/.cargo/bin
if: ${{ matrix.target == 'x86_64' }}
- name: Install nextest
uses: taiki-e/install-action@nextest
if: ${{ matrix.target == 'x86_64' }}
- name: Download archive
uses: actions/download-artifact@v3
with:
name: nextest-archive-${{ matrix.target }}
- name: Download nextest (aarch64)
run: wget -O nextest-aarch64.tar.gz https://get.nexte.st/latest/linux-arm
if: ${{ matrix.target == 'aarch64' }}
- name: Run tests
run: |
${CARGO_HOME:-~/.cargo}/bin/cargo-nextest nextest run --no-fail-fast --archive-file nextest-archive-${{ matrix.target }}.tar.zst \
--filter-expr 'not (test(system::test_get_temp) | test(net::tor::test) | test(system::test_get_disk_usage) | test(net::ssl::certificate_details_persist) | test(net::ssl::ca_details_persist))'
if: ${{ matrix.target == 'x86_64' }}
- name: Run tests
run: |
docker run --rm --platform linux/arm64/v8 \
-v "/home/runner/.cargo/registry":/usr/local/cargo/registry \
-v "$(pwd)":/home/rust/src \
-e CARGO_TERM_COLOR=${{ env.CARGO_TERM_COLOR }} \
-P ubuntu:20.04 \
sh -c '
apt update &&
apt install -y ca-certificates &&
cd /home/rust/src &&
mkdir -p ~/.cargo/bin &&
tar -zxvf nextest-aarch64.tar.gz -C ${CARGO_HOME:-~/.cargo}/bin &&
${CARGO_HOME:-~/.cargo}/bin/cargo-nextest nextest run --archive-file nextest-archive-${{ matrix.target }}.tar.zst \
--filter-expr "not (test(system::test_get_temp) | test(net::tor::test) | test(system::test_get_disk_usage) | test(net::ssl::certificate_details_persist) | test(net::ssl::ca_details_persist))"'
if: ${{ matrix.target == 'aarch64' }}

.github/workflows/frontend.yaml vendored Normal file

@@ -0,0 +1,45 @@
name: Frontend
on:
workflow_call:
workflow_dispatch:
env:
NODEJS_VERSION: '16'
jobs:
frontend:
name: Build frontend
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODEJS_VERSION }}
- name: Get npm cache directory
id: npm-cache-dir
run: |
echo "::set-output name=dir::$(npm config get cache)"
- uses: actions/cache@v3
id: npm-cache
with:
path: ${{ steps.npm-cache-dir.outputs.dir }}
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-node-
- name: Build frontends
run: make frontends
- name: 'Tar files to preserve file permissions'
run: tar -cvf frontend.tar ENVIRONMENT.txt GIT_HASH.txt frontend/dist frontend/config.json
- uses: actions/upload-artifact@v3
with:
name: frontend
path: frontend.tar

.github/workflows/product.yaml vendored Normal file

@@ -0,0 +1,137 @@
name: Build Pipeline
on:
workflow_dispatch:
push:
branches:
- master
- next
pull_request:
branches:
- master
- next
jobs:
compat:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make system-images/compat/compat.tar
artifact_name: compat.tar
artifact_path: system-images/compat/compat.tar
utils:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make system-images/utils/utils.tar
artifact_name: utils.tar
artifact_path: system-images/utils/utils.tar
binfmt:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make system-images/binfmt/binfmt.tar
artifact_name: binfmt.tar
artifact_path: system-images/binfmt/binfmt.tar
nc-broadcast:
uses: ./.github/workflows/reusable-workflow.yaml
with:
build_command: make cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast
artifact_name: nc-broadcast.tar
artifact_path: cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast
backend:
uses: ./.github/workflows/backend.yaml
frontend:
uses: ./.github/workflows/frontend.yaml
image:
name: Build image
runs-on: ubuntu-latest
timeout-minutes: 60
needs: [compat,utils,binfmt,nc-broadcast,backend,frontend]
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Download compat.tar artifact
uses: actions/download-artifact@v3
with:
name: compat.tar
path: system-images/compat
- name: Download utils.tar artifact
uses: actions/download-artifact@v3
with:
name: utils.tar
path: system-images/utils
- name: Download binfmt.tar artifact
uses: actions/download-artifact@v3
with:
name: binfmt.tar
path: system-images/binfmt
- name: Download nc-broadcast.tar artifact
uses: actions/download-artifact@v3
with:
name: nc-broadcast.tar
path: cargo-deps/aarch64-unknown-linux-gnu/release
- name: Download js_snapshot artifact
uses: actions/download-artifact@v3
with:
name: js_snapshot
path: libs/js_engine/src/artifacts/
- name: Download arm_js_snapshot artifact
uses: actions/download-artifact@v3
with:
name: arm_js_snapshot
path: libs/js_engine/src/artifacts/
- name: Download backend artifact
uses: actions/download-artifact@v3
with:
name: backend-aarch64
- name: 'Extract backend'
run:
tar -mxvf backend-aarch64.tar
- name: Download frontend artifact
uses: actions/download-artifact@v3
with:
name: frontend
- name: Skip frontend build
run: |
mkdir frontend/node_modules
mkdir frontend/dist
mkdir patch-db/client/node_modules
mkdir patch-db/client/dist
- name: 'Extract frontend'
run: |
tar -mxvf frontend.tar frontend/config.json
tar -mxvf frontend.tar frontend/dist
- name: Cache raspiOS
id: cache-raspios
uses: actions/cache@v3
with:
path: raspios.img
key: cache-raspios
- name: Build image
run: "make V=1 NO_KEY=1 eos.img --debug"
- name: Compress image
run: "make gzip"
- uses: actions/upload-artifact@v3
with:
name: image
path: eos.tar.gz

.github/workflows/reusable-workflow.yaml vendored Normal file

@@ -0,0 +1,34 @@
name: Reusable Workflow
on:
workflow_call:
inputs:
build_command:
required: true
type: string
artifact_name:
required: true
type: string
artifact_path:
required: true
type: string
jobs:
generic_build_job:
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Build image
run: ${{ inputs.build_command }}
- uses: actions/upload-artifact@v3
with:
name: ${{ inputs.artifact_name }}
path: ${{ inputs.artifact_path }}

.gitignore vendored

@@ -8,3 +8,11 @@
 /product_key.txt
 /*_product_key.txt
 .vscode/settings.json
+deploy_web.sh
+deploy_web.sh
+secrets.db
+.vscode/
+/cargo-deps/**/*
+/ENVIRONMENT.txt
+/GIT_HASH.txt
+/eos.tar.gz

.gitmodules
@@ -1,6 +1,3 @@
[submodule "rpc-toolkit"]
path = rpc-toolkit
url = https://github.com/Start9Labs/rpc-toolkit.git
[submodule "patch-db"] [submodule "patch-db"]
path = patch-db path = patch-db
url = https://github.com/Start9Labs/patch-db.git url = https://github.com/Start9Labs/patch-db.git

CONTRIBUTING.md

@@ -19,6 +19,7 @@ All types of contributions are encouraged and valued. See the [Table of Contents
 - [I Want To Contribute](#i-want-to-contribute)
 - [Reporting Bugs](#reporting-bugs)
 - [Suggesting Enhancements](#suggesting-enhancements)
+- [Project Structure](#project-structure)
 - [Your First Code Contribution](#your-first-code-contribution)
 - [Setting Up Your Development Environment](#setting-up-your-development-environment)
 - [Building The Image](#building-the-image)
@@ -134,22 +135,24 @@ Enhancement suggestions are tracked as [GitHub issues](https://github.com/Start9
 <!-- You might want to create an issue template for enhancement suggestions that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. -->
 ### Project Structure
-EmbassyOS is composed of the following components. Please visit the README for each component to understand the dependency requirements and installation instructions.
-- [`ui`](ui/README.md) (Typescript Ionic Angular) is the code that is deployed to the browser to provide the user interface for EmbassyOS.
-- [`backend`](backend/README.md) (Rust) is a command line utility, daemon, and software development kit that sets up and manages services and their environments, provides the interface for the ui, manages system state, and provides utilities for packaging services for EmbassyOS.
+embassyOS is composed of the following components. Please visit the README for each component to understand the dependency requirements and installation instructions.
+- [`ui`](frontend/README.md) (Typescript Ionic Angular) is the code that is deployed to the browser to provide the user interface for embassyOS.
+- [`backend`](backend/README.md) (Rust) is a command line utility, daemon, and software development kit that sets up and manages services and their environments, provides the interface for the ui, manages system state, and provides utilities for packaging services for embassyOS.
 - `patch-db` - A diff based data store that is used to synchronize data between the front and backend.
-  - Notably, `patch-db` has a [client](patch-db/client/README.md) with its own dependency and installation requirements.
+  - Notably, `patch-db` has a [client](https://github.com/Start9Labs/patch-db/tree/master/client) with its own dependency and installation requirements.
 - `rpc-toolkit` - A library for generating an rpc server with cli bindings from Rust functions.
-- `system-images` - (Docker, Rust) A suite of utility Docker images that are preloaded with EmbassyOS to assist with functions relating to services (eg. configuration, backups, health checks).
-- [`setup-wizard`](ui/README.md) - Code for the user interface that is displayed during the setup and recovery process for EmbassyOS.
-- [`diagnostic-ui`](diagnostic-ui/README.md) - Code for the user interface that is displayed when something has gone wrong with starting up EmbassyOS, which provides helpful debugging tools.
+- `system-images` - (Docker, Rust) A suite of utility Docker images that are preloaded with embassyOS to assist with functions relating to services (eg. configuration, backups, health checks).
+- [`setup-wizard`](frontend/README.md) - Code for the user interface that is displayed during the setup and recovery process for embassyOS.
+- [`diagnostic-ui`](frontend/README.md) - Code for the user interface that is displayed when something has gone wrong with starting up embassyOS, which provides helpful debugging tools.
 ### Your First Code Contribution
-#### Setting up your development environment
-First, clone the EmbassyOS repository and from the project root, pull in the submodules for dependent libraries.
-```
+#### Setting Up Your Development Environment
+First, clone the embassyOS repository and from the project root, pull in the submodules for dependent libraries.
+```sh
 git clone https://github.com/Start9Labs/embassy-os.git
 git submodule update --init --recursive
 ```
@@ -157,7 +160,7 @@ git submodule update --init --recursive
 Depending on which component of the ecosystem you are interested in contributing to, follow the installation requirements listed in that component's README (linked [above](#project-structure))
 #### Building The Image
-This step is for setting up an environment in which to test your code changes if you do not yet have a EmbassyOS.
+This step is for setting up an environment in which to test your code changes if you do not yet have a embassyOS.
 - Requirements
   - `ext4fs` (available if running on the Linux kernel)
@@ -174,7 +177,7 @@ Contributions in the form of setup guides for integrations with external applica
 ## Styleguides
 ### Formatting
-Each component of EmbassyOS contains its own style guide. Code must be formatted with the formatter designated for each component. These are outlined within each component folder's README.
+Each component of embassyOS contains its own style guide. Code must be formatted with the formatter designated for each component. These are outlined within each component folder's README.
 ### Atomic Commits
 Commits [should be atomic](https://en.wikipedia.org/wiki/Atomic_commit#Atomic_commit_convention) and diffs should be easy to read.
@@ -188,7 +191,7 @@ The body of a pull request should contain sufficient description of what the cha
 You should include references to any relevant [issues](https://github.com/Start9Labs/embassy-os/issues).
 ### Rebasing Changes
-When a pull request conflicts with the target branch, you may be asked to rebase it on top of the current target branch. The git rebase command will take care of rebuilding your commits on top of the new base.
+When a pull request conflicts with the target branch, you may be asked to rebase it on top of the current target branch. The `git rebase` command will take care of rebuilding your commits on top of the new base.
 This project aims to have a clean git history, where code changes are only made in non-merge commits. This simplifies auditability because merge commits can be assumed to not contain arbitrary code changes.

LICENSE

@@ -1,25 +1,42 @@
-# START9 PERSONAL USE LICENSE v1.0
-This license governs the use of the accompanying Software. If you use the Software, you accept this license. If you do not accept the license, do not use the Software.
-1. **Definitions.**
-   1. “Licensor” means the copyright owner, Start9 Labs, Inc, or its successor(s) in interest, or a future assignee of the copyright.
-   2. “Source Code” means the preferred form of the Software for making modifications to it.
-   3. “Object Code” means any non-source form of the Software, including the machine-language output by a compiler or assembler.
-   4. “Distribute” means to convey or to publish and generally has the same meaning here as under U.S. Copyright law.
-   5. “Sell” means practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including without limitation fees for hosting or consulting/support services related to the Software), a product or service whose value derives, entirely or substantially, from the functionality of the Software.
-2. **Grant of Rights.** Subject to the terms of this license, the Licensor grants you, the licensee, a non-exclusive, worldwide, royalty-free copyright license to:
-   1. Access, audit, copy, modify, compile, or distribute the Source Code or modifications to the Source Code.
-   2. Run, test, or otherwise use the Object Code.
-3. **Limitations.**
-   1. The grant of rights under the License will NOT include, and the License does NOT grant you the right to:
-      1. Sell the Software or any derivative works based thereon.
-      2. Distribute the Object Code.
-   2. If you Distribute the Source Code, or if permission is separately granted to Distribute the Object Code, you expressly undertake not to remove, or modify, in any manner, the copyright notices attached to the Source Code, and displayed in any output of the Object Code when run, and to reproduce these notices, in an identical manner, in any distributed copies of the Software together with a copy of this license. If you Distribute a modified copy of the Software, or a derivative work based thereon, the work must carry prominent notices stating that you modified it, and giving a relevant date.
-   3. The terms of this license will apply to anyone who comes into possession of a copy of the Software, and any modifications or derivative works based thereon, made by anyone.
-4. **Contributions.** You hereby grant to Licensor a perpetual, irrevocable, worldwide, non-exclusive, royalty-free license to use and exploit any modifications or derivative works based on the Source Code of which you are the author.
-5. **Disclaimer.** THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. LICENSOR HAS NO OBLIGATION TO SUPPORT RECIPIENTS OF THE SOFTWARE.
+# START9 NON-COMMERCIAL LICENSE v1
+Version 1, 22 September 2022
+TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+### 1. Definitions
+"License" means version 1 of the Start9 Non-Commercial License.
+"Licensor" means the Start9 Labs, Inc, or its successor(s) in interest, or a future assignee of the copyright.
+"You" (or "Your") means an individual or organization exercising permissions granted by this License.
+"Source Code" for a work means the preferred form of the work for making modifications to it.
+"Object Code" means any non-source form of a work, including the machine-language output by a compiler or assembler.
+"Work" means any work of authorship, whether in Source or Object form, made available under this License.
+"Derivative Work" means any work, whether in Source or Object form, that is based on (or derived from) the Work.
+"Distribute" means to convey or to publish and generally has the same meaning here as under U.S. Copyright law.
+"Sell" means practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including, without limitation, fees for hosting, consulting, or support services), a product or service whose value derives, entirely or substantially, from the functionality of the Work or Derivative Work.
+### 2. Grant of Rights
+Subject to the terms of this license, the Licensor grants you, the licensee, a non-exclusive, worldwide, royalty-free copyright license to access, audit, copy, modify, compile, run, test, distribute, or otherwise use the Software.
+### 3. Limitations
+1. The grant of rights under the License does NOT include, and the License does NOT grant You the right to Sell the Work or Derivative Work.
+2. If you Distribute the Work or Derivative Work, you expressly undertake not to remove or modify, in any manner, the copyright notices attached to the Work or displayed in any output of the Work when run, and to reproduce these notices, in an identical manner, in any distributed copies of the Work or Derivative Work together with a copy of this License.
+3. If you Distribute a Derivative Work, it must carry prominent notices stating that it has been modified from the Work, providing a relevant date.
+### 4. Contributions
+You hereby grant to Licensor a perpetual, irrevocable, worldwide, non-exclusive, royalty-free license to use and exploit any Derivative Work of which you are the author.
+### 5. Disclaimer
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. LICENSOR HAS NO OBLIGATION TO SUPPORT RECIPIENTS OF THE SOFTWARE.

Makefile

@@ -1,20 +1,30 @@
-EMBASSY_BINS := backend/target/aarch64-unknown-linux-gnu/release/embassyd backend/target/aarch64-unknown-linux-gnu/release/embassy-init backend/target/aarch64-unknown-linux-gnu/release/embassy-cli backend/target/aarch64-unknown-linux-gnu/release/embassy-sdk
+ARCH = aarch64
+ENVIRONMENT_FILE := $(shell ./check-environment.sh)
+GIT_HASH_FILE := $(shell ./check-git-hash.sh)
+EMBASSY_BINS := backend/target/$(ARCH)-unknown-linux-gnu/release/embassyd backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-init backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-cli backend/target/$(ARCH)-unknown-linux-gnu/release/embassy-sdk backend/target/$(ARCH)-unknown-linux-gnu/release/avahi-alias
 EMBASSY_UIS := frontend/dist/ui frontend/dist/setup-wizard frontend/dist/diagnostic-ui
 EMBASSY_SRC := raspios.img product_key.txt $(EMBASSY_BINS) backend/embassyd.service backend/embassy-init.service $(EMBASSY_UIS) $(shell find build)
-COMPAT_SRC := $(shell find system-images/compat/src)
-UTILS_SRC := $(shell find system-images/utils/Dockerfile)
-BACKEND_SRC := $(shell find backend/src) $(shell find patch-db/*/src) $(shell find rpc-toolkit/*/src) backend/Cargo.toml backend/Cargo.lock
-FRONTEND_SRC := $(shell find frontend/projects) $(shell find frontend/assets)
-PATCH_DB_CLIENT_SRC = $(shell find patch-db/client -not -path patch-db/client/dist)
-GIT_REFS := $(shell find .git/refs/heads)
-TMP_FILE := $(shell mktemp)
+COMPAT_SRC := $(shell find system-images/compat/ -not -path 'system-images/compat/target/*' -and -not -name compat.tar -and -not -name target)
+UTILS_SRC := $(shell find system-images/utils/ -not -name utils.tar)
+BINFMT_SRC := $(shell find system-images/binfmt/ -not -name binfmt.tar)
+BACKEND_SRC := $(shell find backend/src) $(shell find backend/migrations) $(shell find patch-db/*/src) backend/Cargo.toml backend/Cargo.lock
+FRONTEND_SHARED_SRC := $(shell find frontend/projects/shared) $(shell find frontend/assets) $(shell ls -p frontend/ | grep -v / | sed 's/^/frontend\//g') frontend/node_modules frontend/config.json patch-db/client/dist frontend/patchdb-ui-seed.json
+FRONTEND_UI_SRC := $(shell find frontend/projects/ui)
+FRONTEND_SETUP_WIZARD_SRC := $(shell find frontend/projects/setup-wizard)
+FRONTEND_DIAGNOSTIC_UI_SRC := $(shell find frontend/projects/diagnostic-ui)
+PATCH_DB_CLIENT_SRC := $(shell find patch-db/client -not -path patch-db/client/dist)
+GZIP_BIN := $(shell which pigz || which gzip)
+$(shell sudo true)
 .DELETE_ON_ERROR:
+.PHONY: all gzip clean format sdk snapshots frontends ui backend
 all: eos.img
-gzip: eos.img
-	gzip -k eos.img
+gzip: eos.tar.gz
+eos.tar.gz: eos.img
+	tar --format=posix -cS -f- eos.img | $(GZIP_BIN) > eos.tar.gz
 clean:
 	rm -f eos.img
@@ -27,17 +37,27 @@ clean:
 	rm -rf frontend/dist
 	rm -rf patch-db/client/node_modules
 	rm -rf patch-db/client/dist
+	sudo rm -rf cargo-deps
-eos.img: $(EMBASSY_SRC) system-images/compat/compat.tar system-images/utils/utils.tar
+format:
+	cd backend && cargo +nightly fmt
+	cd libs && cargo +nightly fmt
+sdk:
+	cd backend/ && ./install-sdk.sh
+eos.img: $(EMBASSY_SRC) system-images/compat/compat.tar system-images/utils/utils.tar system-images/binfmt/binfmt.tar cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast $(ENVIRONMENT_FILE) $(GIT_HASH_FILE)
 	! test -f eos.img || rm eos.img
 	if [ "$(NO_KEY)" = "1" ]; then NO_KEY=1 ./build/make-image.sh; else ./build/make-image.sh; fi
 system-images/compat/compat.tar: $(COMPAT_SRC)
-	cd system-images/compat && ./build.sh
-	cd system-images/compat && DOCKER_CLI_EXPERIMENTAL=enabled docker buildx build --tag start9/x_system/compat --platform=linux/arm64 -o type=docker,dest=compat.tar .
+	cd system-images/compat && make
 system-images/utils/utils.tar: $(UTILS_SRC)
-	cd system-images/utils && DOCKER_CLI_EXPERIMENTAL=enabled docker buildx build --tag start9/x_system/utils --platform=linux/arm64 -o type=docker,dest=utils.tar .
+	cd system-images/utils && make
+system-images/binfmt/binfmt.tar: $(BINFMT_SRC)
+	cd system-images/binfmt && make
 raspios.img:
 	wget --continue https://downloads.raspberrypi.org/raspios_lite_arm64/images/raspios_lite_arm64-2022-01-28/2022-01-28-raspios-bullseye-arm64-lite.zip
@@ -50,25 +70,53 @@ product_key.txt:
 	if [ "$(KEY)" != "" ]; then $(shell which echo) -n "$(KEY)" > product_key.txt; fi
 	echo >> product_key.txt
-$(EMBASSY_BINS): $(BACKEND_SRC)
+snapshots: libs/snapshot-creator/Cargo.toml
+	cd libs/ && ./build-v8-snapshot.sh
+	cd libs/ && ./build-arm-v8-snapshot.sh
+$(EMBASSY_BINS): $(BACKEND_SRC) $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) frontend/patchdb-ui-seed.json
 	cd backend && ./build-prod.sh
+	touch $(EMBASSY_BINS)
 frontend/node_modules: frontend/package.json
 	npm --prefix frontend ci
-$(EMBASSY_UIS): $(FRONTEND_SRC) frontend/node_modules patch-db/client patch-db/client/dist frontend/config.json
-	npm --prefix frontend run build:all
-frontend/config.json: .git/HEAD $(GIT_REFS)
+frontend/dist/ui: $(FRONTEND_UI_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+	npm --prefix frontend run build:ui
+frontend/dist/setup-wizard: $(FRONTEND_SETUP_WIZARD_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+	npm --prefix frontend run build:setup
+frontend/dist/diagnostic-ui: $(FRONTEND_DIAGNOSTIC_UI_SRC) $(FRONTEND_SHARED_SRC) $(ENVIRONMENT_FILE)
+	npm --prefix frontend run build:dui
+frontend/config.json: $(GIT_HASH_FILE) frontend/config-sample.json
 	jq '.useMocks = false' frontend/config-sample.json > frontend/config.json
 	npm --prefix frontend run-script build-config
+frontend/patchdb-ui-seed.json: frontend/package.json
+	jq '."ack-welcome" = "$(shell yq '.version' frontend/package.json)"' frontend/patchdb-ui-seed.json > ui-seed.tmp
+	mv ui-seed.tmp frontend/patchdb-ui-seed.json
 patch-db/client/node_modules: patch-db/client/package.json
-	npm --prefix patch-db/client install
+	npm --prefix patch-db/client ci
 patch-db/client/dist: $(PATCH_DB_CLIENT_SRC) patch-db/client/node_modules
 	! test -d patch-db/client/dist || rm -rf patch-db/client/dist
-	npm --prefix patch-db/client run build
+	npm --prefix frontend run build:deps
+# used by github actions
+backend-$(ARCH).tar: $(ENVIRONMENT_FILE) $(GIT_HASH_FILE) $(EMBASSY_BINS)
+	tar -cvf $@ $^
 # this is a convenience step to build all frontends - it is not referenced elsewhere in this file
-frontend: frontend/node_modules $(EMBASSY_UIS)
+frontends: $(EMBASSY_UIS)
+# this is a convenience step to build the UI
+ui: frontend/dist/ui
+# used by github actions
+backend: $(EMBASSY_BINS)
+cargo-deps/aarch64-unknown-linux-gnu/release/nc-broadcast:
+	./build-cargo-dep.sh nc-broadcast
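The `GZIP_BIN := $(shell which pigz || which gzip)` line in the Makefile above picks the parallel compressor `pigz` when it is installed and falls back to standard `gzip` otherwise. The same fallback can be sketched in plain shell; the sample data piped through the compressor is illustrative:

```shell
#!/bin/sh
# Prefer pigz (parallel gzip) when installed, otherwise fall back to gzip,
# mirroring the Makefile's GZIP_BIN := $(shell which pigz || which gzip) line.
GZIP_BIN=$(command -v pigz || command -v gzip)

# Round-trip some sample data through the chosen compressor to confirm it works.
printf 'eos.img contents' | "$GZIP_BIN" -c | "$GZIP_BIN" -dc
```

Since `pigz` accepts the same flags as `gzip`, the rest of the Makefile can use `$(GZIP_BIN)` without caring which binary was selected.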

README.md

@@ -1,5 +1,6 @@
-# EmbassyOS
-[![Version](https://img.shields.io/github/v/tag/Start9Labs/embassy-os?color=success)](https://github.com/Start9Labs/embassy-os/releases)
+# embassyOS
+[![version](https://img.shields.io/github/v/tag/Start9Labs/embassy-os?color=success)](https://github.com/Start9Labs/embassy-os/releases)
+[![build](https://github.com/Start9Labs/embassy-os/actions/workflows/product.yaml/badge.svg)](https://github.com/Start9Labs/embassy-os/actions/workflows/product.yaml)
 [![community](https://img.shields.io/badge/community-matrix-yellow)](https://matrix.to/#/#community:matrix.start9labs.com)
 [![community](https://img.shields.io/badge/community-telegram-informational)](https://t.me/start9_labs)
 [![support](https://img.shields.io/badge/support-docs-important)](https://docs.start9labs.com)
@@ -11,10 +12,10 @@
 ### _Welcome to the era of Sovereign Computing_ ###
-EmbassyOS is a browser-based, graphical operating system for a personal server. EmbassyOS facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services. It is the most advanced, secure, reliable, and user friendly personal server OS in the world.
-## Running EmbassyOS
-There are multiple ways to get your hands on EmbassyOS.
+embassyOS is a browser-based, graphical operating system for a personal server. embassyOS facilitates the discovery, installation, network configuration, service configuration, data backup, dependency management, and health monitoring of self-hosted software services. It is the most advanced, secure, reliable, and user friendly personal server OS in the world.
+## Running embassyOS
+There are multiple ways to get your hands on embassyOS.
 ### :moneybag: Buy an Embassy
 This is the most convenient option. Simply [buy an Embassy](https://start9.com) from Start9 and plug it in. Depending on where you live, shipping costs and import duties will vary.
@@ -28,22 +29,23 @@ While not as convenient as buying an Embassy, this option is easier than you mig
 To pursue this option, follow this [guide](https://start9.com/latest/diy).
-### :hammer_and_wrench: Build EmbassyOS from Source
-EmbassyOS can be built from source, for personal use, for free.
+### :hammer_and_wrench: Build embassyOS from Source
+embassyOS can be built from source, for personal use, for free.
 A detailed guide for doing so can be found [here](https://github.com/Start9Labs/embassy-os/blob/master/build/README.md).
 ## :heart: Contributing
-There are multiple ways to contribute: work directly on EmbassyOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://github.com/Start9Labs/embassy-os/blob/master/CONTRIBUTING.md).
+There are multiple ways to contribute: work directly on embassyOS, package a service for the marketplace, or help with documentation and guides. To learn more about contributing, see [here](https://github.com/Start9Labs/embassy-os/blob/master/CONTRIBUTING.md).
 ## UI Screenshots
 <p align="center">
-  <img src="assets/EmbassyOS.png" alt="EmbassyOS" width="65%">
+  <img src="assets/embassyOS.png" alt="embassyOS" width="85%">
 </p>
 <p align="center">
-  <img src="assets/eos-services.png" alt="Embassy Services" width="45%">
-  <img src="assets/eos-preferences.png" alt="Embassy Preferences" width="45%">
+  <img src="assets/eOS-preferences.png" alt="Embassy Preferences" width="49%">
+  <img src="assets/eOS-ghost.png" alt="Embassy Ghost Service" width="49%">
 </p>
 <p align="center">
-  <img src="assets/eos-bitcoind-health-check.png" alt="Embassy Bitcoin Health Checks" width="45%">
-  <img src="assets/eos-logs.png" alt="Embassy Logs" width="45%">
+  <img src="assets/eOS-synapse-health-check.png" alt="Embassy Synapse Health Checks" width="49%">
+  <img src="assets/eOS-sideload.png" alt="Embassy Sideload Service" width="49%">
 </p>

Binary assets changed (image contents not shown):
- Added: assets/eOS-ghost.png (281 KiB), assets/eOS-preferences.png (266 KiB), assets/eOS-sideload.png (154 KiB), assets/embassyOS.png (191 KiB), and one further image (213 KiB)
- Removed: five images (285 KiB, 334 KiB, 1.2 MiB, 347 KiB, 599 KiB)
backend/Cargo.lock (generated): diff suppressed because it is too large (2,814 lines changed)
backend/Cargo.toml

@@ -14,7 +14,7 @@ keywords = [
 name = "embassy-os"
 readme = "README.md"
 repository = "https://github.com/Start9Labs/embassy-os"
-version = "0.3.0-rev.3"
+version = "0.3.2"
 [lib]
 name = "embassy"
@@ -36,104 +36,123 @@ path = "src/bin/embassy-sdk.rs"
 name = "embassy-cli"
 path = "src/bin/embassy-cli.rs"
+[[bin]]
+name = "avahi-alias"
+path = "src/bin/avahi-alias.rs"
 [features]
 avahi = ["avahi-sys"]
-beta = []
-default = ["avahi", "sound", "metal"]
+default = ["avahi", "sound", "metal", "js_engine"]
+dev = []
 metal = []
 sound = []
 unstable = ["patch-db/unstable"]
 [dependencies]
 aes = { version = "0.7.5", features = ["ctr"] }
-async-trait = "0.1.51"
+async-stream = "0.3.3"
+async-trait = "0.1.56"
 avahi-sys = { git = "https://github.com/Start9Labs/avahi-sys", version = "0.10.0", branch = "feature/dynamic-linking", features = [
     "dynamic",
 ], optional = true }
 base32 = "0.4.0"
 base64 = "0.13.0"
+base64ct = "1.5.1"
 basic-cookies = "0.1.4"
-bollard = "0.11.0"
+bollard = "0.13.0"
 chrono = { version = "0.4.19", features = ["serde"] }
-clap = "2.33"
-color-eyre = "0.5"
-cookie_store = "0.15.0"
-digest = "0.9.0"
+clap = "3.2.8"
+color-eyre = "0.6.1"
+cookie_store = "0.16.1"
+current_platform = "0.2.0"
+digest = "0.10.3"
+digest-old = { package = "digest", version = "0.9.0" }
 divrem = "1.0.0"
+ed25519 = { version = "1.5.2", features = ["pkcs8", "pem", "alloc"] }
 ed25519-dalek = { version = "1.0.1", features = ["serde"] }
 emver = { version = "0.1.6", features = ["serde"] }
-fd-lock-rs = "0.1.3"
-futures = "0.3.17"
+fd-lock-rs = "0.1.4"
+futures = "0.3.21"
 git-version = "0.3.5"
+helpers = { path = "../libs/helpers" }
 hex = "0.4.3"
-hmac = "0.11.0"
-http = "0.2.5"
-hyper = "0.14.13"
-hyper-ws-listener = { git = "https://github.com/Start9Labs/hyper-ws-listener.git", branch = "main" }
-imbl = "1.0.1"
-indexmap = { version = "1.7.0", features = ["serde"] }
+hmac = "0.12.1"
+http = "0.2.8"
+hyper = "0.14.20"
+hyper-ws-listener = "0.2.0"
+imbl = "2.0.0"
+indexmap = { version = "1.9.1", features = ["serde"] }
 isocountry = "0.3.2"
-itertools = "0.10.1"
+itertools = "0.10.3"
+josekit = "0.8.1"
+js_engine = { path = '../libs/js_engine', optional = true }
 jsonpath_lib = "0.3.0"
-lazy_static = "1.4"
-libc = "0.2.103"
-log = "0.4.14"
-nix = "0.23.0"
-nom = "7.0.0"
+lazy_static = "1.4.0"
+libc = "0.2.126"
+log = "0.4.17"
+models = { version = "*", path = "../libs/models" }
+nix = "0.25.0"
+nom = "7.1.1"
 num = "0.4.0"
-num_enum = "0.5.4"
+num_enum = "0.5.7"
 openssh-keys = "0.5.0"
-openssl = { version = "0.10.36", features = ["vendored"] }
+openssl = { version = "0.10.41", features = ["vendored"] }
 patch-db = { version = "*", path = "../patch-db/patch-db", features = [
     "trace",
 ] }
-pbkdf2 = "0.9.0"
-pin-project = "1.0.8"
-platforms = "1.1.0"
-prettytable-rs = "0.8.0"
+pbkdf2 = "0.11.0"
+pin-project = "1.0.11"
+pkcs8 = { version = "0.9.0", features = ["std"] }
+prettytable-rs = "0.9.0"
 proptest = "1.0.0"
 proptest-derive = "0.3.0"
-rand = "0.7.3"
-regex = "1.5.4"
-reqwest = { version = "0.11.4", features = ["stream", "json", "socks"] }
-reqwest_cookie_store = "0.2.0"
-rpassword = "5.0.1"
-rpc-toolkit = { version = "*", path = "../rpc-toolkit/rpc-toolkit" }
-rust-argon2 = "0.8.3"
+rand = { version = "0.8.5", features = ["std"] }
+rand-old = { package = "rand", version = "0.7.3" }
+regex = "1.6.0"
+reqwest = { version = "0.11.11", features = ["stream", "json", "socks"] }
+reqwest_cookie_store = "0.4.0"
+rpassword = "7.0.0"
+rpc-toolkit = "0.2.1"
+rust-argon2 = "1.0.0"
 scopeguard = "1.1" # because avahi-sys fucks your shit up
-serde = { version = "1.0.130", features = ["derive", "rc"] }
+serde = { version = "1.0.139", features = ["derive", "rc"] }
 serde_cbor = { package = "ciborium", version = "0.2.0" }
-serde_json = "1.0.68"
-serde_toml = { package = "toml", version = "0.5.8" }
-serde_yaml = "0.8.21"
-sha2 = "0.9.8"
-simple-logging = "2.0"
-sqlx = { version = "0.5.11", features = [
+serde_json = "1.0.82"
+serde_toml = { package = "toml", version = "0.5.9" }
+serde_with = { version = "2.0.1", features = ["macros", "json"] }
+serde_yaml = "0.9.11"
+sha2 = "0.10.2"
+sha2-old = { package = "sha2", version = "0.9.9" }
+simple-logging = "2.0.2"
+sqlx = { version = "0.6.0", features = [
     "chrono",
     "offline",
     "runtime-tokio-rustls",
-    "sqlite",
+    "postgres",
 ] }
-stderrlog = "0.5.1"
-tar = "0.4.37"
-thiserror = "1.0.29"
-tokio = { version = "1.15.0", features = ["full"] }
-tokio-compat-02 = "0.2.0"
-tokio-stream = { version = "0.1.7", features = ["io-util", "sync"] }
+stderrlog = "0.5.3"
+tar = "0.4.38"
+thiserror = "1.0.31"
+tokio = { version = "1.19.2", features = ["full"] }
+tokio-stream = { version = "0.1.9", features = ["io-util", "sync"] }
 tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
-tokio-tungstenite = "0.14.0"
-tokio-util = { version = "0.6.8", features = ["io"] }
-torut = "0.2.0"
+tokio-tungstenite = { version = "0.17.1", features = ["native-tls"] }
+tokio-util = { version = "0.7.3", features = ["io"] }
+torut = "0.2.1"
tracing = "0.1" tracing = "0.1.35"
tracing-error = "0.1" tracing-error = "0.2.0"
tracing-futures = "0.2" tracing-futures = "0.2.5"
tracing-subscriber = "0.2" tracing-subscriber = { version = "0.3.14", features = ["env-filter"] }
typed-builder = "0.9.1" trust-dns-server = "0.22.0"
typed-builder = "0.10.0"
url = { version = "2.2.2", features = ["serde"] } url = { version = "2.2.2", features = ["serde"] }
uuid = { version = "1.1.2", features = ["v4"] }
[dependencies.serde_with] [profile.test]
features = ["macros", "json"] opt-level = 3
version = "1.10.0"
[profile.dev.package.backtrace] [profile.dev.package.backtrace]
opt-level = 3 opt-level = 3
[profile.dev.package.sqlx-macros]
opt-level = 3


@@ -1,4 +1,4 @@
-# EmbassyOS Backend
+# embassyOS Backend
 - Requirements:
   - [Install Rust](https://rustup.rs)
@@ -12,9 +12,9 @@
 ## Structure
-The EmbassyOS backend is broken up into 4 different binaries:
+The embassyOS backend is broken up into 4 different binaries:
-- embassyd: This is the main workhorse of EmbassyOS - any new functionality you want will likely go here
+- embassyd: This is the main workhorse of embassyOS - any new functionality you want will likely go here
 - embassy-init: This is the component responsible for allowing you to set up your device, and handles system initialization on startup
 - embassy-cli: This is a CLI tool that will allow you to issue commands to embassyd and control it similarly to the UI
 - embassy-sdk: This is a CLI tool that aids in building and packaging services you wish to deploy to the Embassy
@@ -25,7 +25,7 @@ See [here](/backend/Cargo.toml) for details.
 ## Building
-You can build the entire operating system image using `make` from the root of the EmbassyOS project. This will subsequently invoke the build scripts above to actually create the requisite binaries and put them onto the final operating system image.
+You can build the entire operating system image using `make` from the root of the embassyOS project. This will subsequently invoke the build scripts above to actually create the requisite binaries and put them onto the final operating system image.
 ## Questions


@@ -8,9 +8,17 @@ if [ "$0" != "./build-dev.sh" ]; then
 exit 1
 fi
-alias 'rust-arm64-builder'='docker run --rm -it -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-arm-cross:aarch64'
+USE_TTY=
+if tty -s; then
+    USE_TTY="-it"
+fi
+alias 'rust-arm64-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-arm-cross:aarch64'
 cd ..
-rust-arm64-builder sh -c "(cd backend && cargo build)"
+rust-arm64-builder sh -c "(cd backend && cargo build --locked)"
 cd backend
+sudo chown -R $USER target
+sudo chown -R $USER ~/.cargo
 #rust-arm64-builder aarch64-linux-gnu-strip target/aarch64-unknown-linux-gnu/release/embassyd
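The `tty -s` guard introduced across these build scripts can be sketched in isolation: `docker run -it` fails in non-interactive environments (CI, cron) because there is no terminal to allocate, so `-it` is only added when stdin is actually a TTY. A minimal standalone sketch of the pattern (not the repo's script itself):

```shell
#!/bin/sh
# Only pass -it to docker when stdin is a real terminal;
# in CI there is no TTY and `docker run -it` would error out.
USE_TTY=
if tty -s; then
    USE_TTY="-it"
fi
# $USE_TTY expands to nothing in non-interactive shells,
# so the docker invocation stays valid either way.
echo "docker run $USE_TTY --rm alpine true"
```

Because the variable is unquoted at the call site, an empty value disappears from the argument list instead of becoming an empty argument.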


@@ -8,8 +8,16 @@ if [ "$0" != "./build-portable-dev.sh" ]; then
 exit 1
 fi
-alias 'rust-musl-builder'='docker run --rm -it -v "$HOME"/.cargo/registry:/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-musl-cross:x86_64-musl'
+USE_TTY=
+if tty -s; then
+    USE_TTY="-it"
+fi
+alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME"/.cargo/registry:/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-musl-cross:x86_64-musl'
 cd ..
-rust-musl-builder sh -c "(cd backend && cargo +beta build --target=x86_64-unknown-linux-musl --no-default-features)"
+rust-musl-builder sh -c "(cd backend && cargo +beta build --target=x86_64-unknown-linux-musl --no-default-features --locked)"
 cd backend
+sudo chown -R $USER target
+sudo chown -R $USER ~/.cargo


@@ -8,8 +8,16 @@ if [ "$0" != "./build-portable.sh" ]; then
 exit 1
 fi
-alias 'rust-musl-builder'='docker run --rm -it -v "$HOME"/.cargo/registry:/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-musl-cross:x86_64-musl'
+USE_TTY=
+if tty -s; then
+    USE_TTY="-it"
+fi
+alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME"/.cargo/registry:/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-musl-cross:x86_64-musl'
 cd ..
-rust-musl-builder sh -c "(cd backend && cargo +beta build --release --target=x86_64-unknown-linux-musl --no-default-features)"
+rust-musl-builder sh -c "(cd backend && cargo +beta build --release --target=x86_64-unknown-linux-musl --no-default-features --locked)"
 cd backend
+sudo chown -R $USER target
+sudo chown -R $USER ~/.cargo


@@ -8,21 +8,30 @@ if [ "$0" != "./build-prod.sh" ]; then
 exit 1
 fi
-alias 'rust-arm64-builder'='docker run --rm -it -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src start9/rust-arm-cross:aarch64'
+USE_TTY=
+if tty -s; then
+    USE_TTY="-it"
+fi
+alias 'rust-arm64-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P start9/rust-arm-cross:aarch64'
 cd ..
+FLAGS=""
 if [[ "$ENVIRONMENT" =~ (^|-)unstable($|-) ]]; then
-    if [[ "$ENVIRONMENT" =~ (^|-)beta($|-) ]]; then
-        rust-arm64-builder sh -c "(cd backend && cargo build --release --features beta,unstable)"
-    else
-        rust-arm64-builder sh -c "(cd backend && cargo build --release --features unstable)"
-    fi
+    FLAGS="unstable,$FLAGS"
+fi
+if [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
+    FLAGS="dev,$FLAGS"
+fi
+if [[ "$FLAGS" = "" ]]; then
+    rust-arm64-builder sh -c "(git config --global --add safe.directory '*'; cd backend && cargo build --release --locked)"
 else
-    if [[ "$ENVIRONMENT" =~ (^|-)beta($|-) ]]; then
-        rust-arm64-builder sh -c "(cd backend && cargo build --release --features beta)"
-    else
-        rust-arm64-builder sh -c "(cd backend && cargo build --release)"
-    fi
+    echo "FLAGS=$FLAGS"
+    rust-arm64-builder sh -c "(git config --global --add safe.directory '*'; cd backend && cargo build --release --features $FLAGS --locked)"
 fi
 cd backend
+sudo chown -R $USER target
+sudo chown -R $USER ~/.cargo
 #rust-arm64-builder aarch64-linux-gnu-strip target/aarch64-unknown-linux-gnu/release/embassyd
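The new build-prod.sh replaces the nested per-combination if/else with flag accumulation: each token matched in `$ENVIRONMENT` (hyphen-delimited, anchored by `(^|-)…($|-)`) appends one cargo feature. A standalone sketch of that pattern, with a hypothetical `ENVIRONMENT` value for illustration:

```shell
#!/bin/bash
# Sketch of the flag-accumulation pattern: each environment token matched
# appends a cargo feature, instead of one branch per feature combination.
ENVIRONMENT="dev-unstable"   # hypothetical value for illustration
FLAGS=""
if [[ "$ENVIRONMENT" =~ (^|-)unstable($|-) ]]; then
    FLAGS="unstable,$FLAGS"
fi
if [[ "$ENVIRONMENT" =~ (^|-)dev($|-) ]]; then
    FLAGS="dev,$FLAGS"
fi
# Both tokens match here, so this prints: FLAGS=dev,unstable,
echo "FLAGS=$FLAGS"
```

The anchors prevent substring matches (e.g. a token like `predev` would not match `(^|-)dev($|-)`), and adding a new feature means adding one `if` block rather than doubling the branch count.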


@@ -1,12 +0,0 @@
#!/bin/bash
# Enter the backend directory, copy over the built EmbassyOS binaries and systemd services, edit the nginx config, then create the .ssh directory
cp target/aarch64-unknown-linux-gnu/release/embassy-init /mnt/usr/local/bin
cp target/aarch64-unknown-linux-gnu/release/embassyd /mnt/usr/local/bin
cp target/aarch64-unknown-linux-gnu/release/embassy-cli /mnt/usr/local/bin
cp *.service /mnt/etc/systemd/system/
echo "application/wasm wasm;" | sudo tee -a "/mnt/etc/nginx/mime.types"
mkdir -p /mnt/root/.ssh


@@ -6,9 +6,10 @@ Wants=avahi-daemon.service nginx.service tor.service
 [Service]
 Type=oneshot
-Environment=RUST_LOG=embassy_init=debug,embassy=debug
+Environment=RUST_LOG=embassy_init=debug,embassy=debug,js_engine=debug,patch_db=warn
 ExecStart=/usr/local/bin/embassy-init
 RemainAfterExit=true
+StandardOutput=append:/var/log/embassy-init.log

 [Install]
 WantedBy=embassyd.service


@@ -5,7 +5,7 @@ Requires=embassy-init.service
 [Service]
 Type=simple
-Environment=RUST_LOG=embassyd=debug,embassy=debug
+Environment=RUST_LOG=embassyd=debug,embassy=debug,js_engine=debug,patch_db=warn
 ExecStart=/usr/local/bin/embassyd
 Restart=always
 RestartSec=3


@@ -8,4 +8,4 @@ if [ "$0" != "./install-sdk.sh" ]; then
 exit 1
 fi
-cargo install --bin=embassy-sdk --path=. --no-default-features
+cargo install --bin=embassy-sdk --bin=embassy-cli --path=. --no-default-features --features=js_engine --locked


@@ -1,45 +1,47 @@
 -- Add migration script here
-CREATE TABLE IF NOT EXISTS tor
-(
-    package TEXT NOT NULL,
-    interface TEXT NOT NULL,
-    key BLOB NOT NULL CHECK (length(key) = 64),
+CREATE TABLE IF NOT EXISTS tor (
+    package TEXT NOT NULL,
+    interface TEXT NOT NULL,
+    key BYTEA NOT NULL CHECK (length(key) = 64),
     PRIMARY KEY (package, interface)
 );
-CREATE TABLE IF NOT EXISTS session
-(
+
+CREATE TABLE IF NOT EXISTS session (
     id TEXT NOT NULL PRIMARY KEY,
     logged_in TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
     logged_out TIMESTAMP,
     last_active TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
     user_agent TEXT,
     metadata TEXT NOT NULL DEFAULT 'null'
 );
-CREATE TABLE IF NOT EXISTS account
-(
-    id INTEGER PRIMARY KEY CHECK (id = 0),
+
+CREATE TABLE IF NOT EXISTS account (
+    id SERIAL PRIMARY KEY CHECK (id = 0),
     password TEXT NOT NULL,
-    tor_key BLOB NOT NULL CHECK (length(tor_key) = 64)
+    tor_key BYTEA NOT NULL CHECK (length(tor_key) = 64)
 );
-CREATE TABLE IF NOT EXISTS ssh_keys
-(
+
+CREATE TABLE IF NOT EXISTS ssh_keys (
     fingerprint TEXT NOT NULL,
     openssh_pubkey TEXT NOT NULL,
     created_at TEXT NOT NULL,
     PRIMARY KEY (fingerprint)
 );
-CREATE TABLE IF NOT EXISTS certificates
-(
-    id INTEGER PRIMARY KEY, -- Root = 0, Int = 1, Other = 2..
+
+CREATE TABLE IF NOT EXISTS certificates (
+    id SERIAL PRIMARY KEY,
+    -- Root = 0, Int = 1, Other = 2..
     priv_key_pem TEXT NOT NULL,
     certificate_pem TEXT NOT NULL,
     lookup_string TEXT UNIQUE,
     created_at TEXT,
     updated_at TEXT
 );
-CREATE TABLE IF NOT EXISTS notifications
-(
-    id INTEGER PRIMARY KEY,
+
+ALTER SEQUENCE certificates_id_seq START 2 RESTART 2;
+
+CREATE TABLE IF NOT EXISTS notifications (
+    id SERIAL PRIMARY KEY,
     package_id TEXT,
     created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
     code INTEGER NOT NULL,
@@ -48,9 +50,9 @@ CREATE TABLE IF NOT EXISTS notifications
     message TEXT NOT NULL,
     data TEXT
 );
-CREATE TABLE IF NOT EXISTS cifs_shares
-(
-    id INTEGER PRIMARY KEY,
+
+CREATE TABLE IF NOT EXISTS cifs_shares (
+    id SERIAL PRIMARY KEY,
     hostname TEXT NOT NULL,
     path TEXT NOT NULL,
     username TEXT NOT NULL,

File diff suppressed because it is too large


@@ -1,76 +1,22 @@
 use std::collections::{BTreeMap, BTreeSet};
-use std::path::Path;
-use std::str::FromStr;
-use std::time::Duration;
 use clap::ArgMatches;
 use color_eyre::eyre::eyre;
 use indexmap::IndexSet;
-use patch_db::HasModel;
+pub use models::ActionId;
 use rpc_toolkit::command;
 use serde::{Deserialize, Serialize};
 use tracing::instrument;
-use self::docker::DockerAction;
 use crate::config::{Config, ConfigSpec};
 use crate::context::RpcContext;
-use crate::id::{Id, ImageId, InvalidId};
+use crate::id::ImageId;
+use crate::procedure::{PackageProcedure, ProcedureName};
 use crate::s9pk::manifest::PackageId;
 use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
 use crate::util::Version;
 use crate::volume::Volumes;
 use crate::{Error, ResultExt};
-pub mod docker;
-// TODO: create RPC endpoint that looks up the appropriate action and calls `execute`
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize)]
pub struct ActionId<S: AsRef<str> = String>(Id<S>);
impl FromStr for ActionId {
type Err = InvalidId;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(ActionId(Id::try_from(s.to_owned())?))
}
}
impl From<ActionId> for String {
fn from(value: ActionId) -> Self {
value.0.into()
}
}
impl<S: AsRef<str>> AsRef<ActionId<S>> for ActionId<S> {
fn as_ref(&self) -> &ActionId<S> {
self
}
}
impl<S: AsRef<str>> std::fmt::Display for ActionId<S> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", &self.0)
}
}
impl<S: AsRef<str>> AsRef<str> for ActionId<S> {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
impl<S: AsRef<str>> AsRef<Path> for ActionId<S> {
fn as_ref(&self) -> &Path {
self.0.as_ref().as_ref()
}
}
impl<'de, S> Deserialize<'de> for ActionId<S>
where
S: AsRef<str>,
Id<S>: Deserialize<'de>,
{
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::de::Deserializer<'de>,
{
Ok(ActionId(Deserialize::deserialize(deserializer)?))
}
}
 #[derive(Clone, Debug, Default, Deserialize, Serialize)]
 pub struct Actions(pub BTreeMap<ActionId, Action>);
@@ -103,16 +49,21 @@ pub struct Action {
     pub description: String,
     #[serde(default)]
     pub warning: Option<String>,
-    pub implementation: ActionImplementation,
+    pub implementation: PackageProcedure,
     pub allowed_statuses: IndexSet<DockerStatus>,
     #[serde(default)]
     pub input_spec: ConfigSpec,
 }
 impl Action {
     #[instrument]
-    pub fn validate(&self, volumes: &Volumes, image_ids: &BTreeSet<ImageId>) -> Result<(), Error> {
+    pub fn validate(
+        &self,
+        eos_version: &Version,
+        volumes: &Volumes,
+        image_ids: &BTreeSet<ImageId>,
+    ) -> Result<(), Error> {
         self.implementation
-            .validate(volumes, image_ids, true)
+            .validate(eos_version, volumes, image_ids, true)
             .with_ctx(|_| {
                 (
                     crate::ErrorKind::ValidateS9pk,
@@ -141,7 +92,7 @@ impl Action {
                 ctx,
                 pkg_id,
                 pkg_version,
-                Some(&format!("{}Action", action_id)),
+                ProcedureName::Action(action_id.clone()),
                 volumes,
                 input,
                 true,
@@ -152,77 +103,7 @@ impl Action {
     }
 }
-#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
+fn display_action_result(action_result: ActionResult, matches: &ArgMatches) {
#[serde(rename_all = "kebab-case")]
#[serde(tag = "type")]
pub enum ActionImplementation {
Docker(DockerAction),
}
impl ActionImplementation {
#[instrument]
pub fn validate(
&self,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
expected_io: bool,
) -> Result<(), color_eyre::eyre::Report> {
match self {
ActionImplementation::Docker(action) => {
action.validate(volumes, image_ids, expected_io)
}
}
}
#[instrument(skip(ctx, input))]
pub async fn execute<I: Serialize, O: for<'de> Deserialize<'de>>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
name: Option<&str>,
volumes: &Volumes,
input: Option<I>,
allow_inject: bool,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error> {
match self {
ActionImplementation::Docker(action) => {
action
.execute(
ctx,
pkg_id,
pkg_version,
name,
volumes,
input,
allow_inject,
timeout,
)
.await
}
}
}
#[instrument(skip(ctx, input))]
pub async fn sandboxed<I: Serialize, O: for<'de> Deserialize<'de>>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
input: Option<I>,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error> {
match self {
ActionImplementation::Docker(action) => {
action
.sandboxed(ctx, pkg_id, pkg_version, volumes, input, timeout)
.await
}
}
}
}
-fn display_action_result(action_result: ActionResult, matches: &ArgMatches<'_>) {
     if matches.is_present("format") {
         return display_serializable(action_result, matches);
     }
@@ -278,13 +159,3 @@ pub async fn action(
     ))
     }
 }
pub struct NoOutput;
impl<'de> Deserialize<'de> for NoOutput {
fn deserialize<D>(_: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
Ok(NoOutput)
}
}


@@ -1,354 +0,0 @@
use std::borrow::Cow;
use std::collections::{BTreeMap, BTreeSet};
use std::ffi::{OsStr, OsString};
use std::net::Ipv4Addr;
use std::path::PathBuf;
use std::time::Duration;
use bollard::container::RemoveContainerOptions;
use futures::future::Either as EitherFuture;
use nix::sys::signal;
use nix::unistd::Pid;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tracing::instrument;
use crate::context::RpcContext;
use crate::id::{Id, ImageId};
use crate::s9pk::manifest::{PackageId, SYSTEM_PACKAGE_ID};
use crate::util::serde::{Duration as SerdeDuration, IoFormat};
use crate::util::Version;
use crate::volume::{VolumeId, Volumes};
use crate::{Error, ResultExt, HOST_IP};
pub const NET_TLD: &str = "embassy";
lazy_static::lazy_static! {
pub static ref SYSTEM_IMAGES: BTreeSet<ImageId> = {
let mut set = BTreeSet::new();
set.insert("compat".parse().unwrap());
set.insert("utils".parse().unwrap());
set
};
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct DockerAction {
pub image: ImageId,
#[serde(default)]
pub system: bool,
pub entrypoint: String,
#[serde(default)]
pub args: Vec<String>,
#[serde(default)]
pub mounts: BTreeMap<VolumeId, PathBuf>,
#[serde(default)]
pub io_format: Option<IoFormat>,
#[serde(default)]
pub inject: bool,
#[serde(default)]
pub shm_size_mb: Option<usize>, // TODO: use postfix sizing? like 1k vs 1m vs 1g
#[serde(default)]
pub sigterm_timeout: Option<SerdeDuration>,
}
impl DockerAction {
pub fn validate(
&self,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
expected_io: bool,
) -> Result<(), color_eyre::eyre::Report> {
for (volume, _) in &self.mounts {
if !volumes.contains_key(volume) && !matches!(&volume, &VolumeId::Backup) {
color_eyre::eyre::bail!("unknown volume: {}", volume);
}
}
if self.system {
if !SYSTEM_IMAGES.contains(&self.image) {
color_eyre::eyre::bail!("unknown system image: {}", self.image);
}
} else {
if !image_ids.contains(&self.image) {
color_eyre::eyre::bail!("image for {} not contained in package", self.image);
}
}
if expected_io && self.io_format.is_none() {
color_eyre::eyre::bail!("expected io-format");
}
Ok(())
}
#[instrument(skip(ctx, input))]
pub async fn execute<I: Serialize, O: for<'de> Deserialize<'de>>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
name: Option<&str>,
volumes: &Volumes,
input: Option<I>,
allow_inject: bool,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error> {
let mut cmd = tokio::process::Command::new("docker");
if self.inject && allow_inject {
cmd.arg("exec");
} else {
let container_name = Self::container_name(pkg_id, name);
cmd.arg("run")
.arg("--rm")
.arg("--network=start9")
.arg(format!("--add-host=embassy:{}", Ipv4Addr::from(HOST_IP)))
.arg("--name")
.arg(&container_name)
.arg(format!("--hostname={}", &container_name))
.arg("--no-healthcheck");
match ctx
.docker
.remove_container(
&container_name,
Some(RemoveContainerOptions {
v: false,
force: true,
link: false,
}),
)
.await
{
Ok(()) | Err(bollard::errors::Error::DockerResponseNotFoundError { .. }) => Ok(()),
Err(e) => Err(e),
}?;
}
cmd.args(
self.docker_args(ctx, pkg_id, pkg_version, volumes, allow_inject)
.await,
);
let input_buf = if let (Some(input), Some(format)) = (&input, &self.io_format) {
cmd.stdin(std::process::Stdio::piped());
Some(format.to_vec(input)?)
} else {
None
};
cmd.stdout(std::process::Stdio::piped());
cmd.stderr(std::process::Stdio::piped());
tracing::trace!(
"{}",
format!("{:?}", cmd)
.split(r#"" ""#)
.collect::<Vec<&str>>()
.join(" ")
);
let mut handle = cmd.spawn().with_kind(crate::ErrorKind::Docker)?;
let id = handle.id();
let timeout_fut = if let Some(timeout) = timeout {
EitherFuture::Right(async move {
tokio::time::sleep(timeout).await;
Ok(())
})
} else {
EitherFuture::Left(futures::future::pending::<Result<_, Error>>())
};
if let (Some(input), Some(mut stdin)) = (&input_buf, handle.stdin.take()) {
use tokio::io::AsyncWriteExt;
stdin
.write_all(input)
.await
.with_kind(crate::ErrorKind::Docker)?;
stdin.flush().await?;
stdin.shutdown().await?;
drop(stdin);
}
enum Race<T> {
Done(T),
TimedOut,
}
let res = tokio::select! {
res = handle.wait_with_output() => Race::Done(res.with_kind(crate::ErrorKind::Docker)?),
res = timeout_fut => {
res?;
Race::TimedOut
},
};
let res = match res {
Race::Done(x) => x,
Race::TimedOut => {
if let Some(id) = id {
signal::kill(Pid::from_raw(id as i32), signal::SIGKILL)
.with_kind(crate::ErrorKind::Docker)?;
}
return Ok(Err((143, "Timed out. Retrying soon...".to_owned())));
}
};
Ok(if res.status.success() || res.status.code() == Some(143) {
Ok(if let Some(format) = self.io_format {
match format.from_slice(&res.stdout) {
Ok(a) => a,
Err(e) => {
tracing::warn!(
"Failed to deserialize stdout from {}: {}, falling back to UTF-8 string.",
format,
e
);
serde_json::from_value(String::from_utf8(res.stdout)?.into())
.with_kind(crate::ErrorKind::Deserialization)?
}
}
} else if res.stdout.is_empty() {
serde_json::from_value(Value::Null).with_kind(crate::ErrorKind::Deserialization)?
} else {
serde_json::from_value(String::from_utf8(res.stdout)?.into())
.with_kind(crate::ErrorKind::Deserialization)?
})
} else {
Err((
res.status.code().unwrap_or_default(),
String::from_utf8(res.stderr)?,
))
})
}
#[instrument(skip(ctx, input))]
pub async fn sandboxed<I: Serialize, O: for<'de> Deserialize<'de>>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
input: Option<I>,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error> {
let mut cmd = tokio::process::Command::new("docker");
cmd.arg("run").arg("--rm").arg("--network=none");
cmd.args(
self.docker_args(ctx, pkg_id, pkg_version, &volumes.to_readonly(), false)
.await,
);
let input_buf = if let (Some(input), Some(format)) = (&input, &self.io_format) {
cmd.stdin(std::process::Stdio::piped());
Some(format.to_vec(input)?)
} else {
None
};
cmd.stdout(std::process::Stdio::piped());
cmd.stderr(std::process::Stdio::piped());
let mut handle = cmd.spawn().with_kind(crate::ErrorKind::Docker)?;
if let (Some(input), Some(stdin)) = (&input_buf, &mut handle.stdin) {
use tokio::io::AsyncWriteExt;
stdin
.write_all(input)
.await
.with_kind(crate::ErrorKind::Docker)?;
}
let res = handle
.wait_with_output()
.await
.with_kind(crate::ErrorKind::Docker)?;
Ok(if res.status.success() || res.status.code() == Some(143) {
Ok(if let Some(format) = &self.io_format {
match format.from_slice(&res.stdout) {
Ok(a) => a,
Err(e) => {
tracing::warn!(
"Failed to deserialize stdout from {}: {}, falling back to UTF-8 string.",
format,
e
);
serde_json::from_value(String::from_utf8(res.stdout)?.into())
.with_kind(crate::ErrorKind::Deserialization)?
}
}
} else if res.stdout.is_empty() {
serde_json::from_value(Value::Null).with_kind(crate::ErrorKind::Deserialization)?
} else {
serde_json::from_value(String::from_utf8(res.stdout)?.into())
.with_kind(crate::ErrorKind::Deserialization)?
})
} else {
Err((
res.status.code().unwrap_or_default(),
String::from_utf8(res.stderr)?,
))
})
}
pub fn container_name(pkg_id: &PackageId, name: Option<&str>) -> String {
if let Some(name) = name {
format!("{}_{}.{}", pkg_id, name, NET_TLD)
} else {
format!("{}.{}", pkg_id, NET_TLD)
}
}
pub fn uncontainer_name(name: &str) -> Option<(PackageId<&str>, Option<&str>)> {
let (pre_tld, _) = name.split_once(".")?;
if pre_tld.contains('_') {
let (pkg, name) = name.split_once("_")?;
Some((Id::try_from(pkg).ok()?.into(), Some(name)))
} else {
Some((Id::try_from(pre_tld).ok()?.into(), None))
}
}
async fn docker_args(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
allow_inject: bool,
) -> Vec<Cow<'_, OsStr>> {
let mut res = Vec::with_capacity(
(2 * self.mounts.len()) // --mount <MOUNT_ARG>
+ (2 * self.shm_size_mb.is_some() as usize) // --shm-size <SHM_SIZE>
+ 5 // --interactive --log-driver=journald --entrypoint <ENTRYPOINT> <IMAGE>
+ self.args.len(), // [ARG...]
);
for (volume_id, dst) in &self.mounts {
let volume = if let Some(v) = volumes.get(volume_id) {
v
} else {
continue;
};
let src = volume.path_for(ctx, pkg_id, pkg_version, volume_id);
if let Err(e) = tokio::fs::metadata(&src).await {
tracing::warn!("{} not mounted to container: {}", src.display(), e);
continue;
}
res.push(OsStr::new("--mount").into());
res.push(
OsString::from(format!(
"type=bind,src={},dst={}{}",
src.display(),
dst.display(),
if volume.readonly() { ",readonly" } else { "" }
))
.into(),
);
}
if let Some(shm_size_mb) = self.shm_size_mb {
res.push(OsStr::new("--shm-size").into());
res.push(OsString::from(format!("{}m", shm_size_mb)).into());
}
res.push(OsStr::new("--interactive").into());
if self.inject && allow_inject {
res.push(OsString::from(Self::container_name(pkg_id, None)).into());
res.push(OsStr::new(&self.entrypoint).into());
} else {
res.push(OsStr::new("--log-driver=journald").into());
res.push(OsStr::new("--entrypoint").into());
res.push(OsStr::new(&self.entrypoint).into());
if self.system {
res.push(OsString::from(self.image.for_package(SYSTEM_PACKAGE_ID, None)).into());
} else {
res.push(OsString::from(self.image.for_package(pkg_id, Some(pkg_version))).into());
}
}
res.extend(self.args.iter().map(|s| OsStr::new(s).into()));
res
}
}
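The removed `DockerAction::docker_args` above assembled `docker run` arguments by hand: bind mounts as `type=bind,src=…,dst=…[,readonly]`, the `<pkg>[_<name>].embassy` container name from `container_name`, `--interactive`, `--log-driver=journald`, and `--entrypoint`. A shell sketch of the resulting invocation shape (the package name and paths here are hypothetical illustrations, not values from the repo):

```shell
#!/bin/sh
# Shape of the `docker run` command the removed DockerAction::docker_args
# built. PKG, SRC, DST, and the image tag are hypothetical examples.
PKG=hello-world
SRC=/embassy-data/package-data/volumes/$PKG/data/main
DST=/root/data
MOUNT="type=bind,src=$SRC,dst=$DST"   # ",readonly" is appended for RO volumes
NAME="$PKG.embassy"                   # container_name(): <pkg>[_<name>].<NET_TLD>
echo docker run --rm --network=start9 \
    --name "$NAME" --hostname "$NAME" \
    --mount "$MOUNT" --interactive --log-driver=journald \
    --entrypoint docker_entrypoint.sh "start9/$PKG/main:0.1.0"
```

The `echo` only prints the command for inspection; the real code spawned `docker` directly via `tokio::process::Command` and piped the action input over stdin.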

File diff suppressed because it is too large

backend/src/assets/nouns.txt Normal file

File diff suppressed because it is too large


@@ -4,12 +4,13 @@ use std::marker::PhantomData;
 use chrono::{DateTime, Utc};
 use clap::ArgMatches;
 use color_eyre::eyre::eyre;
+use patch_db::{DbHandle, LockReceipt};
 use rpc_toolkit::command;
 use rpc_toolkit::command_helpers::prelude::{RequestParts, ResponseParts};
 use rpc_toolkit::yajrc::RpcError;
 use serde::{Deserialize, Serialize};
 use serde_json::Value;
-use sqlx::{Executor, Sqlite};
+use sqlx::{Executor, Postgres};
 use tracing::instrument;
 use crate::context::{CliContext, RpcContext};
@@ -18,15 +19,19 @@ use crate::util::display_none;
 use crate::util::serde::{display_serializable, IoFormat};
 use crate::{ensure_code, Error, ResultExt};
-#[command(subcommands(login, logout, session))]
+#[command(subcommands(login, logout, session, reset_password))]
 pub fn auth() -> Result<(), Error> {
     Ok(())
 }
-pub fn parse_metadata(_: &str, _: &ArgMatches<'_>) -> Result<Value, Error> {
-    Ok(serde_json::json!({
-        "platforms": ["cli"],
-    }))
-}
+pub fn cli_metadata() -> Value {
+    serde_json::json!({
+        "platforms": ["cli"],
+    })
+}
+
+pub fn parse_metadata(_: &str, _: &ArgMatches) -> Result<Value, Error> {
+    Ok(cli_metadata())
+}
 #[test]
@@ -51,7 +56,7 @@ async fn cli_login(
     let password = if let Some(password) = password {
         password
     } else {
-        rpassword::prompt_password_stdout("Password: ")?
+        rpassword::prompt_password("Password: ")?
     };
     rpc_toolkit::command_helpers::call_remote(
@@ -82,7 +87,7 @@ pub fn check_password(hash: &str, password: &str) -> Result<(), Error> {
 pub async fn check_password_against_db<Ex>(secrets: &mut Ex, password: &str) -> Result<(), Error>
 where
-    for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>,
+    for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
 {
     let pw_hash = sqlx::query!("SELECT password FROM account")
         .fetch_one(secrets)
@@ -105,7 +110,7 @@ pub async fn login(
     #[arg] password: Option<String>,
     #[arg(
         parse(parse_metadata),
-        default = "",
+        default = "cli_metadata",
         help = "RPC Only: This value cannot be overidden from the cli"
     )]
     metadata: Value,
@@ -119,7 +124,7 @@ pub async fn login(
     let metadata = serde_json::to_string(&metadata).with_kind(crate::ErrorKind::Database)?;
     let hash_token_hashed = hash_token.hashed();
     sqlx::query!(
-        "INSERT INTO session (id, user_agent, metadata) VALUES (?, ?, ?)",
+        "INSERT INTO session (id, user_agent, metadata) VALUES ($1, $2, $3)",
         hash_token_hashed,
         user_agent,
         metadata,
@@ -168,7 +173,7 @@ pub async fn session() -> Result<(), Error> {
     Ok(())
 }
-fn display_sessions(arg: SessionList, matches: &ArgMatches<'_>) {
+fn display_sessions(arg: SessionList, matches: &ArgMatches) {
     use prettytable::*;
     if matches.is_present("format") {
@@ -198,7 +203,7 @@ fn display_sessions(arg: SessionList, matches: &ArgMatches<'_>) {
} }
table.add_row(row); table.add_row(row);
} }
table.print_tty(false); table.print_tty(false).unwrap();
} }
#[command(display(display_sessions))] #[command(display(display_sessions))]
@@ -234,7 +239,7 @@ pub async fn list(
}) })
} }
fn parse_comma_separated(arg: &str, _: &ArgMatches<'_>) -> Result<Vec<String>, RpcError> { fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<String>, RpcError> {
Ok(arg.split(",").map(|s| s.trim().to_owned()).collect()) Ok(arg.split(",").map(|s| s.trim().to_owned()).collect())
} }
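The `parse_comma_separated` helper above splits an argument on commas and trims each piece before collecting. A minimal stdlib sketch of that behavior (standalone, without the `ArgMatches`/`RpcError` plumbing):

```rust
// Split a comma-separated CLI argument into trimmed, owned strings,
// mirroring the parse_comma_separated helper in the diff above.
fn parse_comma_separated(arg: &str) -> Vec<String> {
    arg.split(',').map(|s| s.trim().to_owned()).collect()
}

fn main() {
    let ids = parse_comma_separated("abc123, def456 ,ghi789");
    assert_eq!(ids, vec!["abc123", "def456", "ghi789"]);
}
```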
@@ -256,3 +261,113 @@ pub async fn kill(
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId), &ctx).await?; HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId), &ctx).await?;
Ok(()) Ok(())
} }
#[instrument(skip(ctx, old_password, new_password))]
async fn cli_reset_password(
ctx: CliContext,
old_password: Option<String>,
new_password: Option<String>,
) -> Result<(), RpcError> {
let old_password = if let Some(old_password) = old_password {
old_password
} else {
rpassword::prompt_password("Current Password: ")?
};
let new_password = if let Some(new_password) = new_password {
new_password
} else {
let new_password = rpassword::prompt_password("New Password: ")?;
if new_password != rpassword::prompt_password("Confirm: ")? {
return Err(Error::new(
eyre!("Passwords do not match"),
crate::ErrorKind::IncorrectPassword,
)
.into());
}
new_password
};
rpc_toolkit::command_helpers::call_remote(
ctx,
"auth.reset-password",
serde_json::json!({ "old-password": old_password, "new-password": new_password }),
PhantomData::<()>,
)
.await?
.result?;
Ok(())
}
pub struct SetPasswordReceipt(LockReceipt<String, ()>);
impl SetPasswordReceipt {
pub async fn new<Db: DbHandle>(db: &mut Db) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
let password_hash = crate::db::DatabaseModel::new()
.server_info()
.password_hash()
.make_locker(patch_db::LockType::Write)
.add_to_keys(locks);
move |skeleton_key| Ok(Self(password_hash.verify(skeleton_key)?))
}
}
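`SetPasswordReceipt` follows the receipt pattern used throughout this changeset: `setup` registers the lock targets it needs into a shared list and returns a closure that, once all locks are acquired, verifies the grant and hands back a typed receipt. The control flow can be sketched with plain types (`LockTarget`, `Verifier`, and `Receipt` here are hypothetical stand-ins for the `patch_db` machinery):

```rust
// Hypothetical stand-ins for patch_db's lock types.
struct LockTarget(String);
struct Verifier {
    granted: Vec<String>,
}
struct Receipt {
    key: String,
}

// Phase 1: declare which locks this receipt needs; phase 2 (the returned
// closure) runs after lock_all and checks the grant before issuing a receipt.
fn setup(locks: &mut Vec<LockTarget>) -> impl FnOnce(&Verifier) -> Result<Receipt, String> {
    let key = "server-info/password-hash".to_string();
    locks.push(LockTarget(key.clone()));
    move |verifier| {
        if verifier.granted.contains(&key) {
            Ok(Receipt { key })
        } else {
            Err(format!("lock not granted: {}", key))
        }
    }
}

fn main() {
    let mut locks = Vec::new();
    let finish = setup(&mut locks);
    // Simulate lock_all granting everything that was registered.
    let verifier = Verifier {
        granted: locks.iter().map(|l| l.0.clone()).collect(),
    };
    let receipt = finish(&verifier).unwrap();
    assert_eq!(receipt.key, "server-info/password-hash");
}
```

Splitting registration from verification lets many receipts batch their lock targets into one `lock_all` call instead of acquiring locks piecemeal.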
pub async fn set_password<Db: DbHandle, Ex>(
db: &mut Db,
receipt: &SetPasswordReceipt,
secrets: &mut Ex,
password: &str,
) -> Result<(), Error>
where
for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{
let password = argon2::hash_encoded(
password.as_bytes(),
&rand::random::<[u8; 16]>()[..],
&argon2::Config::default(),
)
.with_kind(crate::ErrorKind::PasswordHashGeneration)?;
sqlx::query!("UPDATE account SET password = $1", password,)
.execute(secrets)
.await?;
receipt.0.set(db, password).await?;
Ok(())
}
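`set_password` hashes with argon2 and a fresh 16-byte random salt, so storing the same password twice yields different encoded hashes. A toy stdlib illustration of why the salt matters; `DefaultHasher` stands in for argon2 here purely for demonstration and is NOT suitable for real passwords:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for argon2::hash_encoded: mixes a salt into the digest.
// Real code must use a memory-hard KDF (argon2, as in the diff above).
fn toy_hash(password: &str, salt: u64) -> u64 {
    let mut h = DefaultHasher::new();
    salt.hash(&mut h);
    password.hash(&mut h);
    h.finish()
}

fn main() {
    // Same password, different salts -> different stored hashes,
    // which defeats precomputed (rainbow-table) attacks.
    assert_ne!(toy_hash("hunter2", 1), toy_hash("hunter2", 2));
    // Verification re-hashes with the salt stored alongside the hash.
    assert_eq!(toy_hash("hunter2", 1), toy_hash("hunter2", 1));
}
```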
#[command(
rename = "reset-password",
custom_cli(cli_reset_password(async, context(CliContext))),
display(display_none)
)]
#[instrument(skip(ctx, old_password, new_password))]
pub async fn reset_password(
#[context] ctx: RpcContext,
#[arg(rename = "old-password")] old_password: Option<String>,
#[arg(rename = "new-password")] new_password: Option<String>,
) -> Result<(), Error> {
let old_password = old_password.unwrap_or_default();
let new_password = new_password.unwrap_or_default();
let mut secrets = ctx.secret_store.acquire().await?;
check_password_against_db(&mut secrets, &old_password).await?;
let mut db = ctx.db.handle();
let set_password_receipt = SetPasswordReceipt::new(&mut db).await?;
set_password(&mut db, &set_password_receipt, &mut secrets, &new_password).await?;
Ok(())
}



@@ -1,11 +1,13 @@
use std::collections::BTreeMap; use std::collections::{BTreeMap, BTreeSet};
use std::sync::Arc; use std::path::PathBuf;
use chrono::Utc; use chrono::Utc;
use clap::ArgMatches;
use color_eyre::eyre::eyre; use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use openssl::pkey::{PKey, Private}; use openssl::pkey::{PKey, Private};
use openssl::x509::X509; use openssl::x509::X509;
use patch_db::{DbHandle, LockType, PatchDbHandle, Revision}; use patch_db::{DbHandle, LockType, PatchDbHandle};
use rpc_toolkit::command; use rpc_toolkit::command;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_json::Value; use serde_json::Value;
@@ -18,17 +20,17 @@ use super::PackageBackupReport;
use crate::auth::check_password_against_db; use crate::auth::check_password_against_db;
use crate::backup::{BackupReport, ServerBackupReport}; use crate::backup::{BackupReport, ServerBackupReport};
use crate::context::RpcContext; use crate::context::RpcContext;
use crate::db::util::WithRevision; use crate::db::model::BackupProgress;
use crate::disk::mount::backup::BackupMountGuard; use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite; use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::TmpMountGuard; use crate::disk::mount::guard::TmpMountGuard;
use crate::notifications::NotificationLevel; use crate::notifications::NotificationLevel;
use crate::s9pk::manifest::PackageId; use crate::s9pk::manifest::PackageId;
use crate::status::MainStatus; use crate::status::MainStatus;
use crate::util::display_none;
use crate::util::serde::IoFormat; use crate::util::serde::IoFormat;
use crate::util::{display_none, AtomicFile};
use crate::version::VersionT; use crate::version::VersionT;
use crate::Error; use crate::{Error, ErrorKind, ResultExt};
#[derive(Debug)] #[derive(Debug)]
pub struct OsBackup { pub struct OsBackup {
@@ -112,14 +114,26 @@ impl Serialize for OsBackup {
} }
} }
fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<BTreeSet<PackageId>, Error> {
arg.split(',')
.map(|s| s.trim().parse().map_err(Error::from))
.collect()
}
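The new `parse_comma_separated` for package ids collects an iterator of `Result`s directly into `Result<BTreeSet<_>, _>`, which short-circuits on the first parse failure and deduplicates the rest. The same pattern with a plain integer type:

```rust
use std::collections::BTreeSet;

// Collecting Result items into Result<BTreeSet<T>, E> stops at the first
// error, mirroring the package-ids parser in the diff above.
fn parse_ids(arg: &str) -> Result<BTreeSet<u32>, std::num::ParseIntError> {
    arg.split(',').map(|s| s.trim().parse()).collect()
}

fn main() {
    // Deduplicated and ordered by the BTreeSet.
    assert_eq!(parse_ids("3, 1 ,2, 1").unwrap(), BTreeSet::from([1, 2, 3]));
    // One bad element fails the whole parse.
    assert!(parse_ids("1,oops").is_err());
}
```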
#[command(rename = "create", display(display_none))] #[command(rename = "create", display(display_none))]
#[instrument(skip(ctx, old_password, password))] #[instrument(skip(ctx, old_password, password))]
pub async fn backup_all( pub async fn backup_all(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId, #[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg(rename = "old-password", long = "old-password")] old_password: Option<String>, #[arg(rename = "old-password", long = "old-password")] old_password: Option<String>,
#[arg(
rename = "package-ids",
long = "package-ids",
parse(parse_comma_separated)
)]
package_ids: Option<BTreeSet<PackageId>>,
#[arg] password: String, #[arg] password: String,
) -> Result<WithRevision<()>, Error> { ) -> Result<(), Error> {
let mut db = ctx.db.handle(); let mut db = ctx.db.handle();
check_password_against_db(&mut ctx.secret_store.acquire().await?, &password).await?; check_password_against_db(&mut ctx.secret_store.acquire().await?, &password).await?;
let fs = target_id let fs = target_id
@@ -130,17 +144,27 @@ pub async fn backup_all(
old_password.as_ref().unwrap_or(&password), old_password.as_ref().unwrap_or(&password),
) )
.await?; .await?;
let all_packages = crate::db::DatabaseModel::new()
.package_data()
.get(&mut db, false)
.await?
.0
.keys()
.into_iter()
.cloned()
.collect();
let package_ids = package_ids.unwrap_or(all_packages);
if old_password.is_some() { if old_password.is_some() {
backup_guard.change_password(&password)?; backup_guard.change_password(&password)?;
} }
let revision = assure_backing_up(&mut db).await?; assure_backing_up(&mut db, &package_ids).await?;
tokio::task::spawn(async move { tokio::task::spawn(async move {
let backup_res = perform_backup(&ctx, &mut db, backup_guard).await; let backup_res = perform_backup(&ctx, &mut db, backup_guard, &package_ids).await;
let status_model = crate::db::DatabaseModel::new() let backup_progress = crate::db::DatabaseModel::new()
.server_info() .server_info()
.status_info() .status_info()
.backing_up(); .backup_progress();
status_model backup_progress
.clone() .clone()
.lock(&mut db, LockType::Write) .lock(&mut db, LockType::Write)
.await .await
@@ -207,36 +231,51 @@ pub async fn backup_all(
.expect("failed to send notification"); .expect("failed to send notification");
} }
} }
status_model backup_progress
.put(&mut db, &false) .delete(&mut db)
.await .await
.expect("failed to change server status"); .expect("failed to change server status");
}); });
Ok(WithRevision { Ok(())
response: (),
revision,
})
} }
#[instrument(skip(db))] #[instrument(skip(db, packages))]
async fn assure_backing_up(db: &mut PatchDbHandle) -> Result<Option<Arc<Revision>>, Error> { async fn assure_backing_up(
db: &mut PatchDbHandle,
packages: impl IntoIterator<Item = &PackageId>,
) -> Result<(), Error> {
let mut tx = db.begin().await?; let mut tx = db.begin().await?;
let mut backing_up = crate::db::DatabaseModel::new() let mut backing_up = crate::db::DatabaseModel::new()
.server_info() .server_info()
.status_info() .status_info()
.backing_up() .backup_progress()
.get_mut(&mut tx) .get_mut(&mut tx)
.await?; .await?;
if *backing_up { if backing_up
.iter()
.flat_map(|x| x.values())
.fold(false, |acc, x| {
if !x.complete {
return true;
}
acc
})
{
return Err(Error::new( return Err(Error::new(
eyre!("Server is already backing up!"), eyre!("Server is already backing up!"),
crate::ErrorKind::InvalidRequest, crate::ErrorKind::InvalidRequest,
)); ));
} }
*backing_up = true; *backing_up = Some(
packages
.into_iter()
.map(|x| (x.clone(), BackupProgress { complete: false }))
.collect(),
);
backing_up.save(&mut tx).await?; backing_up.save(&mut tx).await?;
Ok(tx.commit(None).await?) tx.commit().await?;
Ok(())
} }
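The "already backing up" check above folds over the optional progress map looking for any incomplete entry. `Iterator::any` expresses the same test more directly (this `BackupProgress` mirrors the struct used in the diff):

```rust
use std::collections::BTreeMap;

struct BackupProgress {
    complete: bool,
}

// True when a backup is in flight: some package's entry is not yet complete.
// Equivalent to the flat_map + fold in assure_backing_up above.
fn is_backing_up(progress: &Option<BTreeMap<String, BackupProgress>>) -> bool {
    progress.iter().flat_map(|m| m.values()).any(|p| !p.complete)
}

fn main() {
    assert!(!is_backing_up(&None));
    let done = BTreeMap::from([("bitcoind".to_string(), BackupProgress { complete: true })]);
    assert!(!is_backing_up(&Some(done)));
    let running = BTreeMap::from([("lnd".to_string(), BackupProgress { complete: false })]);
    assert!(is_backing_up(&Some(running)));
}
```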
#[instrument(skip(ctx, db, backup_guard))] #[instrument(skip(ctx, db, backup_guard))]
@@ -244,6 +283,7 @@ async fn perform_backup<Db: DbHandle>(
ctx: &RpcContext, ctx: &RpcContext,
mut db: Db, mut db: Db,
mut backup_guard: BackupMountGuard<TmpMountGuard>, mut backup_guard: BackupMountGuard<TmpMountGuard>,
package_ids: &BTreeSet<PackageId>,
) -> Result<BTreeMap<PackageId, PackageBackupReport>, Error> { ) -> Result<BTreeMap<PackageId, PackageBackupReport>, Error> {
let mut backup_report = BTreeMap::new(); let mut backup_report = BTreeMap::new();
@@ -251,6 +291,8 @@ async fn perform_backup<Db: DbHandle>(
.package_data() .package_data()
.keys(&mut db, false) .keys(&mut db, false)
.await? .await?
.into_iter()
.filter(|id| package_ids.contains(id))
{ {
let mut tx = db.begin().await?; // for lock scope let mut tx = db.begin().await?; // for lock scope
let installed_model = if let Some(installed_model) = crate::db::DatabaseModel::new() let installed_model = if let Some(installed_model) = crate::db::DatabaseModel::new()
@@ -268,9 +310,11 @@ async fn perform_backup<Db: DbHandle>(
main_status_model.lock(&mut tx, LockType::Write).await?; main_status_model.lock(&mut tx, LockType::Write).await?;
let (started, health) = match main_status_model.get(&mut tx, true).await?.into_owned() { let (started, health) = match main_status_model.get(&mut tx, true).await?.into_owned() {
MainStatus::Starting => (Some(Utc::now()), Default::default()), MainStatus::Starting { .. } => (Some(Utc::now()), Default::default()),
MainStatus::Running { started, health } => (Some(started), health.clone()), MainStatus::Running { started, health } => (Some(started), health.clone()),
MainStatus::Stopped | MainStatus::Stopping => (None, Default::default()), MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
(None, Default::default())
}
MainStatus::BackingUp { .. } => { MainStatus::BackingUp { .. } => {
backup_report.insert( backup_report.insert(
package_id, package_id,
@@ -318,6 +362,7 @@ async fn perform_backup<Db: DbHandle>(
.backup .backup
.create( .create(
ctx, ctx,
&mut tx,
&package_id, &package_id,
&manifest.title, &manifest.title,
&manifest.version, &manifest.version,
@@ -341,7 +386,7 @@ async fn perform_backup<Db: DbHandle>(
backup_guard backup_guard
.metadata .metadata
.package_backups .package_backups
.insert(package_id, pkg_meta); .insert(package_id.clone(), pkg_meta);
} }
main_status_model main_status_model
@@ -353,6 +398,23 @@ async fn perform_backup<Db: DbHandle>(
}, },
) )
.await?; .await?;
let mut backup_progress = crate::db::DatabaseModel::new()
.server_info()
.status_info()
.backup_progress()
.get_mut(&mut tx)
.await?;
if backup_progress.is_none() {
*backup_progress = Some(Default::default());
}
if let Some(mut backup_progress) = backup_progress
.as_mut()
.and_then(|bp| bp.get_mut(&package_id))
{
(*backup_progress).complete = true;
}
backup_progress.save(&mut tx).await?;
tx.save().await?; tx.save().await?;
} }
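After each package is processed, the loop above marks its entry complete inside the optional progress map, initializing the map if it is absent. The `Option`/`get_mut` dance can be sketched as:

```rust
use std::collections::BTreeMap;

struct BackupProgress {
    complete: bool,
}

// Mark one package's backup complete, creating the map if needed,
// mirroring the backup_progress update inside the loop above.
fn mark_complete(progress: &mut Option<BTreeMap<String, BackupProgress>>, id: &str) {
    if progress.is_none() {
        *progress = Some(BTreeMap::new());
    }
    if let Some(entry) = progress.as_mut().and_then(|m| m.get_mut(id)) {
        entry.complete = true;
    }
}

fn main() {
    let mut progress = Some(BTreeMap::from([(
        "lnd".to_string(),
        BackupProgress { complete: false },
    )]));
    mark_complete(&mut progress, "lnd");
    assert!(progress.as_ref().unwrap()["lnd"].complete);
}
```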
@@ -361,7 +423,12 @@ async fn perform_backup<Db: DbHandle>(
.await?; .await?;
let (root_ca_key, root_ca_cert) = ctx.net_controller.ssl.export_root_ca().await?; let (root_ca_key, root_ca_cert) = ctx.net_controller.ssl.export_root_ca().await?;
let mut os_backup_file = AtomicFile::new(backup_guard.as_ref().join("os-backup.cbor")).await?; let mut os_backup_file = AtomicFile::new(
backup_guard.as_ref().join("os-backup.cbor"),
None::<PathBuf>,
)
.await
.with_kind(ErrorKind::Filesystem)?;
os_backup_file os_backup_file
.write_all( .write_all(
&IoFormat::Cbor.to_vec(&OsBackup { &IoFormat::Cbor.to_vec(&OsBackup {
@@ -376,7 +443,10 @@ async fn perform_backup<Db: DbHandle>(
})?, })?,
) )
.await?; .await?;
os_backup_file.save().await?; os_backup_file
.save()
.await
.with_kind(ErrorKind::Filesystem)?;
let timestamp = Some(Utc::now()); let timestamp = Some(Utc::now());
@@ -392,6 +462,5 @@ async fn perform_backup<Db: DbHandle>(
.last_backup() .last_backup()
.put(&mut db, &timestamp) .put(&mut db, &timestamp)
.await?; .await?;
Ok(backup_report) Ok(backup_report)
} }
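`AtomicFile` (now imported from the `helpers` crate, with errors mapped to `ErrorKind::Filesystem`) follows the classic write-then-rename pattern: bytes go to a temporary sibling, and a final rename publishes them, so readers never observe a half-written `os-backup.cbor`. A stdlib sketch of the idea:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write-to-temp, then rename: rename is atomic on POSIX filesystems, so the
// destination always holds either the old or the complete new content.
fn atomic_write(dest: &Path, bytes: &[u8]) -> std::io::Result<()> {
    let tmp = dest.with_extension("tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(bytes)?;
    f.sync_all()?; // flush to disk before the rename commits it
    fs::rename(&tmp, dest)
}

fn main() -> std::io::Result<()> {
    let dest = std::env::temp_dir().join("os-backup-demo.cbor");
    atomic_write(&dest, b"demo")?;
    assert_eq!(fs::read(&dest)?, b"demo");
    fs::remove_file(&dest)?;
    Ok(())
}
```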


@@ -1,29 +1,31 @@
use std::collections::{BTreeMap, BTreeSet}; use std::collections::{BTreeMap, BTreeSet};
use std::path::Path; use std::path::{Path, PathBuf};
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use color_eyre::eyre::eyre; use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use patch_db::{DbHandle, HasModel, LockType}; use patch_db::{DbHandle, HasModel, LockType};
use reqwest::Url;
use rpc_toolkit::command; use rpc_toolkit::command;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use sqlx::{Executor, Sqlite}; use sqlx::{Executor, Postgres};
use tokio::fs::File; use tokio::fs::File;
use tokio::io::AsyncWriteExt; use tokio::io::AsyncWriteExt;
use tracing::instrument; use tracing::instrument;
use self::target::PackageBackupInfo; use self::target::PackageBackupInfo;
use crate::action::{ActionImplementation, NoOutput};
use crate::context::RpcContext; use crate::context::RpcContext;
use crate::dependencies::reconfigure_dependents_with_live_pointers; use crate::dependencies::reconfigure_dependents_with_live_pointers;
use crate::id::ImageId; use crate::id::ImageId;
use crate::install::PKG_ARCHIVE_DIR; use crate::install::PKG_ARCHIVE_DIR;
use crate::net::interface::{InterfaceId, Interfaces}; use crate::net::interface::{InterfaceId, Interfaces};
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId; use crate::s9pk::manifest::PackageId;
use crate::util::serde::IoFormat; use crate::util::serde::IoFormat;
use crate::util::{AtomicFile, Version}; use crate::util::Version;
use crate::version::{Current, VersionT}; use crate::version::{Current, VersionT};
use crate::volume::{backup_dir, Volume, VolumeId, Volumes, BACKUP_DIR}; use crate::volume::{backup_dir, Volume, VolumeId, Volumes, BACKUP_DIR};
use crate::{Error, ResultExt}; use crate::{Error, ErrorKind, ResultExt};
pub mod backup_bulk; pub mod backup_bulk;
pub mod restore; pub mod restore;
@@ -60,28 +62,35 @@ pub fn package_backup() -> Result<(), Error> {
struct BackupMetadata { struct BackupMetadata {
pub timestamp: DateTime<Utc>, pub timestamp: DateTime<Utc>,
pub tor_keys: BTreeMap<InterfaceId, String>, pub tor_keys: BTreeMap<InterfaceId, String>,
pub marketplace_url: Option<Url>,
} }
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)] #[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
pub struct BackupActions { pub struct BackupActions {
pub create: ActionImplementation, pub create: PackageProcedure,
pub restore: ActionImplementation, pub restore: PackageProcedure,
} }
impl BackupActions { impl BackupActions {
pub fn validate(&self, volumes: &Volumes, image_ids: &BTreeSet<ImageId>) -> Result<(), Error> { pub fn validate(
&self,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.create self.create
.validate(volumes, image_ids, false) .validate(eos_version, volumes, image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Create"))?; .with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Create"))?;
self.restore self.restore
.validate(volumes, image_ids, false) .validate(eos_version, volumes, image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Restore"))?; .with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Restore"))?;
Ok(()) Ok(())
} }
#[instrument(skip(ctx))] #[instrument(skip(ctx, db))]
pub async fn create( pub async fn create<Db: DbHandle>(
&self, &self,
ctx: &RpcContext, ctx: &RpcContext,
db: &mut Db,
pkg_id: &PackageId, pkg_id: &PackageId,
pkg_title: &str, pkg_title: &str,
pkg_version: &Version, pkg_version: &Version,
@@ -99,7 +108,7 @@ impl BackupActions {
ctx, ctx,
pkg_id, pkg_id,
pkg_version, pkg_version,
Some("CreateBackup"), ProcedureName::CreateBackup,
&volumes, &volumes,
None, None,
false, false,
@@ -119,6 +128,18 @@ impl BackupActions {
) )
}) })
.collect(); .collect();
let marketplace_url = crate::db::DatabaseModel::new()
.package_data()
.idx_model(pkg_id)
.expect(db)
.await?
.installed()
.expect(db)
.await?
.marketplace_url()
.get(db, true)
.await?
.into_owned();
let tmp_path = Path::new(BACKUP_DIR) let tmp_path = Path::new(BACKUP_DIR)
.join(pkg_id) .join(pkg_id)
.join(format!("{}.s9pk", pkg_id)); .join(format!("{}.s9pk", pkg_id));
@@ -129,7 +150,9 @@ impl BackupActions {
.join(pkg_version.as_str()) .join(pkg_version.as_str())
.join(format!("{}.s9pk", pkg_id)); .join(format!("{}.s9pk", pkg_id));
let mut infile = File::open(&s9pk_path).await?; let mut infile = File::open(&s9pk_path).await?;
let mut outfile = AtomicFile::new(&tmp_path).await?; let mut outfile = AtomicFile::new(&tmp_path, None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
tokio::io::copy(&mut infile, &mut *outfile) tokio::io::copy(&mut infile, &mut *outfile)
.await .await
.with_ctx(|_| { .with_ctx(|_| {
@@ -138,17 +161,20 @@ impl BackupActions {
format!("cp {} -> {}", s9pk_path.display(), tmp_path.display()), format!("cp {} -> {}", s9pk_path.display(), tmp_path.display()),
) )
})?; })?;
outfile.save().await?; outfile.save().await.with_kind(ErrorKind::Filesystem)?;
let timestamp = Utc::now(); let timestamp = Utc::now();
let metadata_path = Path::new(BACKUP_DIR).join(pkg_id).join("metadata.cbor"); let metadata_path = Path::new(BACKUP_DIR).join(pkg_id).join("metadata.cbor");
let mut outfile = AtomicFile::new(&metadata_path).await?; let mut outfile = AtomicFile::new(&metadata_path, None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
outfile outfile
.write_all(&IoFormat::Cbor.to_vec(&BackupMetadata { .write_all(&IoFormat::Cbor.to_vec(&BackupMetadata {
timestamp, timestamp,
tor_keys, tor_keys,
marketplace_url,
})?) })?)
.await?; .await?;
outfile.save().await?; outfile.save().await.with_kind(ErrorKind::Filesystem)?;
Ok(PackageBackupInfo { Ok(PackageBackupInfo {
os_version: Current::new().semver().into(), os_version: Current::new().semver().into(),
title: pkg_title.to_owned(), title: pkg_title.to_owned(),
@@ -169,7 +195,7 @@ impl BackupActions {
volumes: &Volumes, volumes: &Volumes,
) -> Result<(), Error> ) -> Result<(), Error>
where where
for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>, for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{ {
let mut volumes = volumes.clone(); let mut volumes = volumes.clone();
volumes.insert(VolumeId::Backup, Volume::Backup { readonly: true }); volumes.insert(VolumeId::Backup, Volume::Backup { readonly: true });
@@ -178,7 +204,7 @@ impl BackupActions {
ctx, ctx,
pkg_id, pkg_id,
pkg_version, pkg_version,
Some("RestoreBackup"), ProcedureName::RestoreBackup,
&volumes, &volumes,
None, None,
false, false,
@@ -205,7 +231,7 @@ impl BackupActions {
) )
})?; })?;
sqlx::query!( sqlx::query!(
"REPLACE INTO tor (package, interface, key) VALUES (?, ?, ?)", "INSERT INTO tor (package, interface, key) VALUES ($1, $2, $3) ON CONFLICT (package, interface) DO UPDATE SET key = $3",
**pkg_id, **pkg_id,
*iface, *iface,
key_vec, key_vec,
@@ -217,17 +243,21 @@ impl BackupActions {
.package_data() .package_data()
.lock(db, LockType::Write) .lock(db, LockType::Write)
.await?; .await?;
crate::db::DatabaseModel::new() let pde = crate::db::DatabaseModel::new()
.package_data() .package_data()
.idx_model(pkg_id) .idx_model(pkg_id)
.expect(db) .expect(db)
.await? .await?
.installed() .installed()
.expect(db) .expect(db)
.await? .await?;
pde.clone()
.interface_addresses() .interface_addresses()
.put(db, &interfaces.install(&mut *secrets, pkg_id).await?) .put(db, &interfaces.install(&mut *secrets, pkg_id).await?)
.await?; .await?;
pde.marketplace_url()
.put(db, &metadata.marketplace_url)
.await?;
let entry = crate::db::DatabaseModel::new() let entry = crate::db::DatabaseModel::new()
.package_data() .package_data()
@@ -240,7 +270,8 @@ impl BackupActions {
.get(db, true) .get(db, true)
.await?; .await?;
reconfigure_dependents_with_live_pointers(ctx, db, &entry).await?; let receipts = crate::config::ConfigReceipts::new(db).await?;
reconfigure_dependents_with_live_pointers(ctx, db, &receipts, &entry).await?;
Ok(()) Ok(())
} }
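The tor-key write above moves from SQLite's `REPLACE INTO` to Postgres upsert syntax. The general shape, assuming the `tor` table carries a unique constraint on `(package, interface)`:

```sql
-- Insert a row, or update the key when (package, interface) already exists.
-- Requires a UNIQUE or PRIMARY KEY constraint on (package, interface).
INSERT INTO tor (package, interface, key)
VALUES ($1, $2, $3)
ON CONFLICT (package, interface) DO UPDATE SET key = $3;
```

Note the semantic difference: SQLite's `REPLACE` deletes the conflicting row and inserts a new one (firing delete triggers and resetting unspecified columns), while `ON CONFLICT ... DO UPDATE` updates the existing row in place.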


@@ -9,7 +9,7 @@ use color_eyre::eyre::eyre;
use futures::future::BoxFuture; use futures::future::BoxFuture;
use futures::FutureExt; use futures::FutureExt;
use openssl::x509::X509; use openssl::x509::X509;
use patch_db::{DbHandle, PatchDbHandle, Revision}; use patch_db::{DbHandle, PatchDbHandle};
use rpc_toolkit::command; use rpc_toolkit::command;
use tokio::fs::File; use tokio::fs::File;
use tokio::task::JoinHandle; use tokio::task::JoinHandle;
@@ -20,13 +20,13 @@ use super::target::BackupTargetId;
use crate::backup::backup_bulk::OsBackup; use crate::backup::backup_bulk::OsBackup;
use crate::context::{RpcContext, SetupContext}; use crate::context::{RpcContext, SetupContext};
use crate::db::model::{PackageDataEntry, StaticFiles}; use crate::db::model::{PackageDataEntry, StaticFiles};
use crate::db::util::WithRevision;
use crate::disk::mount::backup::{BackupMountGuard, PackageBackupMountGuard}; use crate::disk::mount::backup::{BackupMountGuard, PackageBackupMountGuard};
use crate::disk::mount::filesystem::ReadOnly; use crate::disk::mount::filesystem::ReadOnly;
use crate::disk::mount::guard::TmpMountGuard; use crate::disk::mount::guard::TmpMountGuard;
use crate::install::progress::InstallProgress; use crate::install::progress::InstallProgress;
use crate::install::{download_install_s9pk, PKG_PUBLIC_DIR}; use crate::install::{download_install_s9pk, PKG_PUBLIC_DIR};
use crate::net::ssl::SslManager; use crate::net::ssl::SslManager;
use crate::notifications::NotificationLevel;
use crate::s9pk::manifest::{Manifest, PackageId}; use crate::s9pk::manifest::{Manifest, PackageId};
use crate::s9pk::reader::S9pkReader; use crate::s9pk::reader::S9pkReader;
use crate::setup::RecoveryStatus; use crate::setup::RecoveryStatus;
@@ -34,53 +34,85 @@ use crate::util::display_none;
use crate::util::io::dir_size; use crate::util::io::dir_size;
use crate::util::serde::IoFormat; use crate::util::serde::IoFormat;
use crate::volume::{backup_dir, BACKUP_DIR, PKG_VOLUME_DIR}; use crate::volume::{backup_dir, BACKUP_DIR, PKG_VOLUME_DIR};
use crate::{auth::check_password_against_db, notifications::NotificationLevel};
use crate::{Error, ResultExt}; use crate::{Error, ResultExt};
fn parse_comma_separated(arg: &str, _: &ArgMatches<'_>) -> Result<Vec<PackageId>, Error> { fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<PackageId>, Error> {
arg.split(',') arg.split(',')
.map(|s| s.trim().parse().map_err(Error::from)) .map(|s| s.trim().parse().map_err(Error::from))
.collect() .collect()
} }
#[command(rename = "restore", display(display_none))] #[command(rename = "restore", display(display_none))]
#[instrument(skip(ctx, old_password, password))] #[instrument(skip(ctx, password))]
pub async fn restore_packages_rpc( pub async fn restore_packages_rpc(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<PackageId>, #[arg(parse(parse_comma_separated))] ids: Vec<PackageId>,
#[arg(rename = "target-id")] target_id: BackupTargetId, #[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg(rename = "old-password", long = "old-password")] old_password: Option<String>,
#[arg] password: String, #[arg] password: String,
) -> Result<WithRevision<()>, Error> { ) -> Result<(), Error> {
let mut db = ctx.db.handle(); let mut db = ctx.db.handle();
check_password_against_db(&mut ctx.secret_store.acquire().await?, &password).await?;
let fs = target_id let fs = target_id
.load(&mut ctx.secret_store.acquire().await?) .load(&mut ctx.secret_store.acquire().await?)
.await?; .await?;
let mut backup_guard = BackupMountGuard::mount( let backup_guard =
TmpMountGuard::mount(&fs, ReadOnly).await?, BackupMountGuard::mount(TmpMountGuard::mount(&fs, ReadOnly).await?, &password).await?;
old_password.as_ref().unwrap_or(&password),
)
.await?;
if old_password.is_some() {
backup_guard.change_password(&password)?;
}
let (revision, backup_guard, tasks, _) = let (backup_guard, tasks, _) = restore_packages(&ctx, &mut db, backup_guard, ids).await?;
restore_packages(&ctx, &mut db, backup_guard, ids).await?;
tokio::spawn(async { tokio::spawn(async move {
futures::future::join_all(tasks).await; let res = futures::future::join_all(tasks).await;
for res in res {
match res.with_kind(crate::ErrorKind::Unknown) {
Ok((Ok(_), _)) => (),
Ok((Err(err), package_id)) => {
if let Err(err) = ctx
.notification_manager
.notify(
&mut db,
Some(package_id.clone()),
NotificationLevel::Error,
"Restoration Failure".to_string(),
format!("Error restoring package {}: {}", package_id, err),
(),
None,
)
.await
{
tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err);
};
tracing::error!("Error restoring package {}: {}", package_id, err);
tracing::debug!("{:?}", err);
}
Err(e) => {
if let Err(err) = ctx
.notification_manager
.notify(
&mut db,
None,
NotificationLevel::Error,
"Restoration Failure".to_string(),
format!("Error during restoration: {}", e),
(),
None,
)
.await
{
tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err);
}
tracing::error!("Error restoring packages: {}", e);
tracing::debug!("{:?}", e);
}
}
}
if let Err(e) = backup_guard.unmount().await { if let Err(e) = backup_guard.unmount().await {
tracing::error!("Error unmounting backup drive: {}", e); tracing::error!("Error unmounting backup drive: {}", e);
tracing::debug!("{:?}", e); tracing::debug!("{:?}", e);
} }
}); });
Ok(WithRevision { Ok(())
response: (),
revision,
})
} }
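`restore_packages_rpc` now awaits every spawned restore task and reports failures per package instead of firing and forgetting. The shape of that join-and-report loop, sketched with std threads standing in for tokio tasks (package names here are illustrative):

```rust
use std::thread;

// Each task yields (Result, package_id), mirroring the tokio tasks above;
// the join loop distinguishes per-package failures from panicked tasks.
fn main() {
    let tasks: Vec<_> = ["bitcoind", "lnd"]
        .into_iter()
        .map(|pkg| {
            thread::spawn(move || {
                let res: Result<(), String> = if pkg == "lnd" {
                    Err("volume missing".to_string())
                } else {
                    Ok(())
                };
                (res, pkg.to_string())
            })
        })
        .collect();

    let mut failures = Vec::new();
    for handle in tasks {
        match handle.join() {
            Ok((Ok(()), _)) => (),
            Ok((Err(err), pkg)) => {
                failures.push(format!("Error restoring package {}: {}", pkg, err))
            }
            Err(_) => failures.push("restore task panicked".to_string()),
        }
    }
    assert_eq!(failures, vec!["Error restoring package lnd: volume missing"]);
}
```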
async fn approximate_progress( async fn approximate_progress(
@@ -184,7 +216,7 @@ pub async fn recover_full_embassy(
let key_vec = os_backup.tor_key.as_bytes().to_vec(); let key_vec = os_backup.tor_key.as_bytes().to_vec();
let secret_store = ctx.secret_store().await?; let secret_store = ctx.secret_store().await?;
sqlx::query!( sqlx::query!(
"REPLACE INTO account (id, password, tor_key) VALUES (?, ?, ?)", "INSERT INTO account (id, password, tor_key) VALUES ($1, $2, $3) ON CONFLICT (id) DO UPDATE SET password = $2, tor_key = $3",
0, 0,
password, password,
key_vec, key_vec,
@@ -213,7 +245,7 @@ pub async fn recover_full_embassy(
.keys() .keys()
.cloned() .cloned()
.collect(); .collect();
let (_, backup_guard, tasks, progress_info) = restore_packages( let (backup_guard, tasks, progress_info) = restore_packages(
&rpc_ctx, &rpc_ctx,
&mut db, &mut db,
backup_guard, backup_guard,
@@ -230,7 +262,7 @@ pub async fn recover_full_embassy(
if let Err(err) = rpc_ctx.notification_manager.notify( if let Err(err) = rpc_ctx.notification_manager.notify(
&mut db, &mut db,
Some(package_id.clone()), Some(package_id.clone()),
NotificationLevel::Error, NotificationLevel::Error,
"Restoration Failure".to_string(), format!("Error restoring package {}: {}", package_id,err), (), None).await{ "Restoration Failure".to_string(), format!("Error restoring package {}: {}", package_id,err), (), None).await{
tracing::error!("Failed to notify: {}", err); tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err); tracing::debug!("{:?}", err);
@@ -242,8 +274,8 @@ pub async fn recover_full_embassy(
if let Err(err) = rpc_ctx.notification_manager.notify( if let Err(err) = rpc_ctx.notification_manager.notify(
&mut db, &mut db,
None, None,
NotificationLevel::Error, NotificationLevel::Error,
"Restoration Failure".to_string(), format!("Error restoring ?: {}", e), (), None).await { "Restoration Failure".to_string(), format!("Error during restoration: {}", e), (), None).await {
tracing::error!("Failed to notify: {}", err); tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err); tracing::debug!("{:?}", err);
@@ -271,14 +303,13 @@ async fn restore_packages(
ids: Vec<PackageId>, ids: Vec<PackageId>,
) -> Result< ) -> Result<
( (
Option<Arc<Revision>>,
BackupMountGuard<TmpMountGuard>, BackupMountGuard<TmpMountGuard>,
Vec<JoinHandle<(Result<(), Error>, PackageId)>>, Vec<JoinHandle<(Result<(), Error>, PackageId)>>,
ProgressInfo, ProgressInfo,
), ),
Error, Error,
> { > {
let (revision, guards) = assure_restoring(ctx, db, ids, &backup_guard).await?; let guards = assure_restoring(ctx, db, ids, &backup_guard).await?;
let mut progress_info = ProgressInfo::default(); let mut progress_info = ProgressInfo::default();
@@ -306,7 +337,7 @@ async fn restore_packages(
)); ));
} }
Ok((revision, backup_guard, tasks, progress_info)) Ok((backup_guard, tasks, progress_info))
} }
#[instrument(skip(ctx, db, backup_guard))] #[instrument(skip(ctx, db, backup_guard))]
@@ -315,13 +346,7 @@ async fn assure_restoring(
db: &mut PatchDbHandle, db: &mut PatchDbHandle,
ids: Vec<PackageId>, ids: Vec<PackageId>,
backup_guard: &BackupMountGuard<TmpMountGuard>, backup_guard: &BackupMountGuard<TmpMountGuard>,
) -> Result< ) -> Result<Vec<(Manifest, PackageBackupMountGuard)>, Error> {
(
Option<Arc<Revision>>,
Vec<(Manifest, PackageBackupMountGuard)>,
),
Error,
> {
let mut tx = db.begin().await?; let mut tx = db.begin().await?;
let mut guards = Vec::with_capacity(ids.len()); let mut guards = Vec::with_capacity(ids.len());
@@ -381,7 +406,8 @@ async fn assure_restoring(
guards.push((manifest, guard)); guards.push((manifest, guard));
} }
Ok((tx.commit(None).await?, guards)) tx.commit().await?;
Ok(guards)
} }
#[instrument(skip(ctx, guard))] #[instrument(skip(ctx, guard))]


@@ -4,7 +4,7 @@ use color_eyre::eyre::eyre;
use futures::TryStreamExt; use futures::TryStreamExt;
use rpc_toolkit::command; use rpc_toolkit::command;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use sqlx::{Executor, Sqlite}; use sqlx::{Executor, Postgres};
use super::{BackupTarget, BackupTargetId}; use super::{BackupTarget, BackupTargetId};
use crate::context::RpcContext; use crate::context::RpcContext;
@@ -49,8 +49,8 @@ pub async fn add(
let embassy_os = recovery_info(&guard).await?; let embassy_os = recovery_info(&guard).await?;
guard.unmount().await?; guard.unmount().await?;
let path_string = Path::new("/").join(&cifs.path).display().to_string(); let path_string = Path::new("/").join(&cifs.path).display().to_string();
let id: u32 = sqlx::query!( let id: i32 = sqlx::query!(
"INSERT INTO cifs_shares (hostname, path, username, password) VALUES (?, ?, ?, ?) RETURNING id AS \"id: u32\"", "INSERT INTO cifs_shares (hostname, path, username, password) VALUES ($1, $2, $3, $4) RETURNING id",
cifs.hostname, cifs.hostname,
path_string, path_string,
cifs.username, cifs.username,
@@ -98,7 +98,7 @@ pub async fn update(
guard.unmount().await?; guard.unmount().await?;
let path_string = Path::new("/").join(&cifs.path).display().to_string(); let path_string = Path::new("/").join(&cifs.path).display().to_string();
if sqlx::query!( if sqlx::query!(
"UPDATE cifs_shares SET hostname = ?, path = ?, username = ?, password = ? WHERE id = ?", "UPDATE cifs_shares SET hostname = $1, path = $2, username = $3, password = $4 WHERE id = $5",
cifs.hostname, cifs.hostname,
path_string, path_string,
cifs.username, cifs.username,
@@ -137,7 +137,7 @@ pub async fn remove(#[context] ctx: RpcContext, #[arg] id: BackupTargetId) -> Re
crate::ErrorKind::NotFound, crate::ErrorKind::NotFound,
)); ));
}; };
if sqlx::query!("DELETE FROM cifs_shares WHERE id = ?", id) if sqlx::query!("DELETE FROM cifs_shares WHERE id = $1", id)
.execute(&ctx.secret_store) .execute(&ctx.secret_store)
.await? .await?
.rows_affected() .rows_affected()
@@ -151,12 +151,12 @@ pub async fn remove(#[context] ctx: RpcContext, #[arg] id: BackupTargetId) -> Re
Ok(()) Ok(())
} }
pub async fn load<Ex>(secrets: &mut Ex, id: u32) -> Result<Cifs, Error> pub async fn load<Ex>(secrets: &mut Ex, id: i32) -> Result<Cifs, Error>
where where
for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>, for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{ {
let record = sqlx::query!( let record = sqlx::query!(
"SELECT hostname, path, username, password FROM cifs_shares WHERE id = ?", "SELECT hostname, path, username, password FROM cifs_shares WHERE id = $1",
id id
) )
.fetch_one(secrets) .fetch_one(secrets)
@@ -170,14 +170,13 @@ where
}) })
} }
pub async fn list<Ex>(secrets: &mut Ex) -> Result<Vec<(u32, CifsBackupTarget)>, Error> pub async fn list<Ex>(secrets: &mut Ex) -> Result<Vec<(i32, CifsBackupTarget)>, Error>
where where
for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>, for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{ {
let mut records = sqlx::query!( let mut records =
"SELECT id AS \"id: u32\", hostname, path, username, password FROM cifs_shares" sqlx::query!("SELECT id, hostname, path, username, password FROM cifs_shares")
) .fetch_many(secrets);
.fetch_many(secrets);
let mut cifs = Vec::new(); let mut cifs = Vec::new();
while let Some(query_result) = records.try_next().await? { while let Some(query_result) = records.try_next().await? {
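Aside from swapping `Sqlite` for `Postgres` in the executor bounds, the mechanical part of this migration is renumbering the bind placeholders: SQLite accepts positional `?`, while Postgres requires `$1`, `$2`, …. A minimal sketch of that rewrite (the helper name is illustrative, not part of this codebase, and it deliberately ignores `?` inside string literals):

```rust
/// Rewrite SQLite-style `?` placeholders into Postgres-style `$1`, `$2`, ...
/// Illustrative helper only; it does not handle `?` inside quoted strings.
fn renumber_placeholders(sql: &str) -> String {
    let mut out = String::with_capacity(sql.len());
    let mut n = 0u32;
    for ch in sql.chars() {
        if ch == '?' {
            n += 1;
            out.push('$');
            out.push_str(&n.to_string());
        } else {
            out.push(ch);
        }
    }
    out
}

fn main() {
    let sql = "UPDATE cifs_shares SET hostname = ?, path = ? WHERE id = ?";
    assert_eq!(
        renumber_placeholders(sql),
        "UPDATE cifs_shares SET hostname = $1, path = $2 WHERE id = $3"
    );
}
```

Note that `sqlx::query!` checks these strings at compile time against the active database, so each query had to be edited by hand rather than rewritten at runtime; the sketch only shows the textual transformation involved.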

View File

@@ -6,11 +6,11 @@ use chrono::{DateTime, Utc};
use clap::ArgMatches; use clap::ArgMatches;
use color_eyre::eyre::eyre; use color_eyre::eyre::eyre;
use digest::generic_array::GenericArray; use digest::generic_array::GenericArray;
use digest::Digest; use digest::OutputSizeUser;
use rpc_toolkit::command; use rpc_toolkit::command;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use sha2::Sha256; use sha2::Sha256;
use sqlx::{Executor, Sqlite}; use sqlx::{Executor, Postgres};
use tracing::instrument; use tracing::instrument;
use self::cifs::CifsBackupTarget; use self::cifs::CifsBackupTarget;
@@ -45,12 +45,12 @@ pub enum BackupTarget {
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)] #[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum BackupTargetId { pub enum BackupTargetId {
Disk { logicalname: PathBuf }, Disk { logicalname: PathBuf },
Cifs { id: u32 }, Cifs { id: i32 },
} }
impl BackupTargetId { impl BackupTargetId {
pub async fn load<Ex>(self, secrets: &mut Ex) -> Result<BackupTargetFS, Error> pub async fn load<Ex>(self, secrets: &mut Ex) -> Result<BackupTargetFS, Error>
where where
for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>, for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{ {
Ok(match self { Ok(match self {
BackupTargetId::Disk { logicalname } => { BackupTargetId::Disk { logicalname } => {
@@ -119,7 +119,9 @@ impl FileSystem for BackupTargetFS {
BackupTargetFS::Cifs(a) => a.mount(mountpoint, mount_type).await, BackupTargetFS::Cifs(a) => a.mount(mountpoint, mount_type).await,
} }
} }
async fn source_hash(&self) -> Result<GenericArray<u8, <Sha256 as Digest>::OutputSize>, Error> { async fn source_hash(
&self,
) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
match self { match self {
BackupTargetFS::Disk(a) => a.source_hash().await, BackupTargetFS::Disk(a) => a.source_hash().await,
BackupTargetFS::Cifs(a) => a.source_hash().await, BackupTargetFS::Cifs(a) => a.source_hash().await,
@@ -132,7 +134,6 @@ pub fn target() -> Result<(), Error> {
Ok(()) Ok(())
} }
// TODO: incorporate reconnect into this response as well
#[command(display(display_serializable))] #[command(display(display_serializable))]
pub async fn list( pub async fn list(
#[context] ctx: RpcContext, #[context] ctx: RpcContext,
@@ -141,7 +142,6 @@ pub async fn list(
let (disks_res, cifs) = let (disks_res, cifs) =
tokio::try_join!(crate::disk::util::list(), cifs::list(&mut sql_handle),)?; tokio::try_join!(crate::disk::util::list(), cifs::list(&mut sql_handle),)?;
Ok(disks_res Ok(disks_res
.disks
.into_iter() .into_iter()
.flat_map(|mut disk| { .flat_map(|mut disk| {
std::mem::take(&mut disk.partitions) std::mem::take(&mut disk.partitions)
@@ -184,7 +184,7 @@ pub struct PackageBackupInfo {
pub timestamp: DateTime<Utc>, pub timestamp: DateTime<Utc>,
} }
fn display_backup_info(info: BackupInfo, matches: &ArgMatches<'_>) { fn display_backup_info(info: BackupInfo, matches: &ArgMatches) {
use prettytable::*; use prettytable::*;
if matches.is_present("format") { if matches.is_present("format") {
@@ -217,7 +217,7 @@ fn display_backup_info(info: BackupInfo, matches: &ArgMatches<'_>) {
]; ];
table.add_row(row); table.add_row(row);
} }
table.print_tty(false); table.print_tty(false).unwrap();
} }
#[command(display(display_backup_info))] #[command(display(display_backup_info))]

View File

@@ -0,0 +1,163 @@
use avahi_sys::{
self, avahi_client_errno, avahi_entry_group_add_service, avahi_entry_group_commit,
avahi_strerror, AvahiClient,
};
fn log_str_error(action: &str, e: i32) {
unsafe {
let e_str = avahi_strerror(e);
eprintln!(
"Could not {}: {:?}",
action,
std::ffi::CStr::from_ptr(e_str)
);
}
}
fn main() {
let aliases: Vec<_> = std::env::args().skip(1).collect();
unsafe {
let simple_poll = avahi_sys::avahi_simple_poll_new();
let poll = avahi_sys::avahi_simple_poll_get(simple_poll);
let mut box_err = Box::pin(0 as i32);
let err_c: *mut i32 = box_err.as_mut().get_mut();
let avahi_client = avahi_sys::avahi_client_new(
poll,
avahi_sys::AvahiClientFlags::AVAHI_CLIENT_NO_FAIL,
Some(client_callback),
std::ptr::null_mut(),
err_c,
);
if avahi_client == std::ptr::null_mut::<AvahiClient>() {
log_str_error("create Avahi client", *box_err);
panic!("Failed to create Avahi Client");
}
let group = avahi_sys::avahi_entry_group_new(
avahi_client,
Some(entry_group_callback),
std::ptr::null_mut(),
);
if group == std::ptr::null_mut() {
log_str_error("create Avahi entry group", avahi_client_errno(avahi_client));
panic!("Failed to create Avahi Entry Group");
}
let mut hostname_buf = vec![0];
let hostname_raw = avahi_sys::avahi_client_get_host_name_fqdn(avahi_client);
hostname_buf.extend_from_slice(std::ffi::CStr::from_ptr(hostname_raw).to_bytes_with_nul());
let buflen = hostname_buf.len();
debug_assert!(hostname_buf.ends_with(b".local\0"));
debug_assert!(!hostname_buf[..(buflen - 7)].contains(&b'.'));
// assume fixed length prefix on hostname due to local address
hostname_buf[0] = (buflen - 8) as u8; // set the prefix length to len - 8 (leading byte, .local, nul) for the main address
hostname_buf[buflen - 7] = 5; // set the prefix length to 5 for "local"
let mut res;
let http_tcp_cstr =
std::ffi::CString::new("_http._tcp").expect("Could not cast _http._tcp to c string");
res = avahi_entry_group_add_service(
group,
avahi_sys::AVAHI_IF_UNSPEC,
avahi_sys::AVAHI_PROTO_UNSPEC,
avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_USE_MULTICAST,
hostname_raw,
http_tcp_cstr.as_ptr(),
std::ptr::null(),
std::ptr::null(),
443,
            // The C function additionally takes a trailing variable-length argument list, not
            // reflected in its type signature, specifying the TXT records to add to this service
            // entry. It stops reading arguments from the stack only when it dereferences a null
            // pointer, so omitting this final null terminator causes segfaults or other undefined
            // behavior.
std::ptr::null::<libc::c_char>(),
);
if res < avahi_sys::AVAHI_OK {
log_str_error("add service to Avahi entry group", res);
panic!("Failed to load Avahi services");
}
eprintln!("Published {:?}", std::ffi::CStr::from_ptr(hostname_raw));
for alias in aliases {
let lan_address = alias + ".local";
let lan_address_ptr = std::ffi::CString::new(lan_address)
.expect("Could not cast lan address to c string");
res = avahi_sys::avahi_entry_group_add_record(
group,
avahi_sys::AVAHI_IF_UNSPEC,
avahi_sys::AVAHI_PROTO_UNSPEC,
avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_USE_MULTICAST
| avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_ALLOW_MULTIPLE,
lan_address_ptr.as_ptr(),
avahi_sys::AVAHI_DNS_CLASS_IN as u16,
avahi_sys::AVAHI_DNS_TYPE_CNAME as u16,
avahi_sys::AVAHI_DEFAULT_TTL,
hostname_buf.as_ptr().cast(),
hostname_buf.len(),
);
if res < avahi_sys::AVAHI_OK {
log_str_error("add CNAME record to Avahi entry group", res);
panic!("Failed to load Avahi services");
}
eprintln!("Published {:?}", lan_address_ptr);
}
let commit_err = avahi_entry_group_commit(group);
if commit_err < avahi_sys::AVAHI_OK {
            log_str_error("commit Avahi entry group", commit_err);
            panic!("Failed to load Avahi services: commit");
}
}
std::thread::park()
}
unsafe extern "C" fn entry_group_callback(
_group: *mut avahi_sys::AvahiEntryGroup,
state: avahi_sys::AvahiEntryGroupState,
_userdata: *mut core::ffi::c_void,
) {
match state {
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_FAILURE => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_FAILURE");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_COLLISION => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_COLLISION");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_UNCOMMITED => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_UNCOMMITED");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_ESTABLISHED => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_ESTABLISHED");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_REGISTERING => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_REGISTERING");
}
other => {
eprintln!("AvahiCallback: EntryGroupState = {}", other);
}
}
}
unsafe extern "C" fn client_callback(
_group: *mut avahi_sys::AvahiClient,
state: avahi_sys::AvahiClientState,
_userdata: *mut core::ffi::c_void,
) {
match state {
avahi_sys::AvahiClientState_AVAHI_CLIENT_FAILURE => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_FAILURE");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_RUNNING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_RUNNING");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_CONNECTING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_CONNECTING");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_COLLISION => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_COLLISION");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_REGISTERING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_REGISTERING");
}
other => {
eprintln!("AvahiCallback: ClientState = {}", other);
}
}
}
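The `hostname_buf` surgery above is hand-rolling the DNS wire format for the CNAME target: each label is prefixed by its length byte, the dots disappear, and the name ends with a zero byte (the root label). A general-purpose sketch of the same encoding, as opposed to the fixed-offset patching the alias code uses:

```rust
/// Encode a dotted DNS name ("myhost.local") into wire format:
/// each label prefixed by its length, terminated by a zero byte.
/// Illustrative general version of the fixed-offset buffer patching
/// done in the avahi alias code above.
fn encode_dns_name(name: &str) -> Vec<u8> {
    let mut out = Vec::with_capacity(name.len() + 2);
    for label in name.split('.') {
        assert!(label.len() < 64, "DNS labels are limited to 63 bytes");
        out.push(label.len() as u8);
        out.extend_from_slice(label.as_bytes());
    }
    out.push(0); // root label terminates the name
    out
}

fn main() {
    let encoded = encode_dns_name("myhost.local");
    // Wire form: [6]myhost[5]local[0]
    assert_eq!(encoded[0], 6);
    assert_eq!(&encoded[1..7], b"myhost");
    assert_eq!(encoded[7], 5);
    assert_eq!(&encoded[8..13], b"local");
    assert_eq!(encoded[13], 0);
}
```

This is also why the original code asserts that the host part contains no `.`: the fixed-offset version can only patch in two length bytes, so it assumes exactly two labels.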

View File

@@ -7,20 +7,24 @@ use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError; use rpc_toolkit::yajrc::RpcError;
use serde_json::Value; use serde_json::Value;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> { fn inner_main() -> Result<(), Error> {
run_cli!({ run_cli!({
command: embassy::main_api, command: embassy::main_api,
app: app => app app: app => app
.name("Embassy CLI") .name("Embassy CLI")
.version(Current::new().semver().to_string().as_str()) .version(&**VERSION_STRING)
.arg( .arg(
clap::Arg::with_name("config") clap::Arg::with_name("config")
.short("c") .short('c')
.long("config") .long("config")
.takes_value(true), .takes_value(true),
) )
.arg(Arg::with_name("host").long("host").short("h").takes_value(true)) .arg(Arg::with_name("host").long("host").short('h').takes_value(true))
.arg(Arg::with_name("proxy").long("proxy").short("p").takes_value(true)), .arg(Arg::with_name("proxy").long("proxy").short('p').takes_value(true)),
context: matches => { context: matches => {
EmbassyLogger::init(); EmbassyLogger::init();
CliContext::init(matches)? CliContext::init(matches)?
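The `lazy_static` introduced here works around a lifetime issue: `.version()` borrows a `&str` that must outlive the `App`, but `Current::new().semver().to_string().as_str()` borrows from a temporary `String` dropped at the end of the expression. A static gives the string a `'static` lifetime. On recent Rust the standard library's `OnceLock` achieves the same without a macro (sketch; `compute_version` stands in for the real `Current::new().semver()` machinery):

```rust
use std::sync::OnceLock;

// Stand-in for Current::new().semver().to_string(); the real value
// comes from the embassy version machinery.
fn compute_version() -> String {
    "0.3.0".to_string()
}

/// Returns a &'static str that can safely be handed to clap's `.version()`.
fn version_string() -> &'static str {
    static VERSION: OnceLock<String> = OnceLock::new();
    VERSION.get_or_init(compute_version)
}

fn main() {
    let v = version_string();
    assert_eq!(v, "0.3.0");
    // Subsequent calls return the same cached allocation.
    assert_eq!(v.as_ptr(), version_string().as_ptr());
}
```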

View File

@@ -7,17 +7,16 @@ use embassy::context::{DiagnosticContext, SetupContext};
use embassy::disk::fsck::RepairStrategy; use embassy::disk::fsck::RepairStrategy;
use embassy::disk::main::DEFAULT_PASSWORD; use embassy::disk::main::DEFAULT_PASSWORD;
use embassy::disk::REPAIR_DISK_PATH; use embassy::disk::REPAIR_DISK_PATH;
use embassy::hostname::get_product_key; use embassy::init::STANDBY_MODE_PATH;
use embassy::middleware::cors::cors; use embassy::middleware::cors::cors;
use embassy::middleware::diagnostic::diagnostic; use embassy::middleware::diagnostic::diagnostic;
use embassy::middleware::encrypt::encrypt;
#[cfg(feature = "avahi")] #[cfg(feature = "avahi")]
use embassy::net::mdns::MdnsController; use embassy::net::mdns::MdnsController;
use embassy::shutdown::Shutdown; use embassy::shutdown::Shutdown;
use embassy::sound::CHIME; use embassy::sound::CHIME;
use embassy::util::logger::EmbassyLogger; use embassy::util::logger::EmbassyLogger;
use embassy::util::Invoke; use embassy::util::Invoke;
use embassy::{Error, ResultExt}; use embassy::{Error, ErrorKind, ResultExt};
use http::StatusCode; use http::StatusCode;
use rpc_toolkit::rpc_server; use rpc_toolkit::rpc_server;
use tokio::process::Command; use tokio::process::Command;
@@ -49,12 +48,6 @@ async fn setup_or_init(cfg_path: Option<&str>) -> Result<(), Error> {
.invoke(embassy::ErrorKind::Nginx) .invoke(embassy::ErrorKind::Nginx)
.await?; .await?;
let ctx = SetupContext::init(cfg_path).await?; let ctx = SetupContext::init(cfg_path).await?;
let keysource_ctx = ctx.clone();
let keysource = move || {
let ctx = keysource_ctx.clone();
async move { ctx.product_key().await }
};
let encrypt = encrypt(keysource);
tokio::time::sleep(Duration::from_secs(1)).await; // let the record state that I hate this tokio::time::sleep(Duration::from_secs(1)).await; // let the record state that I hate this
CHIME.play().await?; CHIME.play().await?;
rpc_server!({ rpc_server!({
@@ -63,7 +56,6 @@ async fn setup_or_init(cfg_path: Option<&str>) -> Result<(), Error> {
status: status_fn, status: status_fn,
middleware: [ middleware: [
cors, cors,
encrypt,
] ]
}) })
.with_graceful_shutdown({ .with_graceful_shutdown({
@@ -79,7 +71,7 @@ async fn setup_or_init(cfg_path: Option<&str>) -> Result<(), Error> {
let guid_string = tokio::fs::read_to_string("/embassy-os/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy let guid_string = tokio::fs::read_to_string("/embassy-os/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
.await?; .await?;
let guid = guid_string.trim(); let guid = guid_string.trim();
let reboot = embassy::disk::main::import( let requires_reboot = embassy::disk::main::import(
guid, guid,
cfg.datadir(), cfg.datadir(),
if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() { if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
@@ -95,14 +87,14 @@ async fn setup_or_init(cfg_path: Option<&str>) -> Result<(), Error> {
.await .await
.with_ctx(|_| (embassy::ErrorKind::Filesystem, REPAIR_DISK_PATH))?; .with_ctx(|_| (embassy::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
} }
if reboot.0 { if requires_reboot.0 {
embassy::disk::main::export(guid, cfg.datadir()).await?; embassy::disk::main::export(guid, cfg.datadir()).await?;
Command::new("reboot") Command::new("reboot")
.invoke(embassy::ErrorKind::Unknown) .invoke(embassy::ErrorKind::Unknown)
.await?; .await?;
} }
tracing::info!("Loaded Disk"); tracing::info!("Loaded Disk");
embassy::init::init(&cfg, &get_product_key().await?).await?; embassy::init::init(&cfg).await?;
} }
Ok(()) Ok(())
@@ -128,6 +120,13 @@ async fn run_script_if_exists<P: AsRef<Path>>(path: P) {
#[instrument] #[instrument]
async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> { async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
if tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {
tokio::fs::remove_file(STANDBY_MODE_PATH).await?;
Command::new("sync").invoke(ErrorKind::Filesystem).await?;
embassy::sound::SHUTDOWN.play().await?;
futures::future::pending::<()>().await;
}
embassy::sound::BEP.play().await?; embassy::sound::BEP.play().await?;
run_script_if_exists("/embassy-os/preinit.sh").await; run_script_if_exists("/embassy-os/preinit.sh").await;
@@ -210,7 +209,7 @@ fn main() {
let matches = clap::App::new("embassyd") let matches = clap::App::new("embassyd")
.arg( .arg(
clap::Arg::with_name("config") clap::Arg::with_name("config")
.short("c") .short('c')
.long("config") .long("config")
.takes_value(true), .takes_value(true),
) )

View File

@@ -6,21 +6,25 @@ use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError; use rpc_toolkit::yajrc::RpcError;
use serde_json::Value; use serde_json::Value;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> { fn inner_main() -> Result<(), Error> {
run_cli!({ run_cli!({
command: embassy::portable_api, command: embassy::portable_api,
app: app => app app: app => app
.name("Embassy SDK") .name("Embassy SDK")
.version(Current::new().semver().to_string().as_str()) .version(&**VERSION_STRING)
.arg( .arg(
clap::Arg::with_name("config") clap::Arg::with_name("config")
.short("c") .short('c')
.long("config") .long("config")
.takes_value(true), .takes_value(true),
), ),
context: matches => { context: matches => {
if let Err(_) = std::env::var("RUST_LOG") { if let Err(_) = std::env::var("RUST_LOG") {
std::env::set_var("RUST_LOG", "embassy=warn"); std::env::set_var("RUST_LOG", "embassy=warn,js_engine=warn");
} }
EmbassyLogger::init(); EmbassyLogger::init();
SdkContext::init(matches)? SdkContext::init(matches)?

View File

@@ -7,6 +7,7 @@ use embassy::core::rpc_continuations::RequestGuid;
use embassy::db::subscribe; use embassy::db::subscribe;
use embassy::middleware::auth::auth; use embassy::middleware::auth::auth;
use embassy::middleware::cors::cors; use embassy::middleware::cors::cors;
use embassy::middleware::db::db as db_middleware;
use embassy::middleware::diagnostic::diagnostic; use embassy::middleware::diagnostic::diagnostic;
#[cfg(feature = "avahi")] #[cfg(feature = "avahi")]
use embassy::net::mdns::MdnsController; use embassy::net::mdns::MdnsController;
@@ -40,7 +41,6 @@ fn err_to_500(e: Error) -> Response<Body> {
#[instrument] #[instrument]
async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> { async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
let (rpc_ctx, shutdown) = { let (rpc_ctx, shutdown) = {
embassy::hostname::sync_hostname().await?;
let rpc_ctx = RpcContext::init( let rpc_ctx = RpcContext::init(
cfg_path, cfg_path,
Arc::new( Arc::new(
@@ -81,8 +81,14 @@ async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
.expect("send shutdown signal"); .expect("send shutdown signal");
}); });
rpc_ctx.set_nginx_conf(&mut rpc_ctx.db.handle()).await?; let mut db = rpc_ctx.db.handle();
embassy::hostname::sync_hostname(&mut db).await?;
let receipts = embassy::context::rpc::RpcSetNginxReceipts::new(&mut db).await?;
rpc_ctx.set_nginx_conf(&mut db, receipts).await?;
drop(db);
let auth = auth(rpc_ctx.clone()); let auth = auth(rpc_ctx.clone());
let db_middleware = db_middleware(rpc_ctx.clone());
let ctx = rpc_ctx.clone(); let ctx = rpc_ctx.clone();
let server = rpc_server!({ let server = rpc_server!({
command: embassy::main_api, command: embassy::main_api,
@@ -91,6 +97,7 @@ async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
middleware: [ middleware: [
cors, cors,
auth, auth,
db_middleware,
] ]
}) })
.with_graceful_shutdown({ .with_graceful_shutdown({
@@ -108,29 +115,6 @@ async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
.await .await
}); });
let rev_cache_ctx = rpc_ctx.clone();
let revision_cache_task = tokio::spawn(async move {
let mut sub = rev_cache_ctx.db.subscribe();
let mut shutdown = rev_cache_ctx.shutdown.subscribe();
loop {
let rev = match tokio::select! {
a = sub.recv() => a,
_ = shutdown.recv() => break,
} {
Ok(a) => a,
Err(_) => {
rev_cache_ctx.revision_cache.write().await.truncate(0);
continue;
}
}; // TODO: handle falling behind
let mut cache = rev_cache_ctx.revision_cache.write().await;
cache.push_back(rev);
if cache.len() > rev_cache_ctx.revision_cache_size {
cache.pop_front();
}
}
});
let ws_ctx = rpc_ctx.clone(); let ws_ctx = rpc_ctx.clone();
let ws_server = { let ws_server = {
let builder = Server::bind(&ws_ctx.bind_ws); let builder = Server::bind(&ws_ctx.bind_ws);
@@ -147,6 +131,33 @@ async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
"/ws/db" => { "/ws/db" => {
Ok(subscribe(ctx, req).await.unwrap_or_else(err_to_500)) Ok(subscribe(ctx, req).await.unwrap_or_else(err_to_500))
} }
path if path.starts_with("/ws/rpc/") => {
match RequestGuid::from(
path.strip_prefix("/ws/rpc/").unwrap(),
) {
None => {
tracing::debug!("No Guid Path");
Response::builder()
.status(StatusCode::BAD_REQUEST)
.body(Body::empty())
}
Some(guid) => {
match ctx.get_ws_continuation_handler(&guid).await {
Some(cont) => match cont(req).await {
Ok(r) => Ok(r),
Err(e) => Response::builder()
.status(
StatusCode::INTERNAL_SERVER_ERROR,
)
.body(Body::from(format!("{}", e))),
},
_ => Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::empty()),
}
}
}
}
path if path.starts_with("/rest/rpc/") => { path if path.starts_with("/rest/rpc/") => {
match RequestGuid::from( match RequestGuid::from(
path.strip_prefix("/rest/rpc/").unwrap(), path.strip_prefix("/rest/rpc/").unwrap(),
@@ -158,16 +169,12 @@ async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
.body(Body::empty()) .body(Body::empty())
} }
Some(guid) => { Some(guid) => {
match ctx match ctx.get_rest_continuation_handler(&guid).await
.rpc_stream_continuations
.lock()
.await
.remove(&guid)
{ {
None => Response::builder() None => Response::builder()
.status(StatusCode::NOT_FOUND) .status(StatusCode::NOT_FOUND)
.body(Body::empty()), .body(Body::empty()),
Some(cont) => match (cont.handler)(req).await { Some(cont) => match cont(req).await {
Ok(r) => Ok(r), Ok(r) => Ok(r),
Err(e) => Response::builder() Err(e) => Response::builder()
.status( .status(
@@ -241,12 +248,6 @@ async fn inner_main(cfg_path: Option<&str>) -> Result<Option<Shutdown>, Error> {
ErrorKind::Unknown ErrorKind::Unknown
)) ))
.map_ok(|_| tracing::debug!("Metrics daemon Shutdown")), .map_ok(|_| tracing::debug!("Metrics daemon Shutdown")),
revision_cache_task
.map_err(|e| Error::new(
eyre!("{}", e).wrap_err("Revision Cache daemon panicked!"),
ErrorKind::Unknown
))
.map_ok(|_| tracing::debug!("Revision Cache daemon Shutdown")),
ws_server ws_server
.map_err(|e| Error::new(e, ErrorKind::Network)) .map_err(|e| Error::new(e, ErrorKind::Network))
.map_ok(|_| tracing::debug!("WebSocket Server Shutdown")), .map_ok(|_| tracing::debug!("WebSocket Server Shutdown")),
@@ -283,7 +284,7 @@ fn main() {
let matches = clap::App::new("embassyd") let matches = clap::App::new("embassyd")
.arg( .arg(
clap::Arg::with_name("config") clap::Arg::with_name("config")
.short("c") .short('c')
.long("config") .long("config")
.takes_value(true), .takes_value(true),
) )
@@ -339,6 +340,7 @@ fn main() {
e, e,
) )
.await?; .await?;
let mut shutdown = ctx.shutdown.subscribe();
rpc_server!({ rpc_server!({
command: embassy::diagnostic_api, command: embassy::diagnostic_api,
context: ctx.clone(), context: ctx.clone(),
@@ -356,7 +358,7 @@ fn main() {
}) })
.await .await
.with_kind(embassy::ErrorKind::Network)?; .with_kind(embassy::ErrorKind::Network)?;
Ok::<_, Error>(None) Ok::<_, Error>(shutdown.recv().await.with_kind(crate::ErrorKind::Unknown)?)
})() })()
.await .await
} }

View File

@@ -7,10 +7,10 @@ use serde::{Deserialize, Serialize};
use tracing::instrument; use tracing::instrument;
use super::{Config, ConfigSpec}; use super::{Config, ConfigSpec};
use crate::action::ActionImplementation;
use crate::context::RpcContext; use crate::context::RpcContext;
use crate::dependencies::Dependencies; use crate::dependencies::Dependencies;
use crate::id::ImageId; use crate::id::ImageId;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId; use crate::s9pk::manifest::PackageId;
use crate::status::health_check::HealthCheckId; use crate::status::health_check::HealthCheckId;
use crate::util::Version; use crate::util::Version;
@@ -26,17 +26,22 @@ pub struct ConfigRes {
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)] #[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
pub struct ConfigActions { pub struct ConfigActions {
pub get: ActionImplementation, pub get: PackageProcedure,
pub set: ActionImplementation, pub set: PackageProcedure,
} }
impl ConfigActions { impl ConfigActions {
#[instrument] #[instrument]
pub fn validate(&self, volumes: &Volumes, image_ids: &BTreeSet<ImageId>) -> Result<(), Error> { pub fn validate(
&self,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.get self.get
.validate(volumes, image_ids, true) .validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Get"))?; .with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Get"))?;
self.set self.set
.validate(volumes, image_ids, true) .validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Set"))?; .with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Set"))?;
Ok(()) Ok(())
} }
@@ -53,7 +58,7 @@ impl ConfigActions {
ctx, ctx,
pkg_id, pkg_id,
pkg_version, pkg_version,
Some("GetConfig"), ProcedureName::GetConfig,
volumes, volumes,
None::<()>, None::<()>,
false, false,
@@ -81,7 +86,7 @@ impl ConfigActions {
ctx, ctx,
pkg_id, pkg_id,
pkg_version, pkg_version,
Some("SetConfig"), ProcedureName::SetConfig,
volumes, volumes,
Some(input), Some(input),
false, false,
@@ -107,6 +112,7 @@ impl ConfigActions {
#[derive(Debug, Deserialize, Serialize)] #[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")] #[serde(rename_all = "kebab-case")]
pub struct SetResult { pub struct SetResult {
#[serde(default)]
#[serde(deserialize_with = "crate::util::serde::deserialize_from_str_opt")] #[serde(deserialize_with = "crate::util::serde::deserialize_from_str_opt")]
#[serde(serialize_with = "crate::util::serde::serialize_display_opt")] #[serde(serialize_with = "crate::util::serde::serialize_display_opt")]
pub signal: Option<Signal>, pub signal: Option<Signal>,

View File

@@ -6,7 +6,7 @@ use color_eyre::eyre::eyre;
use futures::future::{BoxFuture, FutureExt}; use futures::future::{BoxFuture, FutureExt};
use indexmap::IndexSet; use indexmap::IndexSet;
use itertools::Itertools; use itertools::Itertools;
use patch_db::{DbHandle, LockType}; use patch_db::{DbHandle, LockReceipt, LockTarget, LockTargetId, LockType, Verifier};
use rand::SeedableRng; use rand::SeedableRng;
use regex::Regex; use regex::Regex;
use rpc_toolkit::command; use rpc_toolkit::command;
@@ -14,17 +14,17 @@ use serde_json::Value;
use tracing::instrument; use tracing::instrument;
use crate::context::RpcContext; use crate::context::RpcContext;
use crate::db::model::CurrentDependencyInfo; use crate::db::model::{CurrentDependencies, CurrentDependencyInfo, CurrentDependents};
use crate::db::util::WithRevision;
use crate::dependencies::{ use crate::dependencies::{
add_dependent_to_current_dependents_lists, break_transitive, heal_all_dependents_transitive, add_dependent_to_current_dependents_lists, break_transitive, heal_all_dependents_transitive,
BreakageRes, DependencyError, DependencyErrors, TaggedDependencyError, BreakTransitiveReceipts, BreakageRes, Dependencies, DependencyConfig, DependencyError,
DependencyErrors, DependencyReceipt, TaggedDependencyError, TryHealReceipts,
}; };
use crate::install::cleanup::remove_from_current_dependents_lists; use crate::install::cleanup::{remove_from_current_dependents_lists, UpdateDependencyReceipts};
use crate::s9pk::manifest::{Manifest, PackageId}; use crate::s9pk::manifest::{Manifest, PackageId};
use crate::util::display_none; use crate::util::display_none;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat}; use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::{Error, ResultExt as _}; use crate::Error;
pub mod action; pub mod action;
pub mod spec; pub mod spec;
@@ -33,8 +33,8 @@ pub mod util;
pub use spec::{ConfigSpec, Defaultable}; pub use spec::{ConfigSpec, Defaultable};
use util::NumRange; use util::NumRange;
use self::action::ConfigRes; use self::action::{ConfigActions, ConfigRes};
use self::spec::{PackagePointerSpec, ValueSpecPointer}; use self::spec::{ConfigPointerReceipts, PackagePointerSpec, ValueSpecPointer};
pub type Config = serde_json::Map<String, Value>; pub type Config = serde_json::Map<String, Value>;
pub trait TypeOf { pub trait TypeOf {
@@ -163,6 +163,55 @@ pub fn config(#[arg] id: PackageId) -> Result<PackageId, Error> {
Ok(id) Ok(id)
} }
pub struct ConfigGetReceipts {
manifest_volumes: LockReceipt<crate::volume::Volumes, ()>,
manifest_version: LockReceipt<crate::util::Version, ()>,
manifest_config: LockReceipt<Option<ConfigActions>, ()>,
}
impl ConfigGetReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks, id);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<LockTargetId>,
id: &PackageId,
) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
let manifest_version = crate::db::DatabaseModel::new()
.package_data()
.idx_model(id)
.and_then(|x| x.installed())
.map(|x| x.manifest().version())
.make_locker(LockType::Write)
.add_to_keys(locks);
let manifest_volumes = crate::db::DatabaseModel::new()
.package_data()
.idx_model(id)
.and_then(|x| x.installed())
.map(|x| x.manifest().volumes())
.make_locker(LockType::Write)
.add_to_keys(locks);
let manifest_config = crate::db::DatabaseModel::new()
.package_data()
.idx_model(id)
.and_then(|x| x.installed())
.map(|x| x.manifest().config())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
manifest_volumes: manifest_volumes.verify(skeleton_key)?,
manifest_version: manifest_version.verify(skeleton_key)?,
manifest_config: manifest_config.verify(skeleton_key)?,
})
}
}
}
 #[command(display(display_serializable))]
 #[instrument(skip(ctx))]
 pub async fn get(
@@ -173,34 +222,22 @@ pub async fn get(
     format: Option<IoFormat>,
 ) -> Result<ConfigRes, Error> {
     let mut db = ctx.db.handle();
-    let pkg_model = crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(&id)
-        .and_then(|m| m.installed())
-        .expect(&mut db)
-        .await
-        .with_kind(crate::ErrorKind::NotFound)?;
-    let action = pkg_model
-        .clone()
-        .manifest()
-        .config()
-        .get(&mut db, true)
+    let receipts = ConfigGetReceipts::new(&mut db, &id).await?;
+    let action = receipts
+        .manifest_config
+        .get(&mut db)
         .await?
-        .to_owned()
         .ok_or_else(|| Error::new(eyre!("{} has no config", id), crate::ErrorKind::NotFound))?;
-    let version = pkg_model
-        .clone()
-        .manifest()
-        .version()
-        .get(&mut db, true)
-        .await?;
-    let volumes = pkg_model.manifest().volumes().get(&mut db, true).await?;
-    action.get(&ctx, &id, &*version, &*volumes).await
+    let volumes = receipts.manifest_volumes.get(&mut db).await?;
+    let version = receipts.manifest_version.get(&mut db).await?;
+    action.get(&ctx, &id, &version, &volumes).await
 }
 #[command(
     subcommands(self(set_impl(async, context(RpcContext))), set_dry),
-    display(display_none)
+    display(display_none),
+    metadata(sync_db = true)
 )]
 #[instrument]
 pub fn set(
@@ -210,25 +247,171 @@ pub fn set(
     format: Option<IoFormat>,
     #[arg(long = "timeout")] timeout: Option<crate::util::serde::Duration>,
     #[arg(stdin, parse(parse_stdin_deserializable))] config: Option<Config>,
-    #[arg(rename = "expire-id", long = "expire-id")] expire_id: Option<String>,
-) -> Result<(PackageId, Option<Config>, Option<Duration>, Option<String>), Error> {
-    Ok((id, config, timeout.map(|d| *d), expire_id))
+) -> Result<(PackageId, Option<Config>, Option<Duration>), Error> {
+    Ok((id, config, timeout.map(|d| *d)))
+}
/// The new locking scheme finds all the locks an operation may need and lifts
/// them up into a single bundle. That bundle is then passed down into the
/// functions that touch the db, so instead of taking locks deep inside the
/// system, the locks are already held and the db operations can simply proceed.
/// A lock receipt has two type parameters: the type being read from or written
/// to the db, and the key type that must be supplied on get/set because the
/// locked paths contain wildcards.
pub struct ConfigReceipts {
    pub dependency_receipt: DependencyReceipt,
    pub config_receipts: ConfigPointerReceipts,
    pub update_dependency_receipts: UpdateDependencyReceipts,
    pub try_heal_receipts: TryHealReceipts,
    pub break_transitive_receipts: BreakTransitiveReceipts,
    configured: LockReceipt<bool, String>,
    config_actions: LockReceipt<ConfigActions, String>,
    dependencies: LockReceipt<Dependencies, String>,
    volumes: LockReceipt<crate::volume::Volumes, String>,
    version: LockReceipt<crate::util::Version, String>,
    manifest: LockReceipt<Manifest, String>,
    system_pointers: LockReceipt<Vec<spec::SystemPointerSpec>, String>,
    pub current_dependents: LockReceipt<CurrentDependents, String>,
    pub current_dependencies: LockReceipt<CurrentDependencies, String>,
    dependency_errors: LockReceipt<DependencyErrors, String>,
    manifest_dependencies_config: LockReceipt<DependencyConfig, (String, String)>,
}
impl ConfigReceipts {
    pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
        let mut locks = Vec::new();
        let setup = Self::setup(&mut locks);
        Ok(setup(&db.lock_all(locks).await?)?)
    }
    pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
        let dependency_receipt = DependencyReceipt::setup(locks);
        let config_receipts = ConfigPointerReceipts::setup(locks);
        let update_dependency_receipts = UpdateDependencyReceipts::setup(locks);
        let break_transitive_receipts = BreakTransitiveReceipts::setup(locks);
        let try_heal_receipts = TryHealReceipts::setup(locks);
        let configured: LockTarget<bool, String> = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.status().configured())
            .make_locker(LockType::Write)
            .add_to_keys(locks);
        let config_actions = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .and_then(|x| x.manifest().config())
            .make_locker(LockType::Read)
            .add_to_keys(locks);
        let dependencies = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.manifest().dependencies())
            .make_locker(LockType::Read)
            .add_to_keys(locks);
        let volumes = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.manifest().volumes())
            .make_locker(LockType::Read)
            .add_to_keys(locks);
        let version = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.manifest().version())
            .make_locker(LockType::Read)
            .add_to_keys(locks);
        let manifest = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.manifest())
            .make_locker(LockType::Read)
            .add_to_keys(locks);
        let system_pointers = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.system_pointers())
            .make_locker(LockType::Write)
            .add_to_keys(locks);
        let current_dependents = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.current_dependents())
            .make_locker(LockType::Write)
            .add_to_keys(locks);
        let current_dependencies = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.current_dependencies())
            .make_locker(LockType::Write)
            .add_to_keys(locks);
        let dependency_errors = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .map(|x| x.status().dependency_errors())
            .make_locker(LockType::Write)
            .add_to_keys(locks);
        let manifest_dependencies_config = crate::db::DatabaseModel::new()
            .package_data()
            .star()
            .installed()
            .and_then(|x| x.manifest().dependencies().star().config())
            .make_locker(LockType::Write)
            .add_to_keys(locks);
        move |skeleton_key| {
            Ok(Self {
                dependency_receipt: dependency_receipt(skeleton_key)?,
                config_receipts: config_receipts(skeleton_key)?,
                try_heal_receipts: try_heal_receipts(skeleton_key)?,
                break_transitive_receipts: break_transitive_receipts(skeleton_key)?,
                update_dependency_receipts: update_dependency_receipts(skeleton_key)?,
                configured: configured.verify(skeleton_key)?,
                config_actions: config_actions.verify(skeleton_key)?,
                dependencies: dependencies.verify(skeleton_key)?,
                volumes: volumes.verify(skeleton_key)?,
                version: version.verify(skeleton_key)?,
                manifest: manifest.verify(skeleton_key)?,
                system_pointers: system_pointers.verify(skeleton_key)?,
                current_dependents: current_dependents.verify(skeleton_key)?,
                current_dependencies: current_dependencies.verify(skeleton_key)?,
                dependency_errors: dependency_errors.verify(skeleton_key)?,
                manifest_dependencies_config: manifest_dependencies_config.verify(skeleton_key)?,
            })
        }
    }
}
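The doc comment on `ConfigReceipts` describes the shape of the pattern: declare every lock target up front, acquire them all in one batch, and only then exchange each target for a receipt that proves the lock is held. A minimal self-contained sketch of that shape (the names and types here are illustrative stand-ins, not the real patch-db API):

```rust
use std::collections::BTreeSet;

// A declared lock target: a db glob path plus the access mode we need.
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct LockTarget {
    glob: &'static str,
    write: bool,
}

// Proof that one batch of targets was acquired together.
struct Verifier {
    granted: BTreeSet<LockTarget>,
}

// A verified receipt; holding one means the path is already locked.
struct Receipt {
    glob: &'static str,
}

impl LockTarget {
    // Register the target in the batch (mirrors `add_to_keys`).
    fn add_to_keys(self, keys: &mut Vec<LockTarget>) -> LockTarget {
        keys.push(self.clone());
        self
    }
    // Exchange the target for a receipt (mirrors `verify(skeleton_key)`),
    // failing if it was never part of the acquired batch.
    fn verify(self, v: &Verifier) -> Result<Receipt, String> {
        if v.granted.contains(&self) {
            Ok(Receipt { glob: self.glob })
        } else {
            Err(format!("lock not granted for {}", self.glob))
        }
    }
}

// Stand-in for `db.lock_all(locks)`: acquire every declared target at once.
fn lock_all(keys: Vec<LockTarget>) -> Verifier {
    Verifier {
        granted: keys.into_iter().collect(),
    }
}

fn main() {
    let mut keys = Vec::new();
    let configured = LockTarget {
        glob: "/package-data/*/installed/status/configured",
        write: true,
    }
    .add_to_keys(&mut keys);
    let version = LockTarget {
        glob: "/package-data/*/installed/manifest/version",
        write: false,
    }
    .add_to_keys(&mut keys);

    // One batched acquisition up front, then receipts for every path.
    let verifier = lock_all(keys);
    let configured = configured.verify(&verifier).unwrap();
    let version = version.verify(&verifier).unwrap();
    println!("{} {}", configured.glob, version.glob);
}
```

The point of the two-phase design is that no function deeper in the call graph can take a lock the caller did not declare, which rules out lock-ordering deadlocks between concurrent operations.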
 #[command(rename = "dry", display(display_serializable))]
 #[instrument(skip(ctx))]
 pub async fn set_dry(
     #[context] ctx: RpcContext,
-    #[parent_data] (id, config, timeout, _): (
-        PackageId,
-        Option<Config>,
-        Option<Duration>,
-        Option<String>,
-    ),
+    #[parent_data] (id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
 ) -> Result<BreakageRes, Error> {
     let mut db = ctx.db.handle();
     let mut tx = db.begin().await?;
     let mut breakages = BTreeMap::new();
+    let locks = ConfigReceipts::new(&mut tx).await?;
     configure(
         &ctx,
         &mut tx,
@@ -238,20 +421,11 @@ pub async fn set_dry(
         true,
         &mut BTreeMap::new(),
         &mut breakages,
+        &locks,
     )
     .await?;
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(&id)
-        .expect(&mut tx)
-        .await?
-        .installed()
-        .expect(&mut tx)
-        .await?
-        .status()
-        .configured()
-        .put(&mut tx, &true)
-        .await?;
+    locks.configured.set(&mut tx, true, &id).await?;
     tx.abort().await?;
     Ok(BreakageRes(breakages))
 }
@@ -259,11 +433,12 @@ pub async fn set_dry(
 #[instrument(skip(ctx))]
 pub async fn set_impl(
     ctx: RpcContext,
-    (id, config, timeout, expire_id): (PackageId, Option<Config>, Option<Duration>, Option<String>),
-) -> Result<WithRevision<()>, Error> {
+    (id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
+) -> Result<(), Error> {
     let mut db = ctx.db.handle();
     let mut tx = db.begin().await?;
     let mut breakages = BTreeMap::new();
+    let locks = ConfigReceipts::new(&mut tx).await?;
     configure(
         &ctx,
         &mut tx,
@@ -273,42 +448,34 @@ pub async fn set_impl(
         false,
         &mut BTreeMap::new(),
         &mut breakages,
+        &locks,
     )
     .await?;
-    Ok(WithRevision {
-        response: (),
-        revision: tx.commit(expire_id).await?,
-    })
+    tx.commit().await?;
+    Ok(())
 }
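`set_dry` and `set_impl` above run the exact same `configure` call inside a transaction; the only difference is that the dry run ends with `tx.abort()` while the real run ends with `tx.commit()`, so breakages can be computed and reported without persisting anything. A minimal sketch of that dry-run-via-abort pattern (`Tx` here is an illustrative stand-in, not the real patch-db transaction type):

```rust
// A toy transaction that buffers writes until commit.
struct Tx {
    writes: Vec<(String, String)>,
}

impl Tx {
    fn begin() -> Self {
        Tx { writes: Vec::new() }
    }
    fn set(&mut self, key: &str, value: &str) {
        self.writes.push((key.to_string(), value.to_string()));
    }
    // Persist the buffered writes into the db (stand-in for `tx.commit()`).
    fn commit(mut self, db: &mut Vec<(String, String)>) {
        db.append(&mut self.writes);
    }
    // Discard the buffered writes (stand-in for `tx.abort()`).
    fn abort(self) {}
}

// The shared mutation, analogous to `configure` being called by both paths.
fn configure(tx: &mut Tx) {
    tx.set("/package-data/app/installed/status/configured", "true");
}

fn main() {
    let mut db: Vec<(String, String)> = Vec::new();

    // Dry run: same work, then abort; nothing reaches the db.
    let mut tx = Tx::begin();
    configure(&mut tx);
    tx.abort();
    assert!(db.is_empty());

    // Real run: commit persists the buffered write.
    let mut tx = Tx::begin();
    configure(&mut tx);
    tx.commit(&mut db);
    assert_eq!(db.len(), 1);
    println!("persisted writes: {}", db.len());
}
```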
-#[instrument(skip(ctx, db))]
-pub async fn configure<Db: DbHandle>(
+#[instrument(skip(ctx, db, receipts))]
+pub async fn configure<'a, Db: DbHandle>(
     ctx: &RpcContext,
-    db: &mut Db,
+    db: &'a mut Db,
     id: &PackageId,
     config: Option<Config>,
     timeout: &Option<Duration>,
     dry_run: bool,
     overrides: &mut BTreeMap<PackageId, Config>,
     breakages: &mut BTreeMap<PackageId, TaggedDependencyError>,
+    receipts: &ConfigReceipts,
 ) -> Result<(), Error> {
-    configure_rec(ctx, db, id, config, timeout, dry_run, overrides, breakages).await?;
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(&id)
-        .expect(db)
-        .await?
-        .installed()
-        .expect(db)
-        .await?
-        .status()
-        .configured()
-        .put(db, &true)
-        .await?;
+    configure_rec(
+        ctx, db, id, config, timeout, dry_run, overrides, breakages, receipts,
+    )
+    .await?;
+    receipts.configured.set(db, true, &id).await?;
     Ok(())
 }
-#[instrument(skip(ctx, db))]
+#[instrument(skip(ctx, db, receipts))]
 pub fn configure_rec<'a, Db: DbHandle>(
     ctx: &'a RpcContext,
     db: &'a mut Db,
@@ -318,48 +485,33 @@ pub fn configure_rec<'a, Db: DbHandle>(
     dry_run: bool,
     overrides: &'a mut BTreeMap<PackageId, Config>,
     breakages: &'a mut BTreeMap<PackageId, TaggedDependencyError>,
+    receipts: &'a ConfigReceipts,
 ) -> BoxFuture<'a, Result<(), Error>> {
     async move {
-        crate::db::DatabaseModel::new()
-            .package_data()
-            .lock(db, LockType::Write)
-            .await?;
         // fetch data from db
-        let pkg_model = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(id)
-            .and_then(|m| m.installed())
-            .expect(db)
-            .await
-            .with_kind(crate::ErrorKind::NotFound)?;
-        let action = pkg_model
-            .clone()
-            .manifest()
-            .config()
-            .get(db, true)
+        let action = receipts
+            .config_actions
+            .get(db, id)
             .await?
-            .to_owned()
-            .ok_or_else(|| Error::new(eyre!("{} has no config", id), crate::ErrorKind::NotFound))?;
-        let version = pkg_model.clone().manifest().version().get(db, true).await?;
-        let dependencies = pkg_model
-            .clone()
-            .manifest()
-            .dependencies()
-            .get(db, true)
-            .await?;
-        let volumes = pkg_model.clone().manifest().volumes().get(db, true).await?;
-        let is_needs_config = !*pkg_model
-            .clone()
-            .status()
-            .configured()
-            .get(db, true)
-            .await?;
+            .ok_or_else(not_found)?;
+        let dependencies = receipts
+            .dependencies
+            .get(db, id)
+            .await?
+            .ok_or_else(not_found)?;
+        let volumes = receipts.volumes.get(db, id).await?.ok_or_else(not_found)?;
+        let is_needs_config = !receipts
+            .configured
+            .get(db, id)
+            .await?
+            .ok_or_else(not_found)?;
+        let version = receipts.version.get(db, id).await?.ok_or_else(not_found)?;
         // get current config and current spec
         let ConfigRes {
             config: old_config,
             spec,
-        } = action.get(ctx, id, &*version, &*volumes).await?;
+        } = action.get(ctx, id, &version, &volumes).await?;
         // determine new config to use
         let mut config = if let Some(config) = config.or_else(|| old_config.clone()) {
@@ -368,45 +520,49 @@ pub fn configure_rec<'a, Db: DbHandle>(
             spec.gen(&mut rand::rngs::StdRng::from_entropy(), timeout)?
         };
-        let manifest = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(id)
-            .and_then(|m| m.installed())
-            .map::<_, Manifest>(|i| i.manifest())
-            .expect(db)
-            .await?
-            .get(db, true)
-            .await
-            .with_kind(crate::ErrorKind::NotFound)?;
-        spec.validate(&*manifest)?;
+        let manifest = receipts.manifest.get(db, id).await?.ok_or_else(not_found)?;
+        spec.validate(&manifest)?;
         spec.matches(&config)?; // check that new config matches spec
-        spec.update(ctx, db, &*manifest, &*overrides, &mut config)
-            .await?; // dereference pointers in the new config
+        spec.update(
+            ctx,
+            db,
+            &manifest,
+            &*overrides,
+            &mut config,
+            &receipts.config_receipts,
+        )
+        .await?; // dereference pointers in the new config
         // create backreferences to pointers
-        let mut sys = pkg_model.clone().system_pointers().get_mut(db).await?;
+        let mut sys = receipts
+            .system_pointers
+            .get(db, &id)
+            .await?
+            .ok_or_else(not_found)?;
         sys.truncate(0);
-        let mut current_dependencies: BTreeMap<PackageId, CurrentDependencyInfo> = dependencies
-            .0
-            .iter()
-            .filter_map(|(id, info)| {
-                if info.requirement.required() {
-                    Some((id.clone(), CurrentDependencyInfo::default()))
-                } else {
-                    None
-                }
-            })
-            .collect();
+        let mut current_dependencies: CurrentDependencies = CurrentDependencies(
+            dependencies
+                .0
+                .iter()
+                .filter_map(|(id, info)| {
+                    if info.requirement.required() {
+                        Some((id.clone(), CurrentDependencyInfo::default()))
+                    } else {
+                        None
+                    }
+                })
+                .collect(),
+        );
         for ptr in spec.pointers(&config)? {
             match ptr {
                 ValueSpecPointer::Package(pkg_ptr) => {
                     if let Some(current_dependency) =
-                        current_dependencies.get_mut(pkg_ptr.package_id())
+                        current_dependencies.0.get_mut(pkg_ptr.package_id())
                     {
                         current_dependency.pointers.push(pkg_ptr);
                     } else {
-                        current_dependencies.insert(
+                        current_dependencies.0.insert(
                             pkg_ptr.package_id().to_owned(),
                             CurrentDependencyInfo {
                                 pointers: vec![pkg_ptr],
@@ -418,20 +574,20 @@ pub fn configure_rec<'a, Db: DbHandle>(
                 ValueSpecPointer::System(s) => sys.push(s),
             }
         }
-        sys.save(db).await?;
+        receipts.system_pointers.set(db, sys, &id).await?;
         let signal = if !dry_run {
             // run config action
             let res = action
-                .set(ctx, id, &*version, &*dependencies, &*volumes, &config)
+                .set(ctx, id, &version, &dependencies, &volumes, &config)
                 .await?;
             // track dependencies with no pointers
             for (package_id, health_checks) in res.depends_on.into_iter() {
-                if let Some(current_dependency) = current_dependencies.get_mut(&package_id) {
+                if let Some(current_dependency) = current_dependencies.0.get_mut(&package_id) {
                     current_dependency.health_checks.extend(health_checks);
                 } else {
-                    current_dependencies.insert(
+                    current_dependencies.0.insert(
                         package_id,
                         CurrentDependencyInfo {
                             pointers: Vec::new(),
@@ -442,79 +598,111 @@ pub fn configure_rec<'a, Db: DbHandle>(
             }
             // track dependency health checks
-            current_dependencies = current_dependencies
-                .into_iter()
-                .filter(|(dep_id, _)| {
-                    if dep_id != id && !manifest.dependencies.0.contains_key(dep_id) {
-                        tracing::warn!("Illegal dependency specified: {}", dep_id);
-                        false
-                    } else {
-                        true
-                    }
-                })
-                .collect();
+            current_dependencies = current_dependencies.map(|x| {
+                x.into_iter()
+                    .filter(|(dep_id, _)| {
+                        if dep_id != id && !manifest.dependencies.0.contains_key(dep_id) {
+                            tracing::warn!("Illegal dependency specified: {}", dep_id);
+                            false
+                        } else {
+                            true
+                        }
+                    })
+                    .collect()
+            });
             res.signal
         } else {
             None
         };
         // update dependencies
-        let mut deps = pkg_model.clone().current_dependencies().get_mut(db).await?;
-        remove_from_current_dependents_lists(db, id, deps.keys()).await?; // remove previous
-        add_dependent_to_current_dependents_lists(db, id, &current_dependencies).await?; // add new
-        current_dependencies.remove(id);
-        *deps = current_dependencies.clone();
-        deps.save(db).await?;
-        let mut errs = pkg_model
-            .clone()
-            .status()
-            .dependency_errors()
-            .get_mut(db)
-            .await?;
-        *errs = DependencyErrors::init(ctx, db, &*manifest, &current_dependencies).await?;
-        errs.save(db).await?;
+        let prev_current_dependencies = receipts
+            .current_dependencies
+            .get(db, &id)
+            .await?
+            .unwrap_or_default();
+        remove_from_current_dependents_lists(
+            db,
+            id,
+            &prev_current_dependencies,
+            &receipts.current_dependents,
+        )
+        .await?; // remove previous
+        add_dependent_to_current_dependents_lists(
+            db,
+            id,
+            &current_dependencies,
+            &receipts.current_dependents,
+        )
+        .await?; // add new
+        current_dependencies.0.remove(id);
+        receipts
+            .current_dependencies
+            .set(db, current_dependencies.clone(), &id)
+            .await?;
+        let errs = receipts
+            .dependency_errors
+            .get(db, &id)
+            .await?
+            .ok_or_else(not_found)?;
+        tracing::warn!("Dependency Errors: {:?}", errs);
+        let errs = DependencyErrors::init(
+            ctx,
+            db,
+            &manifest,
+            &current_dependencies,
+            &receipts.dependency_receipt.try_heal,
+        )
+        .await?;
+        receipts.dependency_errors.set(db, errs, &id).await?;
         // cache current config for dependents
         overrides.insert(id.clone(), config.clone());
         // handle dependents
-        let dependents = pkg_model.clone().current_dependents().get(db, true).await?;
+        let dependents = receipts
+            .current_dependents
+            .get(db, id)
+            .await?
+            .ok_or_else(not_found)?;
         let prev = if is_needs_config { None } else { old_config }
             .map(Value::Object)
             .unwrap_or_default();
         let next = Value::Object(config.clone());
-        for (dependent, dep_info) in dependents.iter().filter(|(dep_id, _)| dep_id != &id) {
+        for (dependent, dep_info) in dependents.0.iter().filter(|(dep_id, _)| dep_id != &id) {
             // check if config passes dependent check
-            let dependent_model = crate::db::DatabaseModel::new()
-                .package_data()
-                .idx_model(dependent)
-                .and_then(|pkg| pkg.installed())
-                .expect(db)
-                .await?;
-            if let Some(cfg) = &*dependent_model
-                .clone()
-                .manifest()
-                .dependencies()
-                .idx_model(id)
-                .expect(db)
-                .await?
-                .config()
-                .get(db, true)
+            if let Some(cfg) = receipts
+                .manifest_dependencies_config
+                .get(db, (&dependent, &id))
                 .await?
             {
-                let manifest = dependent_model.clone().manifest().get(db, true).await?;
+                let manifest = receipts
+                    .manifest
+                    .get(db, &dependent)
+                    .await?
+                    .ok_or_else(not_found)?;
                 if let Err(error) = cfg
                     .check(
                         ctx,
                         dependent,
                         &manifest.version,
                         &manifest.volumes,
+                        id,
                         &config,
                     )
                     .await?
                 {
                     let dep_err = DependencyError::ConfigUnsatisfied { error };
-                    break_transitive(db, dependent, id, dep_err, breakages).await?;
+                    break_transitive(
+                        db,
+                        dependent,
+                        id,
+                        dep_err,
+                        breakages,
+                        &receipts.break_transitive_receipts,
+                    )
+                    .await?;
                 }
                 // handle backreferences
@@ -523,6 +711,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
                 if cfg_ptr.select(&next) != cfg_ptr.select(&prev) {
                     if let Err(e) = configure_rec(
                         ctx, db, dependent, None, timeout, dry_run, overrides, breakages,
+                        receipts,
                     )
                     .await
                     {
@@ -535,6 +724,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
                             error: format!("{}", e),
                         },
                         breakages,
+                        &receipts.break_transitive_receipts,
                     )
                     .await?;
                 } else {
@@ -544,7 +734,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
                 }
             }
         }
-        heal_all_dependents_transitive(ctx, db, id).await?;
+        heal_all_dependents_transitive(ctx, db, id, &receipts.dependency_receipt).await?;
     }
     }
@@ -568,3 +758,67 @@ pub fn configure_rec<'a, Db: DbHandle>(
     }
     .boxed()
 }
#[instrument]
pub fn not_found() -> Error {
Error::new(eyre!("Could not find"), crate::ErrorKind::Incoherent)
}
/// We want a double check that the paths are what we expect them to be,
/// since earlier the paths were not what we expected.
#[tokio::test]
async fn ensure_creation_of_config_paths_makes_sense() {
    let mut fake = patch_db::test_utils::NoOpDb();
    let config_locks = ConfigReceipts::new(&mut fake).await.unwrap();
    assert_eq!(
        &format!("{}", config_locks.configured.lock.glob),
        "/package-data/*/installed/status/configured"
    );
    assert_eq!(
        &format!("{}", config_locks.config_actions.lock.glob),
        "/package-data/*/installed/manifest/config"
    );
    assert_eq!(
        &format!("{}", config_locks.dependencies.lock.glob),
        "/package-data/*/installed/manifest/dependencies"
    );
    assert_eq!(
        &format!("{}", config_locks.volumes.lock.glob),
        "/package-data/*/installed/manifest/volumes"
    );
    assert_eq!(
        &format!("{}", config_locks.version.lock.glob),
        "/package-data/*/installed/manifest/version"
    );
    assert_eq!(
        &format!("{}", config_locks.manifest.lock.glob),
        "/package-data/*/installed/manifest"
    );
    assert_eq!(
        &format!("{}", config_locks.system_pointers.lock.glob),
        "/package-data/*/installed/system-pointers"
    );
    assert_eq!(
        &format!("{}", config_locks.current_dependents.lock.glob),
        "/package-data/*/installed/current-dependents"
    );
    assert_eq!(
        &format!("{}", config_locks.dependency_errors.lock.glob),
        "/package-data/*/installed/status/dependency-errors"
    );
    assert_eq!(
        &format!("{}", config_locks.manifest_dependencies_config.lock.glob),
        "/package-data/*/installed/manifest/dependencies/*/config"
    );
}
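The test above can assert on glob strings because each model accessor contributes one path segment, and `.star()` contributes a `*` wildcard, so a locker's glob is just the joined segment list. A hypothetical sketch of that shape (the real patch-db builder differs; this only illustrates what is being asserted):

```rust
// Join path segments into a glob string, one leading slash per segment.
fn glob(segments: &[&str]) -> String {
    segments.iter().map(|s| format!("/{}", s)).collect::<String>()
}

fn main() {
    // package_data().star().installed().manifest().version()
    let version = glob(&["package-data", "*", "installed", "manifest", "version"]);
    assert_eq!(version, "/package-data/*/installed/manifest/version");

    // dependencies().star().config() contributes a second wildcard, which is
    // why that receipt's key type is a pair of ids rather than a single id.
    let dep_cfg = glob(&[
        "package-data",
        "*",
        "installed",
        "manifest",
        "dependencies",
        "*",
        "config",
    ]);
    assert_eq!(dep_cfg, "/package-data/*/installed/manifest/dependencies/*/config");
    println!("{}", dep_cfg);
}
```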


@@ -12,12 +12,13 @@
 use async_trait::async_trait;
 use indexmap::{IndexMap, IndexSet};
 use itertools::Itertools;
 use jsonpath_lib::Compiled as CompiledJsonPath;
-use patch_db::{DbHandle, OptionModel};
+use patch_db::{DbHandle, LockReceipt, LockType};
 use rand::{CryptoRng, Rng};
 use regex::Regex;
+use serde::de::{MapAccess, Visitor};
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
 use serde_json::{Number, Value};
-use sqlx::SqlitePool;
+use sqlx::PgPool;

 use super::util::{self, CharSet, NumRange, UniqueBy, STATIC_NULL};
 use super::{Config, MatchError, NoMatchWithPath, TimeoutError, TypeOf};
@@ -44,6 +45,7 @@ pub trait ValueSpec {
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError>;
     // returns all pointers that are live in the provided config
     fn pointers(&self, value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath>;
@@ -160,9 +162,10 @@ where
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         self.inner
-            .update(ctx, db, manifest, config_overrides, value)
+            .update(ctx, db, manifest, config_overrides, value, receipts)
             .await
     }
     fn pointers(&self, value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath> {
@@ -204,9 +207,10 @@ where
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         self.inner
-            .update(ctx, db, manifest, config_overrides, value)
+            .update(ctx, db, manifest, config_overrides, value, receipts)
             .await
     }
     fn pointers(&self, value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath> {
@@ -281,9 +285,10 @@ where
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         self.inner
-            .update(ctx, db, manifest, config_overrides, value)
+            .update(ctx, db, manifest, config_overrides, value, receipts)
             .await
     }
     fn pointers(&self, value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath> {
@@ -343,7 +348,7 @@ pub enum ValueSpecAny {
     Pointer(WithDescription<ValueSpecPointer>),
 }
 impl ValueSpecAny {
-    pub fn name<'a>(&'a self) -> &'a str {
+    pub fn name(&self) -> &'_ str {
         match self {
             ValueSpecAny::Boolean(b) => b.name.as_str(),
             ValueSpecAny::Enum(e) => e.name.as_str(),
@@ -395,16 +400,41 @@ impl ValueSpec for ValueSpecAny {
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         match self {
-            ValueSpecAny::Boolean(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::Enum(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::List(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::Number(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::Object(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::String(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::Union(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecAny::Pointer(a) => a.update(ctx, db, manifest, config_overrides, value).await,
+            ValueSpecAny::Boolean(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::Enum(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::List(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::Number(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::Object(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::String(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::Union(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecAny::Pointer(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
         }
     }
     fn pointers(&self, value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath> {
@@ -489,6 +519,7 @@ impl ValueSpec for ValueSpecBoolean {
         _manifest: &Manifest,
         _config_overrides: &BTreeMap<PackageId, Config>,
         _value: &mut Value,
+        _receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         Ok(())
     }
@@ -578,6 +609,7 @@ impl ValueSpec for ValueSpecEnum {
         _manifest: &Manifest,
         _config_overrides: &BTreeMap<PackageId, Config>,
         _value: &mut Value,
+        _receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         Ok(())
     }
@@ -664,12 +696,13 @@ where
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         if let Value::Array(ref mut ls) = value {
             for (i, val) in ls.into_iter().enumerate() {
                 match self
                     .spec
-                    .update(ctx, db, manifest, config_overrides, val)
+                    .update(ctx, db, manifest, config_overrides, val, receipts)
                     .await
                 {
                     Err(ConfigurationError::NoMatch(e)) => {
@@ -771,13 +804,29 @@ impl ValueSpec for ValueSpecList {
         manifest: &Manifest,
         config_overrides: &BTreeMap<PackageId, Config>,
         value: &mut Value,
+        receipts: &ConfigPointerReceipts,
     ) -> Result<(), ConfigurationError> {
         match self {
-            ValueSpecList::Enum(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecList::Number(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecList::Object(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecList::String(a) => a.update(ctx, db, manifest, config_overrides, value).await,
-            ValueSpecList::Union(a) => a.update(ctx, db, manifest, config_overrides, value).await,
+            ValueSpecList::Enum(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecList::Number(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecList::Object(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecList::String(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
+            ValueSpecList::Union(a) => {
+                a.update(ctx, db, manifest, config_overrides, value, receipts)
+                    .await
+            }
         }
     }
     fn pointers(&self, value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath> {
@@ -898,6 +947,7 @@ impl ValueSpec for ValueSpecNumber {
_manifest: &Manifest, _manifest: &Manifest,
_config_overrides: &BTreeMap<PackageId, Config>, _config_overrides: &BTreeMap<PackageId, Config>,
_value: &mut Value, _value: &mut Value,
_receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> { ) -> Result<(), ConfigurationError> {
Ok(()) Ok(())
} }
@@ -961,10 +1011,11 @@ impl ValueSpec for ValueSpecObject {
    manifest: &Manifest,
    config_overrides: &BTreeMap<PackageId, Config>,
    value: &mut Value,
+   receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
    if let Value::Object(o) = value {
        self.spec
-           .update(ctx, db, manifest, config_overrides, o)
+           .update(ctx, db, manifest, config_overrides, o, receipts)
            .await
    } else {
        Err(ConfigurationError::NoMatch(NoMatchWithPath::new(
@@ -1063,16 +1114,20 @@ impl ConfigSpec {
    manifest: &Manifest,
    config_overrides: &BTreeMap<PackageId, Config>,
    cfg: &mut Config,
+   receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
    for (k, vs) in self.0.iter() {
        match cfg.get_mut(k) {
            None => {
                let mut v = Value::Null;
-               vs.update(ctx, db, manifest, config_overrides, &mut v)
+               vs.update(ctx, db, manifest, config_overrides, &mut v, receipts)
                    .await?;
                cfg.insert(k.clone(), v);
            }
-           Some(v) => match vs.update(ctx, db, manifest, config_overrides, v).await {
+           Some(v) => match vs
+               .update(ctx, db, manifest, config_overrides, v, receipts)
+               .await
+           {
                Err(ConfigurationError::NoMatch(e)) => {
                    Err(ConfigurationError::NoMatch(e.prepend(k.clone())))
                }
@@ -1113,18 +1168,95 @@ pub struct Pattern {
    pub pattern_description: String,
}
-#[derive(Clone, Debug, Serialize, Deserialize)]
+#[derive(Clone, Debug, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct ValueSpecString {
    #[serde(flatten)]
    pub pattern: Option<Pattern>,
-   #[serde(default)]
+   pub textarea: bool,
    pub copyable: bool,
-   #[serde(default)]
    pub masked: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
-   #[serde(default)]
    pub placeholder: Option<String>,
}
impl<'de> Deserialize<'de> for ValueSpecString {
fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<ValueSpecString, D::Error> {
struct ValueSpecStringVisitor;
impl<'de> Visitor<'de> for ValueSpecStringVisitor {
type Value = ValueSpecString;
fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
formatter.write_str("struct ValueSpecString")
}
fn visit_map<V: MapAccess<'de>>(self, mut map: V) -> Result<ValueSpecString, V::Error> {
let mut pattern = None;
let mut pattern_description = None;
let mut textarea = false;
let mut copyable = false;
let mut masked = false;
let mut placeholder = None;
while let Some::<String>(key) = map.next_key()? {
if &key == "pattern" {
if pattern.is_some() {
return Err(serde::de::Error::duplicate_field("pattern"));
} else {
pattern = Some(
Regex::new(&map.next_value::<String>()?)
.map_err(serde::de::Error::custom)?,
);
}
} else if &key == "pattern-description" {
if pattern_description.is_some() {
return Err(serde::de::Error::duplicate_field("pattern-description"));
} else {
pattern_description = Some(map.next_value()?);
}
} else if &key == "textarea" {
textarea = map.next_value()?;
} else if &key == "copyable" {
copyable = map.next_value()?;
} else if &key == "masked" {
masked = map.next_value()?;
} else if &key == "placeholder" {
if placeholder.is_some() {
return Err(serde::de::Error::duplicate_field("placeholder"));
} else {
placeholder = Some(map.next_value()?);
}
}
}
let regex = match (pattern, pattern_description) {
(None, None) => None,
(Some(p), Some(d)) => Some(Pattern {
pattern: p,
pattern_description: d,
}),
(Some(_), None) => {
return Err(serde::de::Error::missing_field("pattern-description"));
}
(None, Some(_)) => {
return Err(serde::de::Error::missing_field("pattern"));
}
};
Ok(ValueSpecString {
pattern: regex,
textarea,
copyable,
masked,
placeholder,
})
}
}
const FIELDS: &'static [&'static str] = &[
"pattern",
"pattern-description",
"textarea",
"copyable",
"masked",
"placeholder",
];
deserializer.deserialize_struct("ValueSpecString", FIELDS, ValueSpecStringVisitor)
}
}
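The hand-written `Deserialize` above enforces a both-or-neither rule on `pattern` and `pattern-description`. As a loose, self-contained sketch of just that validation (the `HashMap` input and error strings here are stand-ins, not serde's types):

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for the paired-field rule in the visitor above:
/// `pattern` and `pattern-description` are only valid when both are present.
fn validate_pattern_fields(
    fields: &HashMap<String, String>,
) -> Result<Option<(String, String)>, String> {
    match (fields.get("pattern"), fields.get("pattern-description")) {
        // neither field: no pattern configured
        (None, None) => Ok(None),
        // both fields: a complete Pattern
        (Some(p), Some(d)) => Ok(Some((p.clone(), d.clone()))),
        // one without the other is a spec error
        (Some(_), None) => Err("missing field `pattern-description`".into()),
        (None, Some(_)) => Err("missing field `pattern`".into()),
    }
}
```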
#[async_trait]
impl ValueSpec for ValueSpecString {
    fn matches(&self, value: &Value) -> Result<(), NoMatchWithPath> {
@@ -1160,6 +1292,7 @@ impl ValueSpec for ValueSpecString {
    _manifest: &Manifest,
    _config_overrides: &BTreeMap<PackageId, Config>,
    _value: &mut Value,
+   _receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
    Ok(())
}
@@ -1192,10 +1325,7 @@ impl DefaultableWith for ValueSpecString {
    let candidate = spec.gen(rng);
    match (spec, &self.pattern) {
        (DefaultString::Entropy(_), Some(pattern))
-           if !pattern.pattern.is_match(&candidate) =>
-       {
-           ()
-       }
+           if !pattern.pattern.is_match(&candidate) => {}
        _ => {
            return Ok(Value::String(candidate));
        }
@@ -1204,6 +1334,8 @@ impl DefaultableWith for ValueSpecString {
            if &now.elapsed() > timeout {
                return Err(TimeoutError);
            }
+       } else {
+           return Ok(Value::String(candidate));
        }
    }
} else {
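The entropy-default hunks above implement a generate-and-check loop with a deadline: keep drawing candidates until one passes the pattern, or fail once the timeout elapses. A minimal std-only sketch of that control flow (`next_candidate`, `accept`, and the `Option` return are stand-ins for the crate's generator, regex match, and `TimeoutError`):

```rust
use std::time::{Duration, Instant};

/// Draw candidates until one satisfies `accept`, giving up after `timeout`.
fn gen_until<G, F>(mut next_candidate: G, accept: F, timeout: Duration) -> Option<String>
where
    G: FnMut() -> String,
    F: Fn(&str) -> bool,
{
    let now = Instant::now();
    loop {
        let candidate = next_candidate();
        if accept(&candidate) {
            return Some(candidate);
        }
        // analogous to `return Err(TimeoutError)` in the diff
        if now.elapsed() > timeout {
            return None;
        }
    }
}
```

Note the deadline is only checked after a failed candidate, mirroring the original loop's shape.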
@@ -1371,6 +1503,7 @@ impl ValueSpec for ValueSpecUnion {
    manifest: &Manifest,
    config_overrides: &BTreeMap<PackageId, Config>,
    value: &mut Value,
+   receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
    if let Value::Object(o) = value {
        match o.get(&self.tag.id) {
@@ -1381,7 +1514,10 @@ impl ValueSpec for ValueSpecUnion {
            None => Err(ConfigurationError::NoMatch(NoMatchWithPath::new(
                MatchError::Union(tag.clone(), self.variants.keys().cloned().collect()),
            ))),
-           Some(spec) => spec.update(ctx, db, manifest, config_overrides, o).await,
+           Some(spec) => {
+               spec.update(ctx, db, manifest, config_overrides, o, receipts)
+                   .await
+           }
        },
        Some(other) => Err(ConfigurationError::NoMatch(
            NoMatchWithPath::new(MatchError::InvalidType("string", other.type_of()))
@@ -1513,13 +1649,16 @@ impl ValueSpec for ValueSpecPointer {
    manifest: &Manifest,
    config_overrides: &BTreeMap<PackageId, Config>,
    value: &mut Value,
+   receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
    match self {
        ValueSpecPointer::Package(a) => {
-           a.update(ctx, db, manifest, config_overrides, value).await
+           a.update(ctx, db, manifest, config_overrides, value, receipts)
+               .await
        }
        ValueSpecPointer::System(a) => {
-           a.update(ctx, db, manifest, config_overrides, value).await
+           a.update(ctx, db, manifest, config_overrides, value, receipts)
+               .await
        }
    }
}
@@ -1563,12 +1702,17 @@ impl PackagePointerSpec {
    db: &mut Db,
    manifest: &Manifest,
    config_overrides: &BTreeMap<PackageId, Config>,
+   receipts: &ConfigPointerReceipts,
) -> Result<Value, ConfigurationError> {
    match &self {
        PackagePointerSpec::TorKey(key) => key.deref(&manifest.id, &ctx.secret_store).await,
-       PackagePointerSpec::TorAddress(tor) => tor.deref(db).await,
-       PackagePointerSpec::LanAddress(lan) => lan.deref(db).await,
-       PackagePointerSpec::Config(cfg) => cfg.deref(ctx, db, config_overrides).await,
+       PackagePointerSpec::TorAddress(tor) => {
+           tor.deref(db, &receipts.interface_addresses_receipt).await
+       }
+       PackagePointerSpec::LanAddress(lan) => {
+           lan.deref(db, &receipts.interface_addresses_receipt).await
+       }
+       PackagePointerSpec::Config(cfg) => cfg.deref(ctx, db, config_overrides, receipts).await,
    }
}
}
@@ -1616,8 +1760,11 @@ impl ValueSpec for PackagePointerSpec {
    manifest: &Manifest,
    config_overrides: &BTreeMap<PackageId, Config>,
    value: &mut Value,
+   receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
-   *value = self.deref(ctx, db, manifest, config_overrides).await?;
+   *value = self
+       .deref(ctx, db, manifest, config_overrides, receipts)
+       .await?;
    Ok(())
}
fn pointers(&self, _value: &Value) -> Result<BTreeSet<ValueSpecPointer>, NoMatchWithPath> {
@@ -1640,16 +1787,17 @@ pub struct TorAddressPointer {
    interface: InterfaceId,
}
impl TorAddressPointer {
-   async fn deref<Db: DbHandle>(&self, db: &mut Db) -> Result<Value, ConfigurationError> {
-       let addr = crate::db::DatabaseModel::new()
-           .package_data()
-           .idx_model(&self.package_id)
-           .and_then(|pde| pde.installed())
-           .and_then(|installed| installed.interface_addresses().idx_model(&self.interface))
-           .and_then(|addresses| addresses.tor_address())
-           .get(db, true)
-           .await
-           .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?;
+   async fn deref<Db: DbHandle>(
+       &self,
+       db: &mut Db,
+       receipt: &InterfaceAddressesReceipt,
+   ) -> Result<Value, ConfigurationError> {
+       let addr = receipt
+           .interface_addresses
+           .get(db, (&self.package_id, &self.interface))
+           .await
+           .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?
+           .and_then(|addresses| addresses.tor_address);
        Ok(addr.to_owned().map(Value::String).unwrap_or(Value::Null))
    }
}
@@ -1664,6 +1812,39 @@ impl fmt::Display for TorAddressPointer {
}
}
pub struct InterfaceAddressesReceipt {
interface_addresses: LockReceipt<crate::db::model::InterfaceAddresses, (String, String)>,
}
impl InterfaceAddressesReceipt {
pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
// let cleanup_receipts = CleanupFailedReceipts::setup(locks);
let interface_addresses = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.interface_addresses().star())
.make_locker(LockType::Read)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
// cleanup_receipts: cleanup_receipts(skeleton_key)?,
interface_addresses: interface_addresses.verify(skeleton_key)?,
})
}
}
}
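`InterfaceAddressesReceipt` (and the other receipt types added in this diff) all follow the same two-phase shape: first register every lock target that will be needed, then, once the whole set is acquired, verify each handle against it. A very loose std-only analogy of that pattern (patch_db's real `Locker`/`Verifier` API differs; the types and path string here are illustrative only):

```rust
use std::collections::HashSet;

// Phase-one registration: a would-be lock target.
struct Locker(&'static str);
impl Locker {
    fn add_to_keys(self, keys: &mut Vec<&'static str>) -> &'static str {
        keys.push(self.0);
        self.0
    }
}

// Phase-two check against the set of locks actually acquired.
struct Verifier(HashSet<&'static str>);
impl Verifier {
    fn verify(&self, key: &'static str) -> Result<&'static str, String> {
        if self.0.contains(key) {
            Ok(key)
        } else {
            Err(format!("{key} was not locked"))
        }
    }
}

// Mirrors the `setup(locks) -> impl FnOnce(&Verifier) -> Result<Self, _>` shape.
fn setup(keys: &mut Vec<&'static str>) -> impl FnOnce(&Verifier) -> Result<&'static str, String> {
    let key = Locker("/package-data/*/installed/interface-addresses").add_to_keys(keys);
    move |verifier| verifier.verify(key)
}
```

The point of the split is that all targets are known before any lock is taken, so the whole set can be acquired atomically.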
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct LanAddressPointer {
@@ -1672,28 +1853,81 @@ pub struct LanAddressPointer {
}
impl fmt::Display for LanAddressPointer {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-       match self {
-           LanAddressPointer {
-               package_id,
-               interface,
-           } => write!(f, "{}: lan-address: {}", package_id, interface),
-       }
+       let LanAddressPointer {
+           package_id,
+           interface,
+       } = self;
+       write!(f, "{}: lan-address: {}", package_id, interface)
    }
}
impl LanAddressPointer {
-   async fn deref<Db: DbHandle>(&self, db: &mut Db) -> Result<Value, ConfigurationError> {
-       let addr = crate::db::DatabaseModel::new()
-           .package_data()
-           .idx_model(&self.package_id)
-           .and_then(|pde| pde.installed())
-           .and_then(|installed| installed.interface_addresses().idx_model(&self.interface))
-           .and_then(|addresses| addresses.lan_address())
-           .get(db, true)
-           .await
-           .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?;
+   async fn deref<Db: DbHandle>(
+       &self,
+       db: &mut Db,
+       receipts: &InterfaceAddressesReceipt,
+   ) -> Result<Value, ConfigurationError> {
+       let addr = receipts
+           .interface_addresses
+           .get(db, (&self.package_id, &self.interface))
+           .await
+           .ok()
+           .flatten()
+           .and_then(|x| x.lan_address);
        Ok(addr.to_owned().map(Value::String).unwrap_or(Value::Null))
    }
}
pub struct ConfigPointerReceipts {
interface_addresses_receipt: InterfaceAddressesReceipt,
manifest_volumes: LockReceipt<crate::volume::Volumes, String>,
manifest_version: LockReceipt<crate::util::Version, String>,
config_actions: LockReceipt<super::action::ConfigActions, String>,
}
impl ConfigPointerReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
let interface_addresses_receipt = InterfaceAddressesReceipt::setup(locks);
let manifest_volumes = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.manifest().volumes())
.make_locker(LockType::Read)
.add_to_keys(locks);
let manifest_version = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.manifest().version())
.make_locker(LockType::Read)
.add_to_keys(locks);
let config_actions = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.and_then(|x| x.manifest().config())
.make_locker(LockType::Read)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
interface_addresses_receipt: interface_addresses_receipt(skeleton_key)?,
manifest_volumes: manifest_volumes.verify(skeleton_key)?,
config_actions: config_actions.verify(skeleton_key)?,
manifest_version: manifest_version.verify(skeleton_key)?,
})
}
}
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct ConfigPointer {
@@ -1710,40 +1944,22 @@ impl ConfigPointer {
    ctx: &RpcContext,
    db: &mut Db,
    config_overrides: &BTreeMap<PackageId, Config>,
+   receipts: &ConfigPointerReceipts,
) -> Result<Value, ConfigurationError> {
    if let Some(cfg) = config_overrides.get(&self.package_id) {
        Ok(self.select(&Value::Object(cfg.clone())))
    } else {
-       let manifest_model: OptionModel<Manifest> = crate::db::DatabaseModel::new()
-           .package_data()
-           .idx_model(&self.package_id)
-           .and_then(|pde| pde.installed())
-           .map(|installed| installed.manifest())
-           .into();
-       let version = manifest_model
-           .clone()
-           .map(|manifest| manifest.version())
-           .get(db, true)
-           .await
-           .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?;
-       let cfg_actions = manifest_model
-           .clone()
-           .and_then(|manifest| manifest.config())
-           .get(db, true)
-           .await
-           .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?;
-       let volumes = manifest_model
-           .map(|manifest| manifest.volumes())
-           .get(db, true)
-           .await
-           .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?;
+       let id = &self.package_id;
+       let version = receipts.manifest_version.get(db, id).await.ok().flatten();
+       let cfg_actions = receipts.config_actions.get(db, id).await.ok().flatten();
+       let volumes = receipts.manifest_volumes.get(db, id).await.ok().flatten();
        if let (Some(version), Some(cfg_actions), Some(volumes)) =
-           (&*version, &*cfg_actions, &*volumes)
+           (&version, &cfg_actions, &volumes)
        {
            let cfg_res = cfg_actions
-               .get(&ctx, &self.package_id, version, volumes)
+               .get(ctx, &self.package_id, version, volumes)
                .await
-               .map_err(|e| ConfigurationError::SystemError(Error::from(e)))?;
+               .map_err(|e| ConfigurationError::SystemError(e))?;
            if let Some(cfg) = cfg_res.config {
                Ok(self.select(&Value::Object(cfg)))
            } else {
@@ -1757,13 +1973,12 @@ impl ConfigPointer {
}
impl fmt::Display for ConfigPointer {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
-       match self {
-           ConfigPointer {
-               package_id,
-               selector,
-               ..
-           } => write!(f, "{}: config: {}", package_id, selector),
-       }
+       let ConfigPointer {
+           package_id,
+           selector,
+           ..
+       } = self;
+       write!(f, "{}: config: {}", package_id, selector)
    }
}
@@ -1837,7 +2052,7 @@ impl TorKeyPointer {
    async fn deref(
        &self,
        source_package: &PackageId,
-       secrets: &SqlitePool,
+       secrets: &PgPool,
    ) -> Result<Value, ConfigurationError> {
        if &self.package_id != source_package {
            return Err(ConfigurationError::PermissionDenied(
@@ -1845,7 +2060,7 @@ impl TorKeyPointer {
        ));
    }
    let x = sqlx::query!(
-       "SELECT key FROM tor WHERE package = ? AND interface = ?",
+       "SELECT key FROM tor WHERE package = $1 AND interface = $2",
        *self.package_id,
        *self.interface
    )
@@ -1909,6 +2124,8 @@ impl ValueSpec for SystemPointerSpec {
    _manifest: &Manifest,
    _config_overrides: &BTreeMap<PackageId, Config>,
    value: &mut Value,
+   _receipts: &ConfigPointerReceipts,
) -> Result<(), ConfigurationError> {
    *value = self.deref(db).await?;
    Ok(())
@@ -1926,3 +2143,42 @@ impl ValueSpec for SystemPointerSpec {
    false
}
}
#[test]
fn invalid_regex_produces_error() {
assert!(
serde_yaml::from_reader::<_, ConfigSpec>(std::io::Cursor::new(include_bytes!(
"../../test/config-spec/lnd-invalid-regex.yaml"
)))
.is_err()
)
}
#[test]
fn missing_pattern_description_produces_error() {
assert!(
serde_yaml::from_reader::<_, ConfigSpec>(std::io::Cursor::new(include_bytes!(
"../../test/config-spec/lnd-missing-pattern-description.yaml"
)))
.is_err()
)
}
#[test]
fn missing_pattern_produces_error() {
assert!(
serde_yaml::from_reader::<_, ConfigSpec>(std::io::Cursor::new(include_bytes!(
"../../test/config-spec/lnd-missing-pattern.yaml"
)))
.is_err()
)
}
#[test]
fn regex_control() {
let spec = serde_yaml::from_reader::<_, ConfigSpec>(std::io::Cursor::new(include_bytes!(
"../../test/config-spec/lnd-correct.yaml"
)))
.unwrap();
println!("{}", serde_json::to_string_pretty(&spec).unwrap());
}

View File

@@ -16,7 +16,7 @@ impl CharSet {
    self.0.iter().any(|r| r.0.contains(c))
}
pub fn gen<R: Rng>(&self, rng: &mut R) -> char {
-   let mut idx = rng.gen_range(0, self.1);
+   let mut idx = rng.gen_range(0..self.1);
    for r in &self.0 {
        if idx < r.1 {
            return std::convert::TryFrom::try_from(
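Besides the rand 0.8 API change (`gen_range(0, n)` became `gen_range(0..n)`), this hunk shows the cumulative walk `CharSet::gen` does once an index has been drawn: each range carries its size, and the index is narrowed range by range. A std-only sketch with stand-in types (`(start_char, len)` pairs instead of the crate's `CharSet`):

```rust
/// Map a flat index into a list of (start char, count) ranges,
/// the way CharSet::gen narrows `idx` range by range.
fn pick(ranges: &[(char, u32)], mut idx: u32) -> Option<char> {
    for &(start, len) in ranges {
        if idx < len {
            // index falls inside this range: offset from its start char
            return char::from_u32(start as u32 + idx);
        }
        // otherwise skip past this range and keep narrowing
        idx -= len;
    }
    None // idx exceeded the total count
}
```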
@@ -147,6 +147,44 @@ impl serde::ser::Serialize for CharSet {
}
}
pub trait MergeWith {
fn merge_with(&mut self, other: &serde_json::Value);
}
impl MergeWith for serde_json::Value {
fn merge_with(&mut self, other: &serde_json::Value) {
use serde_json::Value::Object;
if let (Object(orig), Object(ref other)) = (self, other) {
for (key, val) in other.into_iter() {
match (orig.get_mut(key), val) {
(Some(new_orig @ Object(_)), other @ Object(_)) => {
new_orig.merge_with(other);
}
(None, _) => {
orig.insert(key.clone(), val.clone());
}
_ => (),
}
}
}
}
}
#[test]
fn merge_with_tests() {
use serde_json::json;
let mut a = json!(
{"a": 1, "c": {"d": "123"}, "i": [1,2,3], "j": {}, "k":[1,2,3], "l": "test"}
);
a.merge_with(
&json!({"a":"a", "b": "b", "c":{"d":"d", "e":"e"}, "f":{"g":"g"}, "h": [1,2,3], "i":"i", "j":[1,2,3], "k":{}}),
);
assert_eq!(
a,
json!({"a": 1, "c": {"d": "123", "e":"e"}, "b":"b", "f": {"g":"g"}, "h":[1,2,3], "i":[1,2,3], "j": {}, "k":[1,2,3], "l": "test"})
)
}
pub mod serde_regex {
use regex::Regex;
use serde::*;

View File

@@ -15,6 +15,7 @@ use rpc_toolkit::Context;
use serde::Deserialize;
use tracing::instrument;
+use crate::util::config::{load_config_from_paths, local_config_path};
use crate::ResultExt;
#[derive(Debug, Default, Deserialize)]
@@ -23,6 +24,7 @@ pub struct CliContextConfig {
    pub bind_rpc: Option<SocketAddr>,
    pub host: Option<Url>,
    #[serde(deserialize_with = "crate::util::serde::deserialize_from_str_opt")]
+   #[serde(default)]
    pub proxy: Option<Url>,
    pub cookie_path: Option<PathBuf>,
}
@@ -38,6 +40,10 @@ pub struct CliContextSeed {
impl Drop for CliContextSeed {
    fn drop(&mut self) {
        let tmp = format!("{}.tmp", self.cookie_path.display());
+       let parent_dir = self.cookie_path.parent().unwrap_or(Path::new("/"));
+       if !parent_dir.exists() {
+           std::fs::create_dir_all(&parent_dir).unwrap();
+       }
        let mut writer = fd_lock_rs::FdLock::lock(
            File::create(&tmp).unwrap(),
            fd_lock_rs::LockType::Exclusive,
@@ -60,16 +66,16 @@ impl CliContext {
    /// BLOCKING
    #[instrument(skip(matches))]
    pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
-       let cfg_path = Path::new(matches.value_of("config").unwrap_or(crate::CONFIG_PATH));
-       let base = if cfg_path.exists() {
-           serde_yaml::from_reader(
-               File::open(cfg_path)
-                   .with_ctx(|_| (crate::ErrorKind::Filesystem, cfg_path.display().to_string()))?,
-           )
-           .with_kind(crate::ErrorKind::Deserialization)?
-       } else {
-           CliContextConfig::default()
-       };
+       let local_config_path = local_config_path();
+       let base: CliContextConfig = load_config_from_paths(
+           matches
+               .values_of("config")
+               .into_iter()
+               .flatten()
+               .map(|p| Path::new(p))
+               .chain(local_config_path.as_deref().into_iter())
+               .chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
+       )?;
        let mut url = if let Some(host) = matches.value_of("host") {
            host.parse()?
        } else if let Some(host) = base.host {
@@ -88,7 +94,9 @@ impl CliContext {
        };
        let cookie_path = base.cookie_path.unwrap_or_else(|| {
-           cfg_path
+           local_config_path
+               .as_deref()
+               .unwrap_or_else(|| Path::new(crate::util::config::CONFIG_PATH))
                .parent()
                .unwrap_or(Path::new("/"))
                .join(".cookies.json")
@@ -149,3 +157,13 @@ impl Context for CliContext {
    &self.0.client
}
}
/// Regression test: a config with no proxy key must still deserialize; an absent proxy used to break parsing
#[test]
fn test_cli_proxy_empty() {
serde_yaml::from_str::<CliContextConfig>(
"
bind_rpc:
",
)
.unwrap();
}

View File

@@ -28,7 +28,7 @@ impl DiagnosticContextConfig {
    let cfg_path = path
        .as_ref()
        .map(|p| p.as_ref())
-       .unwrap_or(Path::new(crate::CONFIG_PATH));
+       .unwrap_or(Path::new(crate::util::config::CONFIG_PATH));
    if let Some(f) = File::maybe_open(cfg_path)
        .await
        .with_ctx(|_| (crate::ErrorKind::Filesystem, cfg_path.display().to_string()))?

View File

@@ -7,24 +7,24 @@ use std::sync::Arc;
use std::time::Duration;
use bollard::Docker;
-use color_eyre::eyre::eyre;
+use helpers::to_tmp_path;
use patch_db::json_ptr::JsonPointer;
-use patch_db::{DbHandle, LockType, PatchDb, Revision};
+use patch_db::{DbHandle, LockReceipt, LockType, PatchDb, Revision};
use reqwest::Url;
use rpc_toolkit::url::Host;
use rpc_toolkit::Context;
use serde::Deserialize;
-use sqlx::sqlite::SqliteConnectOptions;
-use sqlx::SqlitePool;
+use sqlx::postgres::PgConnectOptions;
+use sqlx::PgPool;
use tokio::fs::File;
use tokio::process::Command;
use tokio::sync::{broadcast, oneshot, Mutex, RwLock};
use tracing::instrument;
-use crate::core::rpc_continuations::{RequestGuid, RpcContinuation};
+use crate::core::rpc_continuations::{RequestGuid, RestHandler, RpcContinuation};
use crate::db::model::{Database, InstalledPackageDataEntry, PackageDataEntry};
-use crate::hostname::{derive_hostname, derive_id, get_product_key};
+use crate::init::{init_postgres, pgloader};
-use crate::install::cleanup::{cleanup_failed, uninstall};
+use crate::install::cleanup::{cleanup_failed, uninstall, CleanupFailedReceipts};
use crate::manager::ManagerMap;
use crate::middleware::auth::HashSessionToken;
use crate::net::tor::os_key;
@@ -36,16 +36,19 @@ use crate::shutdown::Shutdown;
use crate::status::{MainStatus, Status};
use crate::util::io::from_yaml_async_reader;
use crate::util::{AsyncFileExt, Invoke};
-use crate::{Error, ResultExt};
+use crate::{Error, ErrorKind, ResultExt};
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct RpcContextConfig {
+   pub migration_batch_rows: Option<usize>,
+   pub migration_prefetch_rows: Option<usize>,
    pub bind_rpc: Option<SocketAddr>,
    pub bind_ws: Option<SocketAddr>,
    pub bind_static: Option<SocketAddr>,
    pub tor_control: Option<SocketAddr>,
    pub tor_socks: Option<SocketAddr>,
+   pub dns_bind: Option<Vec<SocketAddr>>,
    pub revision_cache_size: Option<usize>,
    pub datadir: Option<PathBuf>,
    pub log_server: Option<Url>,
@@ -55,7 +58,7 @@ impl RpcContextConfig {
    let cfg_path = path
        .as_ref()
        .map(|p| p.as_ref())
-       .unwrap_or(Path::new(crate::CONFIG_PATH));
+       .unwrap_or(Path::new(crate::util::config::CONFIG_PATH));
    if let Some(f) = File::maybe_open(cfg_path)
        .await
        .with_ctx(|_| (crate::ErrorKind::Filesystem, cfg_path.display().to_string()))?
@@ -70,41 +73,42 @@ impl RpcContextConfig {
        .as_deref()
        .unwrap_or_else(|| Path::new("/embassy-data"))
    }
-   pub async fn db(&self, secret_store: &SqlitePool, product_key: &str) -> Result<PatchDb, Error> {
-       let sid = derive_id(product_key);
-       let hostname = derive_hostname(&sid);
+   pub async fn db(&self, secret_store: &PgPool) -> Result<PatchDb, Error> {
        let db_path = self.datadir().join("main").join("embassy.db");
        let db = PatchDb::open(&db_path)
            .await
            .with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?;
-       if !db.exists(&<JsonPointer>::default()).await? {
+       if !db.exists(&<JsonPointer>::default()).await {
            db.put(
                &<JsonPointer>::default(),
                &Database::init(
-                   sid,
-                   &hostname,
                    &os_key(&mut secret_store.acquire().await?).await?,
                    password_hash(&mut secret_store.acquire().await?).await?,
                ),
-               None,
            )
            .await?;
        }
        Ok(db)
    }
    #[instrument]
-   pub async fn secret_store(&self) -> Result<SqlitePool, Error> {
-       let secret_store = SqlitePool::connect_with(
-           SqliteConnectOptions::new()
-               .filename(self.datadir().join("main").join("secrets.db"))
-               .create_if_missing(true)
-               .busy_timeout(Duration::from_secs(30)),
-       )
-       .await?;
+   pub async fn secret_store(&self) -> Result<PgPool, Error> {
+       init_postgres(self.datadir()).await?;
+       let secret_store =
+           PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
+               .await?;
        sqlx::migrate!()
            .run(&secret_store)
            .await
            .with_kind(crate::ErrorKind::Database)?;
+       let old_db_path = self.datadir().join("main/secrets.db");
+       if tokio::fs::metadata(&old_db_path).await.is_ok() {
+           pgloader(
+               &old_db_path,
+               self.migration_batch_rows.unwrap_or(25000),
+               self.migration_prefetch_rows.unwrap_or(100_000),
+           )
+           .await?;
+       }
        Ok(secret_store)
    }
}
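The `secret_store()` change migrates the old SQLite secrets database into Postgres only when the legacy file is still on disk, with configurable batch/prefetch sizes falling back to defaults. A std-only sketch of just that gating decision (the function name and tuple return are illustrative; the defaults mirror the diff):

```rust
use std::path::Path;

/// Decide whether a pgloader migration should run, and with what row sizes.
/// Returns None when the legacy SQLite file is absent (nothing to migrate).
fn migration_plan(
    old_db: &Path,
    batch_rows: Option<usize>,
    prefetch_rows: Option<usize>,
) -> Option<(usize, usize)> {
    if old_db.exists() {
        Some((batch_rows.unwrap_or(25_000), prefetch_rows.unwrap_or(100_000)))
    } else {
        None
    }
}
```

Keeping the check on the old file (rather than a flag) makes the migration naturally one-shot: once the SQLite file is removed, subsequent boots skip pgloader entirely.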
@@ -117,7 +121,7 @@ pub struct RpcContextSeed {
    pub datadir: PathBuf,
    pub disk_guid: Arc<String>,
    pub db: PatchDb,
-   pub secret_store: SqlitePool,
+   pub secret_store: PgPool,
    pub docker: Docker,
    pub net_controller: NetController,
    pub managers: ManagerMap,
@@ -132,6 +136,71 @@ pub struct RpcContextSeed {
    pub wifi_manager: Arc<RwLock<WpaCli>>,
}
pub struct RpcCleanReceipts {
cleanup_receipts: CleanupFailedReceipts,
packages: LockReceipt<crate::db::model::AllPackageData, ()>,
package: LockReceipt<crate::db::model::PackageDataEntry, String>,
}
impl RpcCleanReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
let cleanup_receipts = CleanupFailedReceipts::setup(locks);
let packages = crate::db::DatabaseModel::new()
.package_data()
.make_locker(LockType::Write)
.add_to_keys(locks);
let package = crate::db::DatabaseModel::new()
.package_data()
.star()
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
cleanup_receipts: cleanup_receipts(skeleton_key)?,
packages: packages.verify(skeleton_key)?,
package: package.verify(skeleton_key)?,
})
}
}
}
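`RpcCleanReceipts` follows the two-phase receipt pattern used throughout this change: `setup` registers every lock target into one shared list, `lock_all` acquires them in a single call, and the returned closure verifies each target against the acquired guard. A simplified std-only sketch of that shape (`String` targets stand in for `patch_db::LockTargetId`, `Verifier` for the skeleton key):

```rust
// A stand-in "skeleton key": the set of targets that were actually locked.
struct Verifier {
    locked: Vec<String>,
}

// Phase 1: register the lock this receipt needs. Phase 2: the returned
// closure checks its target against whatever lock_all acquired.
fn make_locker(
    target: &str,
    locks: &mut Vec<String>,
) -> impl Fn(&Verifier) -> Result<String, String> {
    let t = target.to_string();
    locks.push(t.clone());
    move |v: &Verifier| {
        if v.locked.contains(&t) {
            Ok(t.clone())
        } else {
            Err(format!("not locked: {}", t))
        }
    }
}

// One lock acquisition for every registered target.
fn lock_all(locks: Vec<String>) -> Verifier {
    Verifier { locked: locks }
}
```

Splitting registration from verification is what lets several receipts share a single `db.lock_all(locks)` call instead of each taking its own lock.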
pub struct RpcSetNginxReceipts {
server_info: LockReceipt<crate::db::model::ServerInfo, ()>,
}
impl RpcSetNginxReceipts {
pub async fn new(db: &'_ mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
let server_info = crate::db::DatabaseModel::new()
.server_info()
.make_locker(LockType::Read)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
server_info: server_info.verify(skeleton_key)?,
})
}
}
}
#[derive(Clone)] #[derive(Clone)]
pub struct RpcContext(Arc<RpcContextSeed>); pub struct RpcContext(Arc<RpcContextSeed>);
impl RpcContext { impl RpcContext {
@@ -148,18 +217,24 @@ impl RpcContext {
))); )));
let (shutdown, _) = tokio::sync::broadcast::channel(1); let (shutdown, _) = tokio::sync::broadcast::channel(1);
let secret_store = base.secret_store().await?; let secret_store = base.secret_store().await?;
tracing::info!("Opened Sqlite DB"); tracing::info!("Opened Pg DB");
let db = base.db(&secret_store, &get_product_key().await?).await?; let db = base.db(&secret_store).await?;
tracing::info!("Opened PatchDB"); tracing::info!("Opened PatchDB");
let docker = Docker::connect_with_unix_defaults()?; let mut docker = Docker::connect_with_unix_defaults()?;
docker.set_timeout(Duration::from_secs(600));
tracing::info!("Connected to Docker"); tracing::info!("Connected to Docker");
let net_controller = NetController::init( let net_controller = NetController::init(
([127, 0, 0, 1], 80).into(), ([127, 0, 0, 1], 80).into(),
crate::net::tor::os_key(&mut secret_store.acquire().await?).await?, crate::net::tor::os_key(&mut secret_store.acquire().await?).await?,
base.tor_control base.tor_control
.unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))), .unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
base.dns_bind
.as_ref()
.map(|v| v.as_slice())
.unwrap_or(&[SocketAddr::from(([127, 0, 0, 1], 53))]),
secret_store.clone(), secret_store.clone(),
None, None,
&mut db.handle(),
) )
.await?; .await?;
tracing::info!("Initialized Net Controller"); tracing::info!("Initialized Net Controller");
@@ -203,13 +278,15 @@ impl RpcContext {
tracing::info!("Initialized Package Managers"); tracing::info!("Initialized Package Managers");
Ok(res) Ok(res)
} }
#[instrument(skip(self, db))]
pub async fn set_nginx_conf<Db: DbHandle>(&self, db: &mut Db) -> Result<(), Error> { #[instrument(skip(self, db, receipts))]
pub async fn set_nginx_conf<Db: DbHandle>(
&self,
db: &mut Db,
receipts: RpcSetNginxReceipts,
) -> Result<(), Error> {
tokio::fs::write("/etc/nginx/sites-available/default", { tokio::fs::write("/etc/nginx/sites-available/default", {
let info = crate::db::DatabaseModel::new() let info = receipts.server_info.get(db).await?;
.server_info()
.get(db, true)
.await?;
format!( format!(
include_str!("../nginx/main-ui.conf.template"), include_str!("../nginx/main-ui.conf.template"),
lan_hostname = info.lan_address.host_str().unwrap(), lan_hostname = info.lan_address.host_str().unwrap(),
@@ -237,34 +314,19 @@ impl RpcContext {
self.is_closed.store(true, Ordering::SeqCst); self.is_closed.store(true, Ordering::SeqCst);
Ok(()) Ok(())
} }
#[instrument(skip(self))] #[instrument(skip(self))]
pub async fn cleanup(&self) -> Result<(), Error> { pub async fn cleanup(&self) -> Result<(), Error> {
let mut db = self.db.handle(); let mut db = self.db.handle();
crate::db::DatabaseModel::new() let receipts = RpcCleanReceipts::new(&mut db).await?;
.package_data() for (package_id, package) in receipts.packages.get(&mut db).await?.0 {
.lock(&mut db, LockType::Write)
.await?;
for package_id in crate::db::DatabaseModel::new()
.package_data()
.keys(&mut db, true)
.await?
{
if let Err(e) = async { if let Err(e) = async {
let mut pde = crate::db::DatabaseModel::new() match package {
.package_data()
.idx_model(&package_id)
.get_mut(&mut db)
.await?;
match pde.as_mut().ok_or_else(|| {
Error::new(
eyre!("Node does not exist: /package-data/{}", package_id),
crate::ErrorKind::Database,
)
})? {
PackageDataEntry::Installing { .. } PackageDataEntry::Installing { .. }
| PackageDataEntry::Restoring { .. } | PackageDataEntry::Restoring { .. }
| PackageDataEntry::Updating { .. } => { | PackageDataEntry::Updating { .. } => {
cleanup_failed(self, &mut db, &package_id).await?; cleanup_failed(self, &mut db, &package_id, &receipts.cleanup_receipts)
.await?;
} }
PackageDataEntry::Removing { .. } => { PackageDataEntry::Removing { .. } => {
uninstall( uninstall(
@@ -276,30 +338,48 @@ impl RpcContext {
.await?; .await?;
} }
PackageDataEntry::Installed { PackageDataEntry::Installed {
installed: installed,
InstalledPackageDataEntry { static_files,
status: Status { main, .. }, manifest,
..
},
..
} => { } => {
let new_main = match std::mem::replace( for (volume_id, volume_info) in &*manifest.volumes {
main, let tmp_path = to_tmp_path(volume_info.path_for(
MainStatus::Stopped, /* placeholder */ &self.datadir,
) { &package_id,
&manifest.version,
&volume_id,
))
.with_kind(ErrorKind::Filesystem)?;
if tokio::fs::metadata(&tmp_path).await.is_ok() {
tokio::fs::remove_dir_all(&tmp_path).await?;
}
}
let status = installed.status;
let main = match status.main {
MainStatus::BackingUp { started, .. } => { MainStatus::BackingUp { started, .. } => {
if let Some(_) = started { if let Some(_) = started {
MainStatus::Starting MainStatus::Starting { restarting: false }
} else { } else {
MainStatus::Stopped MainStatus::Stopped
} }
} }
MainStatus::Running { .. } => MainStatus::Starting, MainStatus::Running { .. } => {
a => a, MainStatus::Starting { restarting: false }
}
a => a.clone(),
}; };
*main = new_main; let new_package = PackageDataEntry::Installed {
installed: InstalledPackageDataEntry {
pde.save(&mut db).await?; status: Status { main, ..status },
..installed
},
static_files,
manifest,
};
receipts
.package
.set(&mut db, new_package, &package_id)
.await?;
} }
} }
Ok::<_, Error>(()) Ok::<_, Error>(())
@@ -312,6 +392,58 @@ impl RpcContext {
} }
Ok(()) Ok(())
} }
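The cleanup pass above maps any status that was mid-flight when the daemon died back to a stable one. The transition table can be sketched with a stand-in enum for `crate::status::MainStatus` (the real `Starting` variant now also carries a `restarting` flag):

```rust
// Stand-in for crate::status::MainStatus, reduced to what cleanup cares about.
#[derive(Debug, PartialEq)]
enum Main {
    Stopped,
    Starting,
    Running,
    BackingUp { started: bool },
}

// Anything interrupted mid-flight is normalized: a backup of a running
// service resumes as Starting, a backup of a stopped one stays Stopped,
// and a previously Running service is restarted.
fn normalize(m: Main) -> Main {
    match m {
        Main::BackingUp { started: true } => Main::Starting,
        Main::BackingUp { started: false } => Main::Stopped,
        Main::Running => Main::Starting,
        a => a,
    }
}
```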
#[instrument(skip(self))]
pub async fn clean_continuations(&self) {
let mut continuations = self.rpc_stream_continuations.lock().await;
let mut to_remove = Vec::new();
for (guid, cont) in &*continuations {
if cont.is_timed_out() {
to_remove.push(guid.clone());
}
}
for guid in to_remove {
continuations.remove(&guid);
}
}
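`clean_continuations` uses the collect-then-remove sweep: expired keys are gathered first, then removed, so the map is never mutated while being iterated. A synchronous sketch of the same pattern over a plain `HashMap` (the `(Instant, V)` layout is an assumption; the real code asks each `RpcContinuation` whether it timed out):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::time::{Duration, Instant};

// Remove every entry older than `ttl`. Collecting the keys first avoids
// invalidating the iterator by removing during iteration.
fn sweep_expired<K: Clone + Eq + Hash, V>(map: &mut HashMap<K, (Instant, V)>, ttl: Duration) {
    let now = Instant::now();
    let expired: Vec<K> = map
        .iter()
        .filter(|(_, (created, _))| now.duration_since(*created) > ttl)
        .map(|(k, _)| k.clone())
        .collect();
    for k in expired {
        map.remove(&k);
    }
}
```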
#[instrument(skip(self, handler))]
pub async fn add_continuation(&self, guid: RequestGuid, handler: RpcContinuation) {
self.clean_continuations().await;
self.rpc_stream_continuations
.lock()
.await
.insert(guid, handler);
}
pub async fn get_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let mut continuations = self.rpc_stream_continuations.lock().await;
if let Some(cont) = continuations.remove(guid) {
cont.into_handler().await
} else {
None
}
}
pub async fn get_ws_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let continuations = self.rpc_stream_continuations.lock().await;
if matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) {
drop(continuations);
self.get_continuation_handler(guid).await
} else {
None
}
}
pub async fn get_rest_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let continuations = self.rpc_stream_continuations.lock().await;
if matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) {
drop(continuations);
self.get_continuation_handler(guid).await
} else {
None
}
}
} }
impl Context for RpcContext { impl Context for RpcContext {
fn host(&self) -> Host<&str> { fn host(&self) -> Host<&str> {


@@ -1,5 +1,3 @@
use std::fs::File;
use std::io::Read;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc; use std::sync::Arc;
@@ -9,6 +7,7 @@ use rpc_toolkit::Context;
use serde::Deserialize; use serde::Deserialize;
use tracing::instrument; use tracing::instrument;
use crate::util::config::{load_config_from_paths, local_config_path};
use crate::{Error, ResultExt}; use crate::{Error, ResultExt};
#[derive(Debug, Default, Deserialize)] #[derive(Debug, Default, Deserialize)]
@@ -28,22 +27,24 @@ impl SdkContext {
/// BLOCKING /// BLOCKING
#[instrument(skip(matches))] #[instrument(skip(matches))]
pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> { pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
let cfg_path = Path::new(matches.value_of("config").unwrap_or(crate::CONFIG_PATH)); let local_config_path = local_config_path();
let base = if cfg_path.exists() { let base: SdkContextConfig = load_config_from_paths(
serde_yaml::from_reader( matches
File::open(cfg_path) .values_of("config")
.with_ctx(|_| (crate::ErrorKind::Filesystem, cfg_path.display().to_string()))?, .into_iter()
) .flatten()
.with_kind(crate::ErrorKind::Deserialization)? .map(|p| Path::new(p))
} else { .chain(local_config_path.as_deref().into_iter())
SdkContextConfig::default() .chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
}; )?;
Ok(SdkContext(Arc::new(SdkContextSeed { Ok(SdkContext(Arc::new(SdkContextSeed {
developer_key_path: base.developer_key_path.unwrap_or_else(|| { developer_key_path: base.developer_key_path.unwrap_or_else(|| {
cfg_path local_config_path
.as_deref()
.unwrap_or_else(|| Path::new(crate::util::config::CONFIG_PATH))
.parent() .parent()
.unwrap_or(Path::new("/")) .unwrap_or(Path::new("/"))
.join(".developer_key") .join("developer.key.pem")
}), }),
}))) })))
} }
@@ -53,9 +54,17 @@ impl SdkContext {
if !self.developer_key_path.exists() { if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `embassy-sdk init` before running this command."), crate::ErrorKind::Uninitialized)); return Err(Error::new(eyre!("Developer Key does not exist! Please run `embassy-sdk init` before running this command."), crate::ErrorKind::Uninitialized));
} }
let mut keypair_buf = [0; ed25519_dalek::KEYPAIR_LENGTH]; let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
File::open(&self.developer_key_path)?.read_exact(&mut keypair_buf)?; &std::fs::read_to_string(&self.developer_key_path)?,
Ok(ed25519_dalek::Keypair::from_bytes(&keypair_buf)?) )
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::from_bytes(&pair.secret_key[..])?;
let public = if let Some(public) = pair.public_key {
ed25519_dalek::PublicKey::from_bytes(&public[..])?
} else {
(&secret).into()
};
Ok(ed25519_dalek::Keypair { secret, public })
} }
} }
impl std::ops::Deref for SdkContext { impl std::ops::Deref for SdkContext {


@@ -2,15 +2,15 @@ use std::net::{IpAddr, SocketAddr};
use std::ops::Deref; use std::ops::Deref;
use std::path::{Path, PathBuf}; use std::path::{Path, PathBuf};
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration;
use josekit::jwk::Jwk;
use patch_db::json_ptr::JsonPointer; use patch_db::json_ptr::JsonPointer;
use patch_db::PatchDb; use patch_db::PatchDb;
use rpc_toolkit::yajrc::RpcError; use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::Context; use rpc_toolkit::Context;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use sqlx::sqlite::SqliteConnectOptions; use sqlx::postgres::PgConnectOptions;
use sqlx::SqlitePool; use sqlx::PgPool;
use tokio::fs::File; use tokio::fs::File;
use tokio::sync::broadcast::Sender; use tokio::sync::broadcast::Sender;
use tokio::sync::RwLock; use tokio::sync::RwLock;
@@ -18,7 +18,7 @@ use tracing::instrument;
use url::Host; use url::Host;
use crate::db::model::Database; use crate::db::model::Database;
use crate::hostname::{derive_hostname, derive_id, get_product_key}; use crate::init::{init_postgres, pgloader};
use crate::net::tor::os_key; use crate::net::tor::os_key;
use crate::setup::{password_hash, RecoveryStatus}; use crate::setup::{password_hash, RecoveryStatus};
use crate::util::io::from_yaml_async_reader; use crate::util::io::from_yaml_async_reader;
@@ -36,6 +36,8 @@ pub struct SetupResult {
#[derive(Debug, Default, Deserialize)] #[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")] #[serde(rename_all = "kebab-case")]
pub struct SetupContextConfig { pub struct SetupContextConfig {
pub migration_batch_rows: Option<usize>,
pub migration_prefetch_rows: Option<usize>,
pub bind_rpc: Option<SocketAddr>, pub bind_rpc: Option<SocketAddr>,
pub datadir: Option<PathBuf>, pub datadir: Option<PathBuf>,
} }
@@ -45,7 +47,7 @@ impl SetupContextConfig {
let cfg_path = path let cfg_path = path
.as_ref() .as_ref()
.map(|p| p.as_ref()) .map(|p| p.as_ref())
.unwrap_or(Path::new(crate::CONFIG_PATH)); .unwrap_or(Path::new(crate::util::config::CONFIG_PATH));
if let Some(f) = File::maybe_open(cfg_path) if let Some(f) = File::maybe_open(cfg_path)
.await .await
.with_ctx(|_| (crate::ErrorKind::Filesystem, cfg_path.display().to_string()))? .with_ctx(|_| (crate::ErrorKind::Filesystem, cfg_path.display().to_string()))?
@@ -64,15 +66,26 @@ impl SetupContextConfig {
pub struct SetupContextSeed { pub struct SetupContextSeed {
pub config_path: Option<PathBuf>, pub config_path: Option<PathBuf>,
pub migration_batch_rows: usize,
pub migration_prefetch_rows: usize,
pub bind_rpc: SocketAddr, pub bind_rpc: SocketAddr,
pub shutdown: Sender<()>, pub shutdown: Sender<()>,
pub datadir: PathBuf, pub datadir: PathBuf,
/// Used to encrypt the password created during setup, hiding it from snoopers
/// Set via path
pub current_secret: Arc<Jwk>,
pub selected_v2_drive: RwLock<Option<PathBuf>>, pub selected_v2_drive: RwLock<Option<PathBuf>>,
pub cached_product_key: RwLock<Option<Arc<String>>>, pub cached_product_key: RwLock<Option<Arc<String>>>,
pub recovery_status: RwLock<Option<Result<RecoveryStatus, RpcError>>>, pub recovery_status: RwLock<Option<Result<RecoveryStatus, RpcError>>>,
pub setup_result: RwLock<Option<(Arc<String>, SetupResult)>>, pub setup_result: RwLock<Option<(Arc<String>, SetupResult)>>,
} }
impl AsRef<Jwk> for SetupContextSeed {
fn as_ref(&self) -> &Jwk {
&self.current_secret
}
}
#[derive(Clone)] #[derive(Clone)]
pub struct SetupContext(Arc<SetupContextSeed>); pub struct SetupContext(Arc<SetupContextSeed>);
impl SetupContext { impl SetupContext {
@@ -83,9 +96,21 @@ impl SetupContext {
let datadir = cfg.datadir().to_owned(); let datadir = cfg.datadir().to_owned();
Ok(Self(Arc::new(SetupContextSeed { Ok(Self(Arc::new(SetupContextSeed {
config_path: path.as_ref().map(|p| p.as_ref().to_owned()), config_path: path.as_ref().map(|p| p.as_ref().to_owned()),
migration_batch_rows: cfg.migration_batch_rows.unwrap_or(25000),
migration_prefetch_rows: cfg.migration_prefetch_rows.unwrap_or(100_000),
bind_rpc: cfg.bind_rpc.unwrap_or(([127, 0, 0, 1], 5959).into()), bind_rpc: cfg.bind_rpc.unwrap_or(([127, 0, 0, 1], 5959).into()),
shutdown, shutdown,
datadir, datadir,
current_secret: Arc::new(
Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {
tracing::debug!("{:?}", e);
tracing::error!("Couldn't generate ec key");
Error::new(
color_eyre::eyre::eyre!("Couldn't generate ec key"),
crate::ErrorKind::Unknown,
)
})?,
),
selected_v2_drive: RwLock::new(None), selected_v2_drive: RwLock::new(None),
cached_product_key: RwLock::new(None), cached_product_key: RwLock::new(None),
recovery_status: RwLock::new(None), recovery_status: RwLock::new(None),
@@ -93,61 +118,44 @@ impl SetupContext {
}))) })))
} }
#[instrument(skip(self))] #[instrument(skip(self))]
pub async fn db(&self, secret_store: &SqlitePool) -> Result<PatchDb, Error> { pub async fn db(&self, secret_store: &PgPool) -> Result<PatchDb, Error> {
let db_path = self.datadir.join("main").join("embassy.db"); let db_path = self.datadir.join("main").join("embassy.db");
let db = PatchDb::open(&db_path) let db = PatchDb::open(&db_path)
.await .await
.with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?; .with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?;
if !db.exists(&<JsonPointer>::default()).await? { if !db.exists(&<JsonPointer>::default()).await {
let pkey = self.product_key().await?;
let sid = derive_id(&*pkey);
let hostname = derive_hostname(&sid);
db.put( db.put(
&<JsonPointer>::default(), &<JsonPointer>::default(),
&Database::init( &Database::init(
sid,
&hostname,
&os_key(&mut secret_store.acquire().await?).await?, &os_key(&mut secret_store.acquire().await?).await?,
password_hash(&mut secret_store.acquire().await?).await?, password_hash(&mut secret_store.acquire().await?).await?,
), ),
None,
) )
.await?; .await?;
} }
Ok(db) Ok(db)
} }
#[instrument(skip(self))] #[instrument(skip(self))]
pub async fn secret_store(&self) -> Result<SqlitePool, Error> { pub async fn secret_store(&self) -> Result<PgPool, Error> {
let secret_store = SqlitePool::connect_with( init_postgres(&self.datadir).await?;
SqliteConnectOptions::new() let secret_store =
.filename(self.datadir.join("main").join("secrets.db")) PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.create_if_missing(true) .await?;
.busy_timeout(Duration::from_secs(30)),
)
.await?;
sqlx::migrate!() sqlx::migrate!()
.run(&secret_store) .run(&secret_store)
.await .await
.with_kind(crate::ErrorKind::Database)?; .with_kind(crate::ErrorKind::Database)?;
let old_db_path = self.datadir.join("main/secrets.db");
if tokio::fs::metadata(&old_db_path).await.is_ok() {
pgloader(
&old_db_path,
self.migration_batch_rows,
self.migration_prefetch_rows,
)
.await?;
}
Ok(secret_store) Ok(secret_store)
} }
#[instrument(skip(self))]
pub async fn product_key(&self) -> Result<Arc<String>, Error> {
Ok(
if let Some(k) = {
let guard = self.cached_product_key.read().await;
let res = guard.clone();
drop(guard);
res
} {
k
} else {
let k = Arc::new(get_product_key().await?);
*self.cached_product_key.write().await = Some(k.clone());
k
},
)
}
} }
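The deleted `product_key` helper was a read-then-write cache: take the cheap read lock first, and only compute and store when the cache is empty. A blocking `std::sync` sketch of that shape (the original used `tokio::sync::RwLock`, and the helper name here is hypothetical):

```rust
use std::sync::{Arc, RwLock};

// Return the cached value if present; otherwise compute it once, store it,
// and return it. The read guard is dropped before the write lock is taken.
fn get_cached(cache: &RwLock<Option<Arc<String>>>, compute: impl FnOnce() -> String) -> Arc<String> {
    if let Some(k) = cache.read().unwrap().clone() {
        return k;
    }
    let k = Arc::new(compute());
    *cache.write().unwrap() = Some(k.clone());
    k
}
```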
impl Context for SetupContext { impl Context for SetupContext {


@@ -1,61 +1,80 @@
use std::collections::BTreeMap; use std::collections::BTreeMap;
use color_eyre::eyre::eyre; use color_eyre::eyre::eyre;
use patch_db::{DbHandle, LockType}; use patch_db::{DbHandle, LockReceipt, LockType};
use rpc_toolkit::command; use rpc_toolkit::command;
use tracing::instrument; use tracing::instrument;
use crate::context::RpcContext; use crate::context::RpcContext;
use crate::db::util::WithRevision;
use crate::dependencies::{ use crate::dependencies::{
break_all_dependents_transitive, heal_all_dependents_transitive, BreakageRes, DependencyError, break_all_dependents_transitive, heal_all_dependents_transitive, BreakageRes, DependencyError,
TaggedDependencyError, DependencyReceipt, TaggedDependencyError,
}; };
use crate::s9pk::manifest::PackageId; use crate::s9pk::manifest::PackageId;
use crate::status::MainStatus; use crate::status::MainStatus;
use crate::util::display_none; use crate::util::display_none;
use crate::util::serde::display_serializable; use crate::util::serde::display_serializable;
use crate::{Error, ResultExt}; use crate::Error;
#[command(display(display_none))] #[derive(Clone)]
pub struct StartReceipts {
dependency_receipt: DependencyReceipt,
status: LockReceipt<MainStatus, ()>,
version: LockReceipt<crate::util::Version, ()>,
}
impl StartReceipts {
pub async fn new(db: &mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks, id);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
id: &PackageId,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
let dependency_receipt = DependencyReceipt::setup(locks);
let status = crate::db::DatabaseModel::new()
.package_data()
.idx_model(id)
.and_then(|x| x.installed())
.map(|x| x.status().main())
.make_locker(LockType::Write)
.add_to_keys(locks);
let version = crate::db::DatabaseModel::new()
.package_data()
.idx_model(id)
.and_then(|x| x.installed())
.map(|x| x.manifest().version())
.make_locker(LockType::Read)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
dependency_receipt: dependency_receipt(skeleton_key)?,
status: status.verify(skeleton_key)?,
version: version.verify(skeleton_key)?,
})
}
}
}
#[command(display(display_none), metadata(sync_db = true))]
#[instrument(skip(ctx))] #[instrument(skip(ctx))]
pub async fn start( pub async fn start(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
#[context] ctx: RpcContext,
#[arg] id: PackageId,
) -> Result<WithRevision<()>, Error> {
let mut db = ctx.db.handle(); let mut db = ctx.db.handle();
let mut tx = db.begin().await?; let mut tx = db.begin().await?;
crate::db::DatabaseModel::new() let receipts = StartReceipts::new(&mut tx, &id).await?;
.package_data() let version = receipts.version.get(&mut tx).await?;
.lock(&mut tx, LockType::Write) receipts
.status
.set(&mut tx, MainStatus::Starting { restarting: false })
.await?; .await?;
let installed = crate::db::DatabaseModel::new() heal_all_dependents_transitive(&ctx, &mut tx, &id, &receipts.dependency_receipt).await?;
.package_data()
.idx_model(&id)
.and_then(|pkg| pkg.installed())
.expect(&mut tx)
.await
.with_ctx(|_| {
(
crate::ErrorKind::NotFound,
format!("{} is not installed", id),
)
})?;
installed.lock(&mut tx, LockType::Read).await?;
let version = installed
.clone()
.manifest()
.version()
.get(&mut tx, true)
.await?
.to_owned();
let mut status = installed.status().main().get_mut(&mut tx).await?;
*status = MainStatus::Starting; tx.commit().await?;
status.save(&mut tx).await?; drop(receipts);
heal_all_dependents_transitive(&ctx, &mut tx, &id).await?;
let revision = tx.commit(None).await?;
ctx.managers ctx.managers
.get(&(id, version)) .get(&(id, version))
@@ -64,10 +83,41 @@ pub async fn start(
.synchronize() .synchronize()
.await; .await;
Ok(WithRevision { Ok(())
revision, }
response: (), #[derive(Clone)]
}) pub struct StopReceipts {
breaks: crate::dependencies::BreakTransitiveReceipts,
status: LockReceipt<MainStatus, ()>,
}
impl StopReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks, id);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<patch_db::LockTargetId>,
id: &PackageId,
) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
let breaks = crate::dependencies::BreakTransitiveReceipts::setup(locks);
let status = crate::db::DatabaseModel::new()
.package_data()
.idx_model(id)
.and_then(|x| x.installed())
.map(|x| x.status().main())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
breaks: breaks(skeleton_key)?,
status: status.verify(skeleton_key)?,
})
}
}
} }
#[instrument(skip(db))] #[instrument(skip(db))]
@@ -77,32 +127,27 @@ async fn stop_common<Db: DbHandle>(
breakages: &mut BTreeMap<PackageId, TaggedDependencyError>, breakages: &mut BTreeMap<PackageId, TaggedDependencyError>,
) -> Result<(), Error> { ) -> Result<(), Error> {
let mut tx = db.begin().await?; let mut tx = db.begin().await?;
let mut status = crate::db::DatabaseModel::new() let receipts = StopReceipts::new(&mut tx, id).await?;
.package_data() receipts.status.set(&mut tx, MainStatus::Stopping).await?;
.idx_model(&id)
.and_then(|pkg| pkg.installed())
.expect(&mut tx)
.await
.with_ctx(|_| {
(
crate::ErrorKind::NotFound,
format!("{} is not installed", id),
)
})?
.status()
.main()
.get_mut(&mut tx)
.await?;
*status = MainStatus::Stopping;
status.save(&mut tx).await?;
tx.save().await?; tx.save().await?;
break_all_dependents_transitive(db, &id, DependencyError::NotRunning, breakages).await?; break_all_dependents_transitive(
db,
id,
DependencyError::NotRunning,
breakages,
&receipts.breaks,
)
.await?;
Ok(()) Ok(())
} }
#[command(subcommands(self(stop_impl(async)), stop_dry), display(display_none))] #[command(
subcommands(self(stop_impl(async)), stop_dry),
display(display_none),
metadata(sync_db = true)
)]
pub fn stop(#[arg] id: PackageId) -> Result<PackageId, Error> { pub fn stop(#[arg] id: PackageId) -> Result<PackageId, Error> {
Ok(id) Ok(id)
} }
@@ -125,14 +170,38 @@ pub async fn stop_dry(
} }
#[instrument(skip(ctx))] #[instrument(skip(ctx))]
pub async fn stop_impl(ctx: RpcContext, id: PackageId) -> Result<WithRevision<()>, Error> { pub async fn stop_impl(ctx: RpcContext, id: PackageId) -> Result<(), Error> {
let mut db = ctx.db.handle(); let mut db = ctx.db.handle();
let mut tx = db.begin().await?; let mut tx = db.begin().await?;
stop_common(&mut tx, &id, &mut BTreeMap::new()).await?; stop_common(&mut tx, &id, &mut BTreeMap::new()).await?;
Ok(WithRevision { tx.commit().await?;
revision: tx.commit(None).await?,
response: (), Ok(())
}) }
#[command(display(display_none), metadata(sync_db = true))]
pub async fn restart(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
let mut db = ctx.db.handle();
let mut tx = db.begin().await?;
let mut status = crate::db::DatabaseModel::new()
.package_data()
.idx_model(&id)
.and_then(|pde| pde.installed())
.map(|i| i.status().main())
.get_mut(&mut tx)
.await?;
if !matches!(&*status, Some(MainStatus::Running { .. })) {
return Err(Error::new(
eyre!("{} is not running", id),
crate::ErrorKind::InvalidRequest,
));
}
*status = Some(MainStatus::Restarting);
status.save(&mut tx).await?;
tx.commit().await?;
Ok(())
} }


@@ -1,20 +1,27 @@
use std::time::Instant; use std::sync::Arc;
use std::time::Duration;
use futures::future::BoxFuture; use futures::future::BoxFuture;
use http::{Request, Response}; use futures::FutureExt;
use hyper::Body; use helpers::TimedResource;
use hyper::upgrade::Upgraded;
use hyper::{Body, Error as HyperError, Request, Response};
use rand::RngCore; use rand::RngCore;
use tokio::task::JoinError;
use tokio_tungstenite::WebSocketStream;
use crate::{Error, ResultExt};
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, serde::Serialize, serde::Deserialize)] #[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, serde::Serialize, serde::Deserialize)]
pub struct RequestGuid<T: AsRef<str> = String>(T); pub struct RequestGuid<T: AsRef<str> = String>(Arc<T>);
impl RequestGuid { impl RequestGuid {
pub fn new() -> Self { pub fn new() -> Self {
let mut buf = [0; 40]; let mut buf = [0; 40];
rand::thread_rng().fill_bytes(&mut buf); rand::thread_rng().fill_bytes(&mut buf);
RequestGuid(base32::encode( RequestGuid(Arc::new(base32::encode(
base32::Alphabet::RFC4648 { padding: false }, base32::Alphabet::RFC4648 { padding: false },
&buf, &buf,
)) )))
} }
pub fn from(r: &str) -> Option<RequestGuid> { pub fn from(r: &str) -> Option<RequestGuid> {
@@ -26,7 +33,7 @@ impl RequestGuid {
return None; return None;
} }
} }
Some(RequestGuid(r.to_owned())) Some(RequestGuid(Arc::new(r.to_owned())))
} }
} }
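`RequestGuid::new` encodes 40 random bytes with the RFC 4648 base32 alphabet and no padding, which always yields a 64-character string (40 bytes × 8 bits / 5 bits per symbol = 64) — matching the length check in `RequestGuid::from`. A std-only encoder sketch equivalent to what the `base32` crate does here:

```rust
// RFC 4648 base32 alphabet, uppercase, no padding.
const ALPHABET: &[u8; 32] = b"ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

// Pack input bytes into a bit buffer and emit one symbol per 5 bits,
// flushing any leftover bits (left-aligned) at the end.
fn base32_nopad(data: &[u8]) -> String {
    let mut out = String::new();
    let mut buf: u32 = 0;
    let mut bits = 0;
    for &b in data {
        buf = (buf << 8) | b as u32;
        bits += 8;
        while bits >= 5 {
            bits -= 5;
            out.push(ALPHABET[((buf >> bits) & 0x1f) as usize] as char);
        }
    }
    if bits > 0 {
        out.push(ALPHABET[((buf << (5 - bits)) & 0x1f) as usize] as char);
    }
    out
}
```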
#[test] #[test]
@@ -39,15 +46,71 @@ fn parse_guid() {
impl<T: AsRef<str>> std::fmt::Display for RequestGuid<T> { impl<T: AsRef<str>> std::fmt::Display for RequestGuid<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.0.as_ref().fmt(f) (&*self.0).as_ref().fmt(f)
} }
} }
pub struct RpcContinuation { pub type RestHandler = Box<
pub created_at: Instant, dyn FnOnce(Request<Body>) -> BoxFuture<'static, Result<Response<Body>, crate::Error>> + Send,
pub handler: Box< >;
dyn FnOnce(Request<Body>) -> BoxFuture<'static, Result<Response<Body>, crate::Error>>
+ Send pub type WebSocketHandler = Box<
+ Sync, dyn FnOnce(
>, BoxFuture<'static, Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
) -> BoxFuture<'static, Result<(), Error>>
+ Send,
>;
pub enum RpcContinuation {
Rest(TimedResource<RestHandler>),
WebSocket(TimedResource<WebSocketHandler>),
}
impl RpcContinuation {
pub fn rest(handler: RestHandler, timeout: Duration) -> Self {
RpcContinuation::Rest(TimedResource::new(handler, timeout))
}
pub fn ws(handler: WebSocketHandler, timeout: Duration) -> Self {
RpcContinuation::WebSocket(TimedResource::new(handler, timeout))
}
pub fn is_timed_out(&self) -> bool {
match self {
RpcContinuation::Rest(a) => a.is_timed_out(),
RpcContinuation::WebSocket(a) => a.is_timed_out(),
}
}
pub async fn into_handler(self) -> Option<RestHandler> {
match self {
RpcContinuation::Rest(handler) => handler.get().await,
RpcContinuation::WebSocket(handler) => {
if let Some(handler) = handler.get().await {
Some(Box::new(
|req: Request<Body>| -> BoxFuture<'static, Result<Response<Body>, Error>> {
async move {
let (parts, body) = req.into_parts();
let req = Request::from_parts(parts, body);
let (res, ws_fut) = hyper_ws_listener::create_ws(req)
.with_kind(crate::ErrorKind::Network)?;
if let Some(ws_fut) = ws_fut {
tokio::task::spawn(async move {
match handler(ws_fut.boxed()).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
}
});
}
Ok(res)
}
.boxed()
},
))
} else {
None
}
}
}
}
} }
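Both continuation variants wrap their handler in `helpers::TimedResource`, so `is_timed_out` and `into_handler` reduce to a value guarded by a deadline. A synchronous stand-in (`Timed` is hypothetical; the real type is async and cancels a background timer):

```rust
use std::time::{Duration, Instant};

// A value that can be taken at most once, and only before its deadline.
struct Timed<T> {
    value: Option<T>,
    deadline: Instant,
}

impl<T> Timed<T> {
    fn new(value: T, timeout: Duration) -> Self {
        Timed { value: Some(value), deadline: Instant::now() + timeout }
    }
    fn is_timed_out(&self) -> bool {
        Instant::now() > self.deadline
    }
    // Mirrors TimedResource::get: None once expired or already consumed.
    fn take(&mut self) -> Option<T> {
        if self.is_timed_out() { None } else { self.value.take() }
    }
}
```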


@@ -1,92 +1,61 @@
pub mod model; pub mod model;
pub mod package; pub mod package;
pub mod util;
use std::borrow::Cow;
use std::future::Future; use std::future::Future;
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration;
use color_eyre::eyre::eyre;
use futures::{FutureExt, SinkExt, StreamExt}; use futures::{FutureExt, SinkExt, StreamExt};
use patch_db::json_ptr::JsonPointer; use patch_db::json_ptr::JsonPointer;
use patch_db::{Dump, Revision}; use patch_db::{Dump, Revision};
use rpc_toolkit::command; use rpc_toolkit::command;
use rpc_toolkit::hyper::upgrade::Upgraded; use rpc_toolkit::hyper::upgrade::Upgraded;
use rpc_toolkit::hyper::{Body, Error as HyperError, Request, Response}; use rpc_toolkit::hyper::{Body, Error as HyperError, Request, Response};
use rpc_toolkit::yajrc::{GenericRpcMethod, RpcError, RpcResponse}; use rpc_toolkit::yajrc::RpcError;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use serde_json::Value; use serde_json::Value;
use tokio::sync::{broadcast, oneshot}; use tokio::sync::oneshot;
use tokio::task::JoinError; use tokio::task::JoinError;
use tokio_tungstenite::tungstenite::protocol::frame::coding::CloseCode;
use tokio_tungstenite::tungstenite::protocol::CloseFrame;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::WebSocketStream;
use tracing::instrument;

pub use self::model::DatabaseModel;
use crate::context::RpcContext;
use crate::middleware::auth::{HasValidSession, HashSessionToken};
use crate::util::serde::{display_serializable, IoFormat};
use crate::{Error, ResultExt};

#[instrument(skip(ctx, session, ws_fut))]
async fn ws_handler<
    WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
>(
    ctx: RpcContext,
    session: Option<(HasValidSession, HashSessionToken)>,
    ws_fut: WSFut,
) -> Result<(), Error> {
    let (dump, sub) = ctx.db.dump_and_sub().await?;
    let mut stream = ws_fut
        .await
        .with_kind(crate::ErrorKind::Network)?
        .with_kind(crate::ErrorKind::Unknown)?;
    if let Some((session, token)) = session {
        let kill = subscribe_to_session_kill(&ctx, token).await;
        send_dump(session, &mut stream, dump).await?;
        deal_with_messages(session, kill, sub, stream).await?;
    } else {
        stream
            .close(Some(CloseFrame {
                code: CloseCode::Error,
                reason: "UNAUTHORIZED".into(),
            }))
            .await
            .with_kind(crate::ErrorKind::Network)?;
    }
    Ok(())
}
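The rewritten handler collapses the old in-band cookie handshake into a single branch: if authentication already succeeded during the HTTP upgrade, the full dump is sent and revisions stream afterward; otherwise the socket is closed immediately with an UNAUTHORIZED close frame. A minimal sketch of that control flow, using hypothetical stand-in types (`Frame`, `handle_ws`) rather than the real tungstenite API:

```rust
// Sketch only: `Frame` and `handle_ws` are hypothetical stand-ins for the
// real WebSocketStream / CloseFrame machinery above.
#[derive(Debug, PartialEq)]
enum Frame {
    Dump(String),        // initial full state for authenticated clients
    Close(&'static str), // close frame with a reason
}

fn handle_ws(session: Option<&str>, dump: String) -> Vec<Frame> {
    match session {
        // Authenticated during the upgrade: send the dump; revisions follow.
        Some(_user) => vec![Frame::Dump(dump)],
        // Unauthenticated: close right away, mirroring CloseCode::Error.
        None => vec![Frame::Close("UNAUTHORIZED")],
    }
}

fn main() {
    assert_eq!(
        handle_ws(Some("admin"), "{}".to_string()),
        vec![Frame::Dump("{}".to_string())]
    );
    assert_eq!(
        handle_ws(None, "{}".to_string()),
        vec![Frame::Close("UNAUTHORIZED")]
    );
    println!("ok");
}
```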
@@ -108,46 +77,32 @@ async fn subscribe_to_session_kill(
async fn deal_with_messages(
    _has_valid_authentication: HasValidSession,
    mut kill: oneshot::Receiver<()>,
    mut sub: patch_db::Subscriber,
    mut stream: WebSocketStream<Upgraded>,
) -> Result<(), Error> {
    loop {
        futures::select! {
            _ = (&mut kill).fuse() => {
                tracing::info!("Closing WebSocket: Reason: Session Terminated");
                stream
                    .close(Some(CloseFrame {
                        code: CloseCode::Error,
                        reason: "UNAUTHORIZED".into(),
                    }))
                    .await
                    .with_kind(crate::ErrorKind::Network)?;
                return Ok(())
            }
            new_rev = sub.recv().fuse() => {
                let rev = new_rev.expect("UNREACHABLE: patch-db is dropped");
                stream
                    .send(Message::Text(serde_json::to_string(&rev).with_kind(crate::ErrorKind::Serialization)?))
                    .await
                    .with_kind(crate::ErrorKind::Network)?;
            }
            message = stream.next().fuse() => {
                let message = message.transpose().with_kind(crate::ErrorKind::Network)?;
                match message {
                    None => {
                        tracing::info!("Closing WebSocket: Stream Finished");
                        return Ok(())
@@ -155,12 +110,6 @@ async fn deal_with_messages(
                    _ => (),
                }
            }
        }
    }
}
@@ -172,13 +121,7 @@ async fn send_dump(
) -> Result<(), Error> {
    stream
        .send(Message::Text(
            serde_json::to_string(&dump).with_kind(crate::ErrorKind::Serialization)?,
        ))
        .await
        .with_kind(crate::ErrorKind::Network)?;
@@ -187,11 +130,27 @@ async fn send_dump(
pub async fn subscribe(ctx: RpcContext, req: Request<Body>) -> Result<Response<Body>, Error> {
    let (parts, body) = req.into_parts();
    let session = match async {
        let token = HashSessionToken::from_request_parts(&parts)?;
        let session = HasValidSession::from_session(&token, &ctx).await?;
        Ok::<_, Error>((session, token))
    }
    .await
    {
        Ok(a) => Some(a),
        Err(e) => {
            if e.kind != crate::ErrorKind::Authorization {
                tracing::error!("Error Authenticating Websocket: {}", e);
                tracing::debug!("{:?}", e);
            }
            None
        }
    };
    let req = Request::from_parts(parts, body);
    let (res, ws_fut) = hyper_ws_listener::create_ws(req).with_kind(crate::ErrorKind::Network)?;
    if let Some(ws_fut) = ws_fut {
        tokio::task::spawn(async move {
            match ws_handler(ctx, session, ws_fut).await {
                Ok(()) => (),
                Err(e) => {
                    tracing::error!("WebSocket Closed: {}", e);
@@ -223,24 +182,11 @@ pub async fn revisions(
    #[allow(unused_variables)]
    #[arg(long = "format")]
    format: Option<IoFormat>,
) -> Result<RevisionsRes, Error> {
    Ok(match ctx.db.sync(since).await? {
        Ok(revs) => RevisionsRes::Revisions(revs),
        Err(dump) => RevisionsRes::Dump(dump),
    })
}
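The new `revisions` endpoint delegates the cache logic to `ctx.db.sync(since)`, which either serves the revisions newer than `since` or signals that the client needs a full dump because the cache no longer reaches back far enough. A std-only sketch of that contract, with `RevCache` and its `sync` as hypothetical stand-ins for patch-db's internals:

```rust
use std::collections::VecDeque;

// Hypothetical stand-in for patch-db's revision cache: `sync` returns the
// patches newer than `since` while the cache still holds them, otherwise
// asks the caller to fall back to a full dump.
struct RevCache {
    revs: VecDeque<(u64, String)>, // (revision id, patch), oldest first
}

impl RevCache {
    fn sync(&self, since: u64) -> Result<Vec<(u64, String)>, &'static str> {
        match self.revs.front() {
            // Cache still reaches back far enough: serve incremental patches.
            Some((oldest, _)) if *oldest <= since + 1 => {
                Ok(self.revs.iter().filter(|r| r.0 > since).cloned().collect())
            }
            // Revisions after `since` were already evicted: full dump needed.
            _ => Err("dump"),
        }
    }
}

fn main() {
    let cache = RevCache {
        revs: VecDeque::from(vec![(3, "a".to_string()), (4, "b".to_string())]),
    };
    assert_eq!(cache.sync(2).unwrap().len(), 2); // ids 3 and 4
    assert_eq!(cache.sync(3).unwrap().len(), 1); // id 4 only
    assert!(cache.sync(0).is_err()); // ids 1..=2 already evicted
    println!("ok");
}
```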
#[command(display(display_serializable))]
@@ -249,8 +195,8 @@ pub async fn dump(
    #[allow(unused_variables)]
    #[arg(long = "format")]
    format: Option<IoFormat>,
) -> Result<Dump, Error> {
    Ok(ctx.db.dump().await?)
}
#[command(subcommands(ui))]
@@ -267,13 +213,11 @@ pub async fn ui(
    #[allow(unused_variables)]
    #[arg(long = "format")]
    format: Option<IoFormat>,
) -> Result<(), Error> {
    let ptr = "/ui"
        .parse::<JsonPointer>()
        .with_kind(crate::ErrorKind::Database)?
        + &pointer;
    ctx.db.put(&ptr, &value).await?;
    Ok(())
}

View File

@@ -12,6 +12,7 @@ use serde_json::Value;
use torut::onion::TorSecretKeyV3;

use crate::config::spec::{PackagePointerSpec, SystemPointerSpec};
use crate::hostname::{generate_hostname, generate_id};
use crate::install::progress::InstallProgress;
use crate::net::interface::InterfaceId;
use crate::s9pk::manifest::{Manifest, ManifestModel, PackageId};
@@ -32,26 +33,25 @@ pub struct Database {
    pub ui: Value,
}

impl Database {
    pub fn init(tor_key: &TorSecretKeyV3, password_hash: String) -> Self {
        let id = generate_id();
        let my_hostname = generate_hostname();
        let lan_address = my_hostname.lan_address().parse().unwrap();
        // TODO
        Database {
            server_info: ServerInfo {
                id,
                version: Current::new().semver().into(),
                hostname: Some(my_hostname.0),
                last_backup: None,
                last_wifi_region: None,
                eos_version_compat: Current::new().compat().clone(),
                lan_address,
                tor_address: format!("http://{}", tor_key.public().get_onion_address())
                    .parse()
                    .unwrap(),
                status_info: ServerStatus {
                    backup_progress: None,
                    updated: false,
                    update_progress: None,
                },
@@ -69,7 +69,8 @@ impl Database {
            },
            package_data: AllPackageData::default(),
            recovered_packages: BTreeMap::new(),
            ui: serde_json::from_str(include_str!("../../../frontend/patchdb-ui-seed.json"))
                .unwrap(),
        }
    }
}
@@ -83,6 +84,7 @@ impl DatabaseModel {
#[serde(rename_all = "kebab-case")]
pub struct ServerInfo {
    pub id: String,
    pub hostname: Option<String>,
    pub version: Version,
    pub last_backup: Option<DateTime<Utc>>,
    /// Used in the wifi to determine the region to set the system to
@@ -99,10 +101,16 @@ pub struct ServerInfo {
    pub password_hash: String,
}

#[derive(Debug, Default, Deserialize, Serialize, HasModel)]
pub struct BackupProgress {
    pub complete: bool,
}

#[derive(Debug, Default, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "kebab-case")]
pub struct ServerStatus {
    #[model]
    pub backup_progress: Option<BTreeMap<PackageId, BackupProgress>>,
    pub updated: bool,
    #[model]
    pub update_progress: Option<UpdateProgress>,
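The schema change here replaces the single `backing_up: bool` flag with an optional per-package progress map, so the UI can report which packages have finished. A std-only sketch of the idea, with `String` standing in for `PackageId`:

```rust
use std::collections::BTreeMap;

// Sketch of the schema change: the old boolean becomes derived state, and
// per-package completion lives in the map. `String` stands in for PackageId.
#[derive(Default)]
struct BackupProgress {
    complete: bool,
}

struct ServerStatus {
    backup_progress: Option<BTreeMap<String, BackupProgress>>,
}

impl ServerStatus {
    // A backup is running iff the progress map exists at all.
    fn backing_up(&self) -> bool {
        self.backup_progress.is_some()
    }
    // How many packages have finished backing up so far.
    fn completed(&self) -> usize {
        self.backup_progress
            .as_ref()
            .map(|m| m.values().filter(|p| p.complete).count())
            .unwrap_or(0)
    }
}

fn main() {
    let mut progress = BTreeMap::new();
    progress.insert("bitcoind".to_string(), BackupProgress { complete: true });
    progress.insert("lnd".to_string(), BackupProgress { complete: false });
    let status = ServerStatus { backup_progress: Some(progress) };
    assert!(status.backing_up());
    assert_eq!(status.completed(), 1);
    println!("ok");
}
```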
@@ -260,17 +268,66 @@ pub struct InstalledPackageDataEntry {
    #[model]
    pub manifest: Manifest,
    pub last_backup: Option<DateTime<Utc>>,
    pub system_pointers: Vec<SystemPointerSpec>,
    #[model]
    pub dependency_info: BTreeMap<PackageId, StaticDependencyInfo>,
    #[model]
    pub current_dependents: CurrentDependents,
    #[model]
    pub current_dependencies: CurrentDependencies,
    #[model]
    pub interface_addresses: InterfaceAddressMap,
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct CurrentDependents(pub BTreeMap<PackageId, CurrentDependencyInfo>);
impl CurrentDependents {
pub fn map(
mut self,
transform: impl Fn(
BTreeMap<PackageId, CurrentDependencyInfo>,
) -> BTreeMap<PackageId, CurrentDependencyInfo>,
) -> Self {
self.0 = transform(self.0);
self
}
}
impl Map for CurrentDependents {
type Key = PackageId;
type Value = CurrentDependencyInfo;
fn get(&self, key: &Self::Key) -> Option<&Self::Value> {
self.0.get(key)
}
}
impl HasModel for CurrentDependents {
type Model = MapModel<Self>;
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
pub struct CurrentDependencies(pub BTreeMap<PackageId, CurrentDependencyInfo>);
impl CurrentDependencies {
pub fn map(
mut self,
transform: impl Fn(
BTreeMap<PackageId, CurrentDependencyInfo>,
) -> BTreeMap<PackageId, CurrentDependencyInfo>,
) -> Self {
self.0 = transform(self.0);
self
}
}
impl Map for CurrentDependencies {
type Key = PackageId;
type Value = CurrentDependencyInfo;
fn get(&self, key: &Self::Key) -> Option<&Self::Value> {
self.0.get(key)
}
}
impl HasModel for CurrentDependencies {
type Model = MapModel<Self>;
}
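`CurrentDependents` and `CurrentDependencies` wrap their `BTreeMap` in a newtype so the type itself can implement patch-db's `Map`/`HasModel` traits and carry helpers like `map` for whole-map transformations. A std-only sketch of that pattern, with the trait machinery stubbed down to a plain `get`:

```rust
use std::collections::BTreeMap;

// Sketch of the newtype-map pattern: the wrapper owns the BTreeMap so it
// can implement external traits (stubbed here) and whole-map helpers.
#[derive(Debug, Default, PartialEq)]
struct CurrentDeps(BTreeMap<String, Vec<String>>); // pkg id -> health checks

impl CurrentDeps {
    // Transform the whole map in one shot, as the real `map` helper does.
    fn map(
        mut self,
        transform: impl Fn(BTreeMap<String, Vec<String>>) -> BTreeMap<String, Vec<String>>,
    ) -> Self {
        self.0 = transform(self.0);
        self
    }
    // Stand-in for the `Map` trait's keyed lookup.
    fn get(&self, key: &str) -> Option<&Vec<String>> {
        self.0.get(key)
    }
}

fn main() {
    let mut inner = BTreeMap::new();
    inner.insert("lnd".to_string(), vec!["synced".to_string()]);
    inner.insert("empty".to_string(), Vec::new());
    let deps = CurrentDeps(inner)
        // drop entries with no health checks, as a whole-map transform
        .map(|m| m.into_iter().filter(|(_, v)| !v.is_empty()).collect());
    assert!(deps.get("lnd").is_some());
    assert!(deps.get("empty").is_none());
    println!("ok");
}
```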
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "kebab-case")]
pub struct StaticDependencyInfo {

View File

@@ -1,25 +1,75 @@
use patch_db::{DbHandle, LockReceipt, LockTargetId, LockType, Verifier};

use crate::s9pk::manifest::{Manifest, PackageId};
use crate::Error;
pub struct PackageReceipts {
    package_data: LockReceipt<super::model::AllPackageData, ()>,
}

impl PackageReceipts {
    pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
        let mut locks = Vec::new();
        let setup = Self::setup(&mut locks);
        Ok(setup(&db.lock_all(locks).await?)?)
    }

    pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
        let package_data = crate::db::DatabaseModel::new()
            .package_data()
            .make_locker(LockType::Read)
            .add_to_keys(locks);
        move |skeleton_key| {
            Ok(Self {
                package_data: package_data.verify(&skeleton_key)?,
            })
        }
    }
}

pub async fn get_packages<Db: DbHandle>(
    db: &mut Db,
    receipts: &PackageReceipts,
) -> Result<Vec<PackageId>, Error> {
    let packages = receipts.package_data.get(db).await?;
    Ok(packages.0.keys().cloned().collect())
}
pub struct ManifestReceipts {
manifest: LockReceipt<Manifest, String>,
}
impl ManifestReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks, id);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<LockTargetId>,
_id: &PackageId,
) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
let manifest = crate::db::DatabaseModel::new()
.package_data()
.star()
.manifest()
.make_locker(LockType::Read)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
manifest: manifest.verify(&skeleton_key)?,
})
}
}
}
pub async fn get_manifest<Db: DbHandle>(
    db: &mut Db,
    pkg: &PackageId,
    receipts: &ManifestReceipts,
) -> Result<Option<Manifest>, Error> {
    Ok(receipts.manifest.get(db, pkg).await?)
}
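The receipts refactor above follows one pattern throughout: declare every lock target up front, acquire them in a single batch, then "verify" the batch into typed receipts that later reads go through. A std-only sketch of that three-step shape, where `Verifier`, `LockReceipt`, and `lock_all` are stand-ins for the patch-db API, not its real signatures:

```rust
use std::collections::BTreeSet;

// Stand-in for the verifier returned by a batch lock acquisition.
struct Verifier {
    held: BTreeSet<String>,
}

// Stand-in for a typed receipt proving a specific target is locked.
struct LockReceipt {
    target: String,
}

impl LockReceipt {
    fn verify(target: &str, v: &Verifier) -> Result<Self, String> {
        if v.held.contains(target) {
            Ok(LockReceipt { target: target.to_string() })
        } else {
            Err(format!("lock not held: {}", target))
        }
    }
}

// A real implementation would sort targets and acquire locks; here we
// just record what was requested.
fn lock_all(targets: &[String]) -> Verifier {
    Verifier { held: targets.iter().cloned().collect() }
}

fn main() {
    // 1. collect targets, 2. acquire in one batch, 3. verify into receipts
    let targets = vec!["/package-data".to_string()];
    let verifier = lock_all(&targets);
    let receipt = LockReceipt::verify("/package-data", &verifier).unwrap();
    assert_eq!(receipt.target, "/package-data");
    assert!(LockReceipt::verify("/ui", &verifier).is_err());
    println!("ok");
}
```

Acquiring every lock in one batch (rather than piecemeal mid-operation, as the removed `idx_model(...).get(db, ...)` chains did) makes the lock set auditable and avoids incremental-acquisition deadlocks.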

View File

@@ -1,10 +0,0 @@
use std::sync::Arc;
use patch_db::Revision;
use serde::{Deserialize, Serialize};
#[derive(Clone, Debug, Deserialize, Serialize)]
pub struct WithRevision<T> {
pub response: T,
pub revision: Option<Arc<Revision>>,
}

View File

@@ -6,19 +6,20 @@ use color_eyre::eyre::eyre;
use emver::VersionRange;
use futures::future::BoxFuture;
use futures::FutureExt;
use patch_db::{
    DbHandle, HasModel, LockReceipt, LockTargetId, LockType, Map, MapModel, PatchDbHandle, Verifier,
};
use rand::SeedableRng;
use rpc_toolkit::command;
use serde::{Deserialize, Serialize};
use tracing::instrument;

use crate::config::action::{ConfigActions, ConfigRes};
use crate::config::spec::PackagePointerSpec;
use crate::config::{not_found, Config, ConfigReceipts, ConfigSpec};
use crate::context::RpcContext;
use crate::db::model::{CurrentDependencies, CurrentDependents, InstalledPackageDataEntry};
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::status::health_check::{HealthCheckId, HealthCheckResult};
use crate::status::{MainStatus, Status};
@@ -55,6 +56,72 @@ pub enum DependencyError {
    Transitive, // { "type": "transitive" }
}
#[derive(Clone)]
pub struct TryHealReceipts {
status: LockReceipt<Status, String>,
manifest: LockReceipt<Manifest, String>,
manifest_version: LockReceipt<Version, String>,
current_dependencies: LockReceipt<CurrentDependencies, String>,
dependency_errors: LockReceipt<DependencyErrors, String>,
}
impl TryHealReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
let manifest_version = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.manifest().version())
.make_locker(LockType::Write)
.add_to_keys(locks);
let status = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.status())
.make_locker(LockType::Write)
.add_to_keys(locks);
let manifest = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.manifest())
.make_locker(LockType::Write)
.add_to_keys(locks);
let current_dependencies = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.current_dependencies())
.make_locker(LockType::Write)
.add_to_keys(locks);
let dependency_errors = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.status().dependency_errors())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
status: status.verify(skeleton_key)?,
manifest_version: manifest_version.verify(skeleton_key)?,
current_dependencies: current_dependencies.verify(skeleton_key)?,
manifest: manifest.verify(skeleton_key)?,
dependency_errors: dependency_errors.verify(skeleton_key)?,
})
}
}
}
impl DependencyError {
    pub fn cmp_priority(&self, other: &DependencyError) -> std::cmp::Ordering {
        use std::cmp::Ordering::*;
@@ -114,7 +181,7 @@ impl DependencyError {
            (DependencyError::Transitive, _) => DependencyError::Transitive,
        }
    }

    #[instrument(skip(ctx, db, receipts))]
    pub fn try_heal<'a, Db: DbHandle>(
        self,
        ctx: &'a RpcContext,
@@ -123,42 +190,33 @@ impl DependencyError {
        dependency: &'a PackageId,
        mut dependency_config: Option<Config>,
        info: &'a DepInfo,
        receipts: &'a TryHealReceipts,
    ) -> BoxFuture<'a, Result<Option<Self>, Error>> {
        async move {
            Ok(match self {
                DependencyError::NotInstalled => {
                    if receipts.status.get(db, dependency).await?.is_some() {
                        DependencyError::IncorrectVersion {
                            expected: info.version.clone(),
                            received: Default::default(),
                        }
                        .try_heal(ctx, db, id, dependency, dependency_config, info, receipts)
                        .await?
                    } else {
                        Some(DependencyError::NotInstalled)
                    }
                }
                DependencyError::IncorrectVersion { expected, .. } => {
                    let version: Version = receipts
                        .manifest_version
                        .get(db, dependency)
                        .await?
                        .unwrap_or_default();
                    if version.satisfies(&expected) {
                        DependencyError::ConfigUnsatisfied {
                            error: String::new(),
                        }
                        .try_heal(ctx, db, id, dependency, dependency_config, info, receipts)
                        .await?
                    } else {
                        Some(DependencyError::IncorrectVersion {
@@ -168,24 +226,14 @@ impl DependencyError {
                    }
                }
                DependencyError::ConfigUnsatisfied { .. } => {
                    let dependent_manifest =
                        receipts.manifest.get(db, id).await?.ok_or_else(not_found)?;
                    let dependency_manifest = receipts
                        .manifest
                        .get(db, dependency)
                        .await?
                        .ok_or_else(not_found)?;
                    let dependency_config = if let Some(cfg) = dependency_config.take() {
                        cfg
                    } else if let Some(cfg_info) = &dependency_manifest.config {
@@ -209,6 +257,7 @@ impl DependencyError {
                            id,
                            &dependent_manifest.version,
                            &dependent_manifest.volumes,
                            dependency,
                            &dependency_config,
                        )
                        .await?
@@ -217,40 +266,39 @@ impl DependencyError {
                        }
                    }
                    DependencyError::NotRunning
                        .try_heal(
                            ctx,
                            db,
                            id,
                            dependency,
                            Some(dependency_config),
                            info,
                            receipts,
                        )
                        .await?
                }
                DependencyError::NotRunning => {
                    let status = receipts
                        .status
                        .get(db, dependency)
                        .await?
                        .ok_or_else(not_found)?;
                    if status.main.running() {
                        DependencyError::HealthChecksFailed {
                            failures: BTreeMap::new(),
                        }
                        .try_heal(ctx, db, id, dependency, dependency_config, info, receipts)
                        .await?
                    } else {
                        Some(DependencyError::NotRunning)
                    }
                }
                DependencyError::HealthChecksFailed { .. } => {
                    let status = receipts
                        .status
                        .get(db, dependency)
                        .await?
                        .ok_or_else(not_found)?;
                    match status.main {
                        MainStatus::BackingUp {
                            started: Some(_),
@@ -260,19 +308,14 @@ impl DependencyError {
                            let mut failures = BTreeMap::new();
                            for (check, res) in health {
                                if !matches!(res, HealthCheckResult::Success)
                                    && receipts
                                        .current_dependencies
                                        .get(db, id)
                                        .await?
                                        .ok_or_else(not_found)?
                                        .get(dependency)
                                        .map(|x| x.health_checks.contains(&check))
                                        .unwrap_or(false)
                                {
                                    failures.insert(check.clone(), res.clone());
                                }
@@ -281,27 +324,39 @@ impl DependencyError {
                                Some(DependencyError::HealthChecksFailed { failures })
                            } else {
                                DependencyError::Transitive
                                    .try_heal(
                                        ctx,
                                        db,
                                        id,
                                        dependency,
                                        dependency_config,
                                        info,
                                        receipts,
                                    )
                                    .await?
                            }
                        }
                        MainStatus::Starting { .. } | MainStatus::Restarting => {
                            DependencyError::Transitive
                                .try_heal(
                                    ctx,
                                    db,
                                    id,
                                    dependency,
                                    dependency_config,
                                    info,
                                    receipts,
                                )
                                .await?
                        }
                        _ => return Ok(Some(DependencyError::NotRunning)),
                    }
                }
                DependencyError::Transitive => {
                    if receipts
                        .dependency_errors
                        .get(db, dependency)
                        .await?
                        .unwrap_or_default()
                        .0
                        .is_empty()
@@ -406,6 +461,7 @@ impl DepInfo {
        dependency_id: &PackageId,
        dependency_config: Option<Config>, // fetch if none
        dependent_id: &PackageId,
        receipts: &TryHealReceipts,
    ) -> Result<Result<(), DependencyError>, Error> {
        Ok(
            if let Some(err) = DependencyError::NotInstalled
@@ -416,6 +472,7 @@ impl DepInfo {
                dependency_id,
                dependency_config,
                self,
                receipts,
            )
            .await?
            {
@@ -430,8 +487,8 @@ impl DepInfo {
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "kebab-case")]
pub struct DependencyConfig {
    check: PackageProcedure,
    auto_configure: PackageProcedure,
}

impl DependencyConfig {
    pub async fn check(
@@ -440,6 +497,7 @@ impl DependencyConfig {
        dependent_id: &PackageId,
        dependent_version: &Version,
        dependent_volumes: &Volumes,
        dependency_id: &PackageId,
        dependency_config: &Config,
    ) -> Result<Result<NoOutput, String>, Error> {
        Ok(self
@@ -451,6 +509,7 @@ impl DependencyConfig {
                dependent_volumes,
                Some(dependency_config),
                None,
                ProcedureName::Check(dependency_id.clone()),
            )
            .await?
            .map_err(|(_, e)| e))
@@ -471,12 +530,97 @@ impl DependencyConfig {
                dependent_volumes,
                Some(old),
                None,
                ProcedureName::AutoConfig(dependent_id.clone()),
            )
            .await?
            .map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))
    }
}
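The `try_heal` match above is a cascade: each error variant re-checks the next-most-severe condition and either heals fully (`None`) or settles on the first condition that still fails. A compressed std-only sketch of that cascade, where `Dep` and the flat three-variant enum are hypothetical simplifications of the real db-backed state:

```rust
// Sketch of try_heal's cascading checks; `Dep` stands in for the state
// that the real code reads through lock receipts.
#[derive(Clone, Debug, PartialEq)]
enum DepError {
    NotInstalled,
    NotRunning,
    HealthChecksFailed,
}

struct Dep {
    installed: bool,
    running: bool,
    healthy: bool,
}

// Walk from the most severe condition to the least; the first failing
// check wins, and None means the dependency error has fully healed.
fn try_heal(dep: &Dep) -> Option<DepError> {
    if !dep.installed {
        return Some(DepError::NotInstalled);
    }
    if !dep.running {
        return Some(DepError::NotRunning);
    }
    if !dep.healthy {
        return Some(DepError::HealthChecksFailed);
    }
    None
}

fn main() {
    let broken = Dep { installed: false, running: false, healthy: false };
    assert_eq!(try_heal(&broken), Some(DepError::NotInstalled));
    let stopped = Dep { installed: true, running: false, healthy: false };
    assert_eq!(try_heal(&stopped), Some(DepError::NotRunning));
    let ok = Dep { installed: true, running: true, healthy: true };
    assert_eq!(try_heal(&ok), None);
    println!("ok");
}
```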
pub struct DependencyConfigReceipts {
config: ConfigReceipts,
dependencies: LockReceipt<Dependencies, ()>,
dependency_volumes: LockReceipt<Volumes, ()>,
dependency_version: LockReceipt<Version, ()>,
dependency_config_action: LockReceipt<ConfigActions, ()>,
package_volumes: LockReceipt<Volumes, ()>,
package_version: LockReceipt<Version, ()>,
}
impl DependencyConfigReceipts {
pub async fn new<'a>(
db: &'a mut impl DbHandle,
package_id: &PackageId,
dependency_id: &PackageId,
) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks, package_id, dependency_id);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(
locks: &mut Vec<LockTargetId>,
package_id: &PackageId,
dependency_id: &PackageId,
) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
let config = ConfigReceipts::setup(locks);
let dependencies = crate::db::DatabaseModel::new()
.package_data()
.idx_model(package_id)
.and_then(|x| x.installed())
.map(|x| x.manifest().dependencies())
.make_locker(LockType::Write)
.add_to_keys(locks);
let dependency_volumes = crate::db::DatabaseModel::new()
.package_data()
.idx_model(dependency_id)
.and_then(|x| x.installed())
.map(|x| x.manifest().volumes())
.make_locker(LockType::Write)
.add_to_keys(locks);
let dependency_version = crate::db::DatabaseModel::new()
.package_data()
.idx_model(dependency_id)
.and_then(|x| x.installed())
.map(|x| x.manifest().version())
.make_locker(LockType::Write)
.add_to_keys(locks);
let dependency_config_action = crate::db::DatabaseModel::new()
.package_data()
.idx_model(dependency_id)
.and_then(|x| x.installed())
.and_then(|x| x.manifest().config())
.make_locker(LockType::Write)
.add_to_keys(locks);
let package_volumes = crate::db::DatabaseModel::new()
.package_data()
.idx_model(package_id)
.and_then(|x| x.installed())
.map(|x| x.manifest().volumes())
.make_locker(LockType::Write)
.add_to_keys(locks);
let package_version = crate::db::DatabaseModel::new()
.package_data()
.idx_model(package_id)
.and_then(|x| x.installed())
.map(|x| x.manifest().version())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
config: config(skeleton_key)?,
dependencies: dependencies.verify(&skeleton_key)?,
dependency_volumes: dependency_volumes.verify(&skeleton_key)?,
dependency_version: dependency_version.verify(&skeleton_key)?,
dependency_config_action: dependency_config_action.verify(&skeleton_key)?,
package_volumes: package_volumes.verify(&skeleton_key)?,
package_version: package_version.verify(&skeleton_key)?,
})
}
}
}
#[command(
    subcommands(self(configure_impl(async)), configure_dry),
    display(display_none)
@@ -493,11 +637,14 @@ pub async fn configure_impl(
    (pkg_id, dep_id): (PackageId, PackageId),
) -> Result<(), Error> {
    let mut db = ctx.db.handle();
    let receipts = DependencyConfigReceipts::new(&mut db, &pkg_id, &dep_id).await?;
    let ConfigDryRes {
        old_config: _,
        new_config,
        spec: _,
    } = configure_logic(ctx.clone(), &mut db, (pkg_id, dep_id.clone()), &receipts).await?;
    let locks = &receipts.config;
    Ok(crate::config::configure(
        &ctx,
        &mut db,
@@ -507,6 +654,7 @@ pub async fn configure_impl(
        false,
        &mut BTreeMap::new(),
        &mut BTreeMap::new(),
        locks,
    )
    .await?)
}
@@ -526,67 +674,25 @@ pub async fn configure_dry(
#[parent_data] (pkg_id, dependency_id): (PackageId, PackageId), #[parent_data] (pkg_id, dependency_id): (PackageId, PackageId),
) -> Result<ConfigDryRes, Error> { ) -> Result<ConfigDryRes, Error> {
let mut db = ctx.db.handle(); let mut db = ctx.db.handle();
configure_logic(ctx, &mut db, (pkg_id, dependency_id)).await let receipts = DependencyConfigReceipts::new(&mut db, &pkg_id, &dependency_id).await?;
configure_logic(ctx, &mut db, (pkg_id, dependency_id), &receipts).await
} }
pub async fn configure_logic( pub async fn configure_logic(
ctx: RpcContext, ctx: RpcContext,
db: &mut PatchDbHandle, db: &mut PatchDbHandle,
(pkg_id, dependency_id): (PackageId, PackageId), (pkg_id, dependency_id): (PackageId, PackageId),
receipts: &DependencyConfigReceipts,
) -> Result<ConfigDryRes, Error> { ) -> Result<ConfigDryRes, Error> {
crate::db::DatabaseModel::new() let pkg_version = receipts.package_version.get(db).await?;
.package_data() let pkg_volumes = receipts.package_volumes.get(db).await?;
.lock(db, LockType::Read) let dependency_config_action = receipts.dependency_config_action.get(db).await?;
.await?; let dependency_version = receipts.dependency_version.get(db).await?;
let pkg_model = crate::db::DatabaseModel::new() let dependency_volumes = receipts.dependency_volumes.get(db).await?;
.package_data() let dependencies = receipts.dependencies.get(db).await?;
.idx_model(&pkg_id)
.and_then(|m| m.installed())
.expect(db)
.await
.with_kind(crate::ErrorKind::NotFound)?;
let pkg_version = pkg_model.clone().manifest().version().get(db, true).await?;
let pkg_volumes = pkg_model.clone().manifest().volumes().get(db, true).await?;
let dependency_model = crate::db::DatabaseModel::new()
.package_data()
.idx_model(&dependency_id)
.and_then(|m| m.installed())
.expect(db)
.await
.with_kind(crate::ErrorKind::NotFound)?;
let dependency_config_action = dependency_model
.clone()
.manifest()
.config()
.get(db, true)
.await?
.to_owned()
.ok_or_else(|| {
Error::new(
eyre!("{} has no config", dependency_id),
crate::ErrorKind::NotFound,
)
})?;
let dependency_version = dependency_model
.clone()
.manifest()
.version()
.get(db, true)
.await?;
let dependency_volumes = dependency_model
.clone()
.manifest()
.volumes()
.get(db, true)
.await?;
let dependencies = pkg_model
.clone()
.manifest()
.dependencies()
.get(db, true)
.await?;
let dependency = dependencies let dependency = dependencies
.0
.get(&dependency_id) .get(&dependency_id)
.ok_or_else(|| { .ok_or_else(|| {
Error::new( Error::new(
@@ -617,8 +723,8 @@ pub async fn configure_logic(
.get( .get(
&ctx, &ctx,
&dependency_id, &dependency_id,
&*dependency_version, &dependency_version,
&*dependency_volumes, &dependency_volumes,
) )
.await?; .await?;
@@ -640,6 +746,7 @@ pub async fn configure_logic(
&pkg_volumes, &pkg_volumes,
Some(&old_config), Some(&old_config),
None, None,
ProcedureName::AutoConfig(dependency_id.clone()),
) )
.await? .await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))?; .map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))?;
@@ -650,29 +757,22 @@ pub async fn configure_logic(
spec, spec,
}) })
} }
-#[instrument(skip(db, current_dependencies))]
-pub async fn add_dependent_to_current_dependents_lists<
-    'a,
-    Db: DbHandle,
-    I: IntoIterator<Item = (&'a PackageId, &'a CurrentDependencyInfo)>,
->(
+#[instrument(skip(db, current_dependencies, current_dependent_receipt))]
+pub async fn add_dependent_to_current_dependents_lists<'a, Db: DbHandle>(
     db: &mut Db,
     dependent_id: &PackageId,
-    current_dependencies: I,
+    current_dependencies: &CurrentDependencies,
+    current_dependent_receipt: &LockReceipt<CurrentDependents, String>,
 ) -> Result<(), Error> {
-    for (dependency, dep_info) in current_dependencies {
-        if let Some(dependency_model) = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(&dependency)
-            .and_then(|pkg| pkg.installed())
-            .check(db)
-            .await?
+    for (dependency, dep_info) in &current_dependencies.0 {
+        if let Some(mut dependency_dependents) =
+            current_dependent_receipt.get(db, dependency).await?
         {
-            dependency_model
-                .current_dependents()
-                .idx_model(dependent_id)
-                .put(db, &dep_info)
-                .await?;
+            dependency_dependents
+                .0
+                .insert(dependent_id.clone(), dep_info.clone());
+            current_dependent_receipt
+                .set(db, dependency_dependents, dependency)
+                .await?;
         }
     }
@@ -696,10 +796,11 @@ impl DependencyErrors {
     ctx: &RpcContext,
     db: &mut Db,
     manifest: &Manifest,
-    current_dependencies: &BTreeMap<PackageId, CurrentDependencyInfo>,
+    current_dependencies: &CurrentDependencies,
+    receipts: &TryHealReceipts,
 ) -> Result<DependencyErrors, Error> {
     let mut res = BTreeMap::new();
-    for (dependency_id, info) in current_dependencies.keys().filter_map(|dependency_id| {
+    for (dependency_id, info) in current_dependencies.0.keys().filter_map(|dependency_id| {
         manifest
             .dependencies
             .0
@@ -707,7 +808,7 @@ impl DependencyErrors {
             .map(|info| (dependency_id, info))
     }) {
         if let Err(e) = info
-            .satisfied(ctx, db, dependency_id, None, &manifest.id)
+            .satisfied(ctx, db, dependency_id, None, &manifest.id, receipts)
             .await?
         {
             res.insert(dependency_id.clone(), e);
@@ -735,49 +836,86 @@ pub async fn break_all_dependents_transitive<'a, Db: DbHandle>(
     id: &'a PackageId,
     error: DependencyError,
     breakages: &'a mut BTreeMap<PackageId, TaggedDependencyError>,
+    receipts: &'a BreakTransitiveReceipts,
 ) -> Result<(), Error> {
-    for dependent in crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(id)
-        .and_then(|m| m.installed())
-        .expect(db)
-        .await?
-        .current_dependents()
-        .keys(db, true)
-        .await?
-        .into_iter()
-        .filter(|dependent| id != dependent)
+    for dependent in receipts
+        .current_dependents
+        .get(db, id)
+        .await?
+        .iter()
+        .flat_map(|x| x.0.keys())
+        .filter(|dependent| id != *dependent)
     {
-        break_transitive(db, &dependent, id, error.clone(), breakages).await?;
+        break_transitive(db, dependent, id, error.clone(), breakages, receipts).await?;
     }
     Ok(())
 }
+#[derive(Clone)]
+pub struct BreakTransitiveReceipts {
+    pub dependency_receipt: DependencyReceipt,
+    dependency_errors: LockReceipt<DependencyErrors, String>,
+    current_dependents: LockReceipt<CurrentDependents, String>,
+}
+impl BreakTransitiveReceipts {
+    pub async fn new(db: &'_ mut impl DbHandle) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks);
+        Ok(setup(&db.lock_all(locks).await?)?)
+    }
+    pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
+        let dependency_receipt = DependencyReceipt::setup(locks);
+        let dependency_errors = crate::db::DatabaseModel::new()
+            .package_data()
+            .star()
+            .installed()
+            .map(|x| x.status().dependency_errors())
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let current_dependents = crate::db::DatabaseModel::new()
+            .package_data()
+            .star()
+            .installed()
+            .map(|x| x.current_dependents())
+            .make_locker(LockType::Exist)
+            .add_to_keys(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                dependency_receipt: dependency_receipt(skeleton_key)?,
+                dependency_errors: dependency_errors.verify(skeleton_key)?,
+                current_dependents: current_dependents.verify(skeleton_key)?,
+            })
+        }
+    }
+}
-#[instrument(skip(db))]
+#[instrument(skip(db, receipts))]
 pub fn break_transitive<'a, Db: DbHandle>(
     db: &'a mut Db,
     id: &'a PackageId,
     dependency: &'a PackageId,
     error: DependencyError,
     breakages: &'a mut BTreeMap<PackageId, TaggedDependencyError>,
+    receipts: &'a BreakTransitiveReceipts,
 ) -> BoxFuture<'a, Result<(), Error>> {
     async move {
         let mut tx = db.begin().await?;
-        let model = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(id)
-            .and_then(|m| m.installed())
-            .expect(&mut tx)
-            .await?;
-        let mut status = model.clone().status().get_mut(&mut tx).await?;
-        let old = status.dependency_errors.0.remove(dependency);
+        let mut dependency_errors = receipts
+            .dependency_errors
+            .get(&mut tx, id)
+            .await?
+            .ok_or_else(not_found)?;
+        let old = dependency_errors.0.remove(dependency);
         let newly_broken = if let Some(e) = &old {
             error.cmp_priority(&e) == Ordering::Greater
         } else {
             true
         };
-        status.dependency_errors.0.insert(
+        dependency_errors.0.insert(
             dependency.clone(),
             if let Some(old) = old {
                 old.merge_with(error.clone())
@@ -793,12 +931,25 @@ pub fn break_transitive<'a, Db: DbHandle>(
                 error: error.clone(),
             },
         );
-        status.save(&mut tx).await?;
+        receipts
+            .dependency_errors
+            .set(&mut tx, dependency_errors, id)
+            .await?;
         tx.save().await?;
-        break_all_dependents_transitive(db, id, DependencyError::Transitive, breakages).await?;
+        break_all_dependents_transitive(
+            db,
+            id,
+            DependencyError::Transitive,
+            breakages,
+            receipts,
+        )
+        .await?;
     } else {
-        status.save(&mut tx).await?;
+        receipts
+            .dependency_errors
+            .set(&mut tx, dependency_errors, id)
+            .await?;
         tx.save().await?;
     }
@@ -808,68 +959,52 @@ pub fn break_transitive<'a, Db: DbHandle>(
     .boxed()
 }
-#[instrument(skip(ctx, db))]
+#[instrument(skip(ctx, db, locks))]
 pub async fn heal_all_dependents_transitive<'a, Db: DbHandle>(
     ctx: &'a RpcContext,
     db: &'a mut Db,
     id: &'a PackageId,
+    locks: &'a DependencyReceipt,
 ) -> Result<(), Error> {
-    for dependent in crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(id)
-        .and_then(|m| m.installed())
-        .expect(db)
-        .await?
-        .current_dependents()
-        .keys(db, true)
-        .await?
-        .into_iter()
-        .filter(|dependent| id != dependent)
-    {
-        heal_transitive(ctx, db, &dependent, id).await?;
+    let dependents = locks
+        .current_dependents
+        .get(db, id)
+        .await?
+        .ok_or_else(not_found)?;
+    for dependent in dependents.0.keys().filter(|dependent| id != *dependent) {
+        heal_transitive(ctx, db, dependent, id, locks).await?;
     }
     Ok(())
 }
-#[instrument(skip(ctx, db))]
+#[instrument(skip(ctx, db, receipts))]
 pub fn heal_transitive<'a, Db: DbHandle>(
     ctx: &'a RpcContext,
     db: &'a mut Db,
     id: &'a PackageId,
     dependency: &'a PackageId,
+    receipts: &'a DependencyReceipt,
 ) -> BoxFuture<'a, Result<(), Error>> {
     async move {
-        let mut tx = db.begin().await?;
-        let model = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(id)
-            .and_then(|m| m.installed())
-            .expect(&mut tx)
-            .await?;
-        let mut status = model.clone().status().get_mut(&mut tx).await?;
+        let mut status = receipts.status.get(db, id).await?.ok_or_else(not_found)?;
         let old = status.dependency_errors.0.remove(dependency);
         if let Some(old) = old {
-            let info = model
-                .manifest()
-                .dependencies()
-                .idx_model(dependency)
-                .expect(&mut tx)
-                .await?
-                .get(&mut tx, true)
-                .await?;
+            let info = receipts
+                .dependency
+                .get(db, (id, dependency))
+                .await?
+                .ok_or_else(not_found)?;
             if let Some(new) = old
-                .try_heal(ctx, &mut tx, id, dependency, None, &*info)
+                .try_heal(ctx, db, id, dependency, None, &info, &receipts.try_heal)
                 .await?
             {
                 status.dependency_errors.0.insert(dependency.clone(), new);
-                status.save(&mut tx).await?;
-                tx.save().await?;
+                receipts.status.set(db, status, id).await?;
             } else {
-                status.save(&mut tx).await?;
-                tx.save().await?;
-                heal_all_dependents_transitive(ctx, db, id).await?;
+                receipts.status.set(db, status, id).await?;
+                heal_all_dependents_transitive(ctx, db, id, receipts).await?;
             }
         }
     }
@@ -881,11 +1016,12 @@ pub fn heal_transitive<'a, Db: DbHandle>(
 pub async fn reconfigure_dependents_with_live_pointers(
     ctx: &RpcContext,
     mut tx: impl DbHandle,
+    receipts: &ConfigReceipts,
     pde: &InstalledPackageDataEntry,
 ) -> Result<(), Error> {
     let dependents = &pde.current_dependents;
     let me = &pde.manifest.id;
-    for (dependent_id, dependency_info) in dependents {
+    for (dependent_id, dependency_info) in &dependents.0 {
         if dependency_info.pointers.iter().any(|ptr| match ptr {
             // dependency id matches the package being uninstalled
             PackagePointerSpec::TorAddress(ptr) => &ptr.package_id == me && dependent_id != me,
@@ -903,9 +1039,60 @@ pub async fn reconfigure_dependents_with_live_pointers(
                 false,
                 &mut BTreeMap::new(),
                 &mut BTreeMap::new(),
+                receipts,
             )
             .await?;
         }
     }
     Ok(())
 }
#[derive(Clone)]
pub struct DependencyReceipt {
pub try_heal: TryHealReceipts,
current_dependents: LockReceipt<CurrentDependents, String>,
status: LockReceipt<Status, String>,
dependency: LockReceipt<DepInfo, (String, String)>,
}
impl DependencyReceipt {
pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
let try_heal = TryHealReceipts::setup(locks);
let dependency = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.manifest().dependencies().star())
.make_locker(LockType::Read)
.add_to_keys(locks);
let current_dependents = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.current_dependents())
.make_locker(LockType::Write)
.add_to_keys(locks);
let status = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.status())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
try_heal: try_heal(skeleton_key)?,
current_dependents: current_dependents.verify(skeleton_key)?,
status: status.verify(skeleton_key)?,
dependency: dependency.verify(skeleton_key)?,
})
}
}
}


@@ -2,6 +2,7 @@ use std::fs::File;
 use std::io::Write;
 use std::path::Path;
+use ed25519::pkcs8::EncodePrivateKey;
 use ed25519_dalek::Keypair;
 use rpc_toolkit::command;
 use tracing::instrument;
@@ -20,10 +21,19 @@ pub fn init(#[context] ctx: SdkContext) -> Result<(), Error> {
             .with_ctx(|_| (crate::ErrorKind::Filesystem, parent.display().to_string()))?;
     }
     tracing::info!("Generating new developer key...");
-    let keypair = Keypair::generate(&mut rand::thread_rng());
+    let keypair = Keypair::generate(&mut rand_old::thread_rng());
     tracing::info!("Writing key to {}", ctx.developer_key_path.display());
+    let keypair_bytes = ed25519::KeypairBytes {
+        secret_key: keypair.secret.to_bytes(),
+        public_key: Some(keypair.public.to_bytes()),
+    };
     let mut dev_key_file = File::create(&ctx.developer_key_path)?;
-    dev_key_file.write_all(&keypair.to_bytes())?;
+    dev_key_file.write_all(
+        keypair_bytes
+            .to_pkcs8_pem(base64ct::LineEnding::default())
+            .with_kind(crate::ErrorKind::Pem)?
+            .as_bytes(),
+    )?;
     dev_key_file.sync_all()?;
 }
 Ok(())


@@ -6,7 +6,7 @@ use rpc_toolkit::yajrc::RpcError;
 use crate::context::DiagnosticContext;
 use crate::disk::repair;
-use crate::logs::{display_logs, fetch_logs, LogResponse, LogSource};
+use crate::logs::{fetch_logs, LogResponse, LogSource};
 use crate::shutdown::Shutdown;
 use crate::util::display_none;
 use crate::Error;
@@ -23,19 +23,13 @@ pub fn error(#[context] ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error>
     Ok(ctx.error.clone())
 }
-#[command(display(display_logs))]
+#[command(rpc_only)]
 pub async fn logs(
     #[arg] limit: Option<usize>,
     #[arg] cursor: Option<String>,
-    #[arg] before_flag: Option<bool>,
+    #[arg] before: bool,
 ) -> Result<LogResponse, Error> {
-    Ok(fetch_logs(
-        LogSource::Service(SYSTEMD_UNIT),
-        limit,
-        cursor,
-        before_flag.unwrap_or(false),
-    )
-    .await?)
+    Ok(fetch_logs(LogSource::Service(SYSTEMD_UNIT), limit, cursor, before).await?)
 }
 #[command(display(display_none))]


@@ -7,7 +7,7 @@ use futures::FutureExt;
 use tokio::process::Command;
 use tracing::instrument;
-use crate::{Error, ResultExt};
+use crate::Error;
 #[derive(Debug, Clone, Copy)]
 #[must_use]


@@ -11,7 +11,7 @@ use crate::disk::mount::filesystem::block_dev::mount;
 use crate::disk::mount::filesystem::ReadWrite;
 use crate::disk::mount::util::unmount;
 use crate::util::Invoke;
-use crate::{Error, ResultExt};
+use crate::{Error, ErrorKind, ResultExt};
 pub const PASSWORD_PATH: &'static str = "/etc/embassy/password";
 pub const DEFAULT_PASSWORD: &'static str = "password";
@@ -183,6 +183,7 @@ pub async fn unmount_all_fs<P: AsRef<Path>>(guid: &str, datadir: P) -> Result<()
 #[instrument(skip(datadir))]
 pub async fn export<P: AsRef<Path>>(guid: &str, datadir: P) -> Result<(), Error> {
+    Command::new("sync").invoke(ErrorKind::Filesystem).await?;
     unmount_all_fs(guid, datadir).await?;
     Command::new("vgchange")
         .arg("-an")

@@ -1,7 +1,7 @@
 use clap::ArgMatches;
 use rpc_toolkit::command;
-use self::util::DiskListResponse;
+use crate::disk::util::DiskInfo;
 use crate::util::display_none;
 use crate::util::serde::{display_serializable, IoFormat};
 use crate::Error;
@@ -9,7 +9,6 @@ use crate::Error;
 pub mod fsck;
 pub mod main;
 pub mod mount;
-pub mod quirks;
 pub mod util;
 pub const BOOT_RW_PATH: &str = "/media/boot-rw";
@@ -20,7 +19,7 @@ pub fn disk() -> Result<(), Error> {
     Ok(())
 }
-fn display_disk_info(info: DiskListResponse, matches: &ArgMatches<'_>) {
+fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
     use prettytable::*;
     if matches.is_present("format") {
@@ -35,7 +34,7 @@ fn display_disk_info(info: DiskListResponse, matches: &ArgMatches<'_>) {
         "USED",
         "EMBASSY OS VERSION"
     ]);
-    for disk in info.disks {
+    for disk in info {
         let row = row![
             disk.logicalname.display(),
             "N/A",
@@ -71,7 +70,7 @@ fn display_disk_info(info: DiskListResponse, matches: &ArgMatches<'_>) {
             table.add_row(row);
         }
     }
-    table.print_tty(false);
+    table.print_tty(false).unwrap();
 }
 #[command(display(display_disk_info))]
@@ -79,7 +78,7 @@ pub async fn list(
     #[allow(unused_variables)]
     #[arg]
     format: Option<IoFormat>,
-) -> Result<DiskListResponse, Error> {
+) -> Result<Vec<DiskInfo>, Error> {
     crate::disk::util::list().await
 }


@@ -1,6 +1,7 @@
 use std::path::{Path, PathBuf};
 use color_eyre::eyre::eyre;
+use helpers::AtomicFile;
 use tokio::io::AsyncWriteExt;
 use tracing::instrument;
@@ -14,9 +15,9 @@ use crate::disk::util::EmbassyOsRecoveryInfo;
 use crate::middleware::encrypt::{decrypt_slice, encrypt_slice};
 use crate::s9pk::manifest::PackageId;
 use crate::util::serde::IoFormat;
-use crate::util::{AtomicFile, FileLock};
+use crate::util::FileLock;
 use crate::volume::BACKUP_DIR;
-use crate::{Error, ResultExt};
+use crate::{Error, ErrorKind, ResultExt};
 pub struct BackupMountGuard<G: GenericMountGuard> {
     backup_disk_mount_guard: Option<G>,
@@ -162,16 +163,20 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
     pub async fn save(&self) -> Result<(), Error> {
         let metadata_path = self.as_ref().join("metadata.cbor");
         let backup_disk_path = self.backup_disk_path();
-        let mut file = AtomicFile::new(&metadata_path).await?;
+        let mut file = AtomicFile::new(&metadata_path, None::<PathBuf>)
+            .await
+            .with_kind(ErrorKind::Filesystem)?;
         file.write_all(&IoFormat::Cbor.to_vec(&self.metadata)?)
             .await?;
-        file.save().await?;
+        file.save().await.with_kind(ErrorKind::Filesystem)?;
         let unencrypted_metadata_path =
             backup_disk_path.join("EmbassyBackups/unencrypted-metadata.cbor");
-        let mut file = AtomicFile::new(&unencrypted_metadata_path).await?;
+        let mut file = AtomicFile::new(&unencrypted_metadata_path, None::<PathBuf>)
+            .await
+            .with_kind(ErrorKind::Filesystem)?;
         file.write_all(&IoFormat::Cbor.to_vec(&self.unencrypted_metadata)?)
             .await?;
-        file.save().await?;
+        file.save().await.with_kind(ErrorKind::Filesystem)?;
         Ok(())
     }


@@ -3,7 +3,7 @@ use std::path::Path;
 use async_trait::async_trait;
 use digest::generic_array::GenericArray;
-use digest::Digest;
+use digest::{Digest, OutputSizeUser};
 use serde::{Deserialize, Serialize};
 use sha2::Sha256;
@@ -45,7 +45,9 @@ impl<LogicalName: AsRef<Path> + Send + Sync> FileSystem for BlockDev<LogicalName
     ) -> Result<(), Error> {
         mount(self.logicalname.as_ref(), mountpoint, mount_type).await
     }
-    async fn source_hash(&self) -> Result<GenericArray<u8, <Sha256 as Digest>::OutputSize>, Error> {
+    async fn source_hash(
+        &self,
+    ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
         let mut sha = Sha256::new();
         sha.update("BlockDev");
         sha.update(


@@ -4,7 +4,7 @@ use std::path::{Path, PathBuf};
 use async_trait::async_trait;
 use digest::generic_array::GenericArray;
-use digest::Digest;
+use digest::{Digest, OutputSizeUser};
 use serde::{Deserialize, Serialize};
 use sha2::Sha256;
 use tokio::process::Command;
@@ -18,7 +18,7 @@ use crate::Error;
 async fn resolve_hostname(hostname: &str) -> Result<IpAddr, Error> {
     #[cfg(feature = "avahi")]
     if hostname.ends_with(".local") {
-        return Ok(crate::net::mdns::resolve_mdns(hostname).await?);
+        return Ok(IpAddr::V4(crate::net::mdns::resolve_mdns(hostname).await?));
     }
     Ok(String::from_utf8(
         Command::new("nmblookup")
@@ -93,7 +93,9 @@ impl FileSystem for Cifs {
     )
     .await
     }
-    async fn source_hash(&self) -> Result<GenericArray<u8, <Sha256 as Digest>::OutputSize>, Error> {
+    async fn source_hash(
+        &self,
+    ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
         let mut sha = Sha256::new();
         sha.update("Cifs");
         sha.update(self.hostname.as_bytes());


@@ -4,7 +4,7 @@ use std::path::Path;
 use async_trait::async_trait;
 use color_eyre::eyre::eyre;
 use digest::generic_array::GenericArray;
-use digest::Digest;
+use digest::{Digest, OutputSizeUser};
 use sha2::Sha256;
 use tokio::io::{AsyncReadExt, AsyncWriteExt};
@@ -63,7 +63,9 @@ impl<EncryptedDir: AsRef<Path> + Send + Sync, Key: AsRef<str> + Send + Sync> Fil
     ) -> Result<(), Error> {
         mount_ecryptfs(self.encrypted_dir.as_ref(), mountpoint, self.key.as_ref()).await
     }
-    async fn source_hash(&self) -> Result<GenericArray<u8, <Sha256 as Digest>::OutputSize>, Error> {
+    async fn source_hash(
+        &self,
+    ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
         let mut sha = Sha256::new();
         sha.update("EcryptFS");
         sha.update(


@@ -2,7 +2,7 @@ use std::path::Path;
 use async_trait::async_trait;
 use digest::generic_array::GenericArray;
-use digest::Digest;
+use digest::{Digest, OutputSizeUser};
 use sha2::Sha256;
 use super::{FileSystem, MountType, ReadOnly};
@@ -41,7 +41,9 @@ impl<S: AsRef<str> + Send + Sync> FileSystem for Label<S> {
     ) -> Result<(), Error> {
         mount_label(self.label.as_ref(), mountpoint, mount_type).await
     }
-    async fn source_hash(&self) -> Result<GenericArray<u8, <Sha256 as Digest>::OutputSize>, Error> {
+    async fn source_hash(
+        &self,
+    ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
         let mut sha = Sha256::new();
         sha.update("Label");
         sha.update(self.label.as_ref().as_bytes());


@@ -2,7 +2,7 @@ use std::path::Path;
 use async_trait::async_trait;
 use digest::generic_array::GenericArray;
-use digest::Digest;
+use digest::OutputSizeUser;
 use sha2::Sha256;
 use crate::Error;
@@ -27,5 +27,7 @@ pub trait FileSystem {
         mountpoint: P,
         mount_type: MountType,
     ) -> Result<(), Error>;
-    async fn source_hash(&self) -> Result<GenericArray<u8, <Sha256 as Digest>::OutputSize>, Error>;
+    async fn source_hash(
+        &self,
+    ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error>;
 }


@@ -48,13 +48,15 @@ pub async fn unmount<P: AsRef<Path>>(mountpoint: P) -> Result<(), Error> {
         .arg(mountpoint.as_ref())
         .invoke(crate::ErrorKind::Filesystem)
         .await?;
-    tokio::fs::remove_dir_all(mountpoint.as_ref())
-        .await
-        .with_ctx(|_| {
-            (
-                crate::ErrorKind::Filesystem,
-                format!("rm {}", mountpoint.as_ref().display()),
-            )
-        })?;
+    match tokio::fs::remove_dir(mountpoint.as_ref()).await {
+        Err(e) if e.raw_os_error() == Some(39) => Ok(()), // directory not empty
+        a => a,
+    }
+    .with_ctx(|_| {
+        (
+            crate::ErrorKind::Filesystem,
+            format!("rm {}", mountpoint.as_ref().display()),
+        )
+    })?;
     Ok(())
 }


@@ -1,170 +0,0 @@
use std::collections::BTreeSet;
use std::num::ParseIntError;
use std::path::Path;
use color_eyre::eyre::eyre;
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use super::BOOT_RW_PATH;
use crate::util::AtomicFile;
use crate::Error;
pub const QUIRK_PATH: &'static str = "/sys/module/usb_storage/parameters/quirks";
pub const WHITELIST: [(VendorId, ProductId); 5] = [
(VendorId(0x1d6b), ProductId(0x0002)), // root hub usb2
(VendorId(0x1d6b), ProductId(0x0003)), // root hub usb3
(VendorId(0x2109), ProductId(0x3431)),
(VendorId(0x1058), ProductId(0x262f)), // western digital black HDD
(VendorId(0x04e8), ProductId(0x4001)), // Samsung T7
];
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
pub struct VendorId(u16);
impl std::str::FromStr for VendorId {
type Err = ParseIntError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
u16::from_str_radix(s.trim(), 16).map(VendorId)
}
}
impl std::fmt::Display for VendorId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:04x}", self.0)
}
}
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
pub struct ProductId(u16);
impl std::str::FromStr for ProductId {
type Err = ParseIntError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
u16::from_str_radix(s.trim(), 16).map(ProductId)
}
}
impl std::fmt::Display for ProductId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{:04x}", self.0)
}
}
#[derive(Clone, Debug)]
pub struct Quirks(BTreeSet<(VendorId, ProductId)>);
impl Quirks {
pub fn add(&mut self, vendor: VendorId, product: ProductId) {
self.0.insert((vendor, product));
}
pub fn remove(&mut self, vendor: VendorId, product: ProductId) {
self.0.remove(&(vendor, product));
}
pub fn contains(&self, vendor: VendorId, product: ProductId) -> bool {
self.0.contains(&(vendor, product))
}
}
impl std::fmt::Display for Quirks {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut comma = false;
for (vendor, product) in &self.0 {
if comma {
write!(f, ",")?;
} else {
comma = true;
}
write!(f, "{}:{}:u", vendor, product)?;
}
Ok(())
}
}
impl std::str::FromStr for Quirks {
type Err = Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let s = s.trim();
let mut quirks = BTreeSet::new();
for item in s.split(",") {
if let [vendor, product, "u"] = item.splitn(3, ":").collect::<Vec<_>>().as_slice() {
quirks.insert((vendor.parse()?, product.parse()?));
} else {
return Err(Error::new(
eyre!("Invalid quirk: `{}`", item),
crate::ErrorKind::DiskManagement,
));
}
}
Ok(Quirks(quirks))
}
}
#[instrument]
pub async fn update_quirks(quirks: &mut Quirks) -> Result<Vec<String>, Error> {
let mut usb_devices = tokio::fs::read_dir("/sys/bus/usb/devices/").await?;
let mut to_reconnect = Vec::new();
while let Some(usb_device) = usb_devices.next_entry().await? {
if tokio::fs::metadata(usb_device.path().join("idVendor"))
.await
.is_err()
{
continue;
}
let vendor = tokio::fs::read_to_string(usb_device.path().join("idVendor"))
.await?
.parse()?;
let product = tokio::fs::read_to_string(usb_device.path().join("idProduct"))
.await?
.parse()?;
if WHITELIST.contains(&(vendor, product)) {
quirks.remove(vendor, product);
continue;
}
if quirks.contains(vendor, product) {
continue;
}
quirks.add(vendor, product);
{
// write quirks to sysfs
let mut quirk_file = tokio::fs::File::create(QUIRK_PATH).await?;
quirk_file.write_all(quirks.to_string().as_bytes()).await?;
quirk_file.sync_all().await?;
drop(quirk_file);
}
disconnect_usb(usb_device.path()).await?;
let (vendor_name, product_name) = tokio::try_join!(
tokio::fs::read_to_string(usb_device.path().join("manufacturer")),
tokio::fs::read_to_string(usb_device.path().join("product")),
)?;
to_reconnect.push(format!("{} {}", vendor_name, product_name));
}
Ok(to_reconnect)
}
#[instrument(skip(usb_device_path))]
pub async fn disconnect_usb(usb_device_path: impl AsRef<Path>) -> Result<(), Error> {
let authorized_path = usb_device_path.as_ref().join("bConfigurationValue");
let mut authorized_file = tokio::fs::File::create(&authorized_path).await?;
authorized_file.write_all(b"0").await?;
authorized_file.sync_all().await?;
drop(authorized_file);
Ok(())
}
#[instrument]
pub async fn fetch_quirks() -> Result<Quirks, Error> {
Ok(tokio::fs::read_to_string(QUIRK_PATH).await?.parse()?)
}
#[instrument]
pub async fn save_quirks(quirks: &Quirks) -> Result<(), Error> {
let orig_path = Path::new(BOOT_RW_PATH).join("cmdline.txt.orig");
let target_path = Path::new(BOOT_RW_PATH).join("cmdline.txt");
if tokio::fs::metadata(&orig_path).await.is_err() {
tokio::fs::copy(&target_path, &orig_path).await?;
}
let cmdline = tokio::fs::read_to_string(&orig_path).await?;
let mut target = AtomicFile::new(&target_path).await?;
target
.write_all(format!("usb-storage.quirks={} {}", quirks, cmdline).as_bytes())
.await?;
target.save().await?;
Ok(())
}
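`save_quirks` above writes the new `cmdline.txt` through an `AtomicFile` so a crash mid-write cannot leave a half-written boot file. The real `AtomicFile` type lives elsewhere in this repo; this is a hedged, synchronous sketch of the write-temp-then-rename pattern it presumably wraps (the `.tmp` sibling path is an assumption for illustration):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write `contents` to a temporary sibling file, fsync it, then rename it
// over `target`. rename(2) is atomic on POSIX filesystems, so readers see
// either the old file or the complete new one, never a partial write.
fn atomic_write(target: &Path, contents: &[u8]) -> std::io::Result<()> {
    let tmp = target.with_extension("tmp");
    {
        let mut f = fs::File::create(&tmp)?;
        f.write_all(contents)?;
        f.sync_all()?; // flush to disk before the rename makes it visible
    }
    fs::rename(&tmp, target)
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("cmdline.txt");
    atomic_write(&target, b"usb-storage.quirks=0bc2:2322:u console=tty1")?;
    assert_eq!(
        fs::read_to_string(&target)?,
        "usb-storage.quirks=0bc2:2322:u console=tty1"
    );
    Ok(())
}
```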


@@ -19,19 +19,11 @@ use tracing::instrument;
 use super::mount::filesystem::block_dev::BlockDev;
 use super::mount::filesystem::ReadOnly;
 use super::mount::guard::TmpMountGuard;
-use super::quirks::{fetch_quirks, save_quirks, update_quirks};
 use crate::util::io::from_yaml_async_reader;
 use crate::util::serde::IoFormat;
 use crate::util::{Invoke, Version};
 use crate::{Error, ResultExt as _};
-#[derive(Clone, Debug, Deserialize, Serialize)]
-#[serde(rename_all = "kebab-case")]
-pub struct DiskListResponse {
-    pub disks: Vec<DiskInfo>,
-    pub reconnect: Vec<String>,
-}
 #[derive(Clone, Debug, Deserialize, Serialize)]
 #[serde(rename_all = "kebab-case")]
 pub struct DiskInfo {
@@ -240,10 +232,7 @@ pub async fn recovery_info(
 }
 #[instrument]
-pub async fn list() -> Result<DiskListResponse, Error> {
-    let mut quirks = fetch_quirks().await?;
-    let reconnect = update_quirks(&mut quirks).await?;
-    save_quirks(&mut quirks).await?;
+pub async fn list() -> Result<Vec<DiskInfo>, Error> {
     let disk_guids = pvscan().await?;
     let disks = tokio_stream::wrappers::ReadDirStream::new(
         tokio::fs::read_dir(DISK_PATH)
@@ -374,10 +363,7 @@ pub async fn list() -> Result<DiskListResponse, Error> {
     })
 }
-    Ok(DiskListResponse {
-        disks: res,
-        reconnect,
-    })
+    Ok(res)
 }
 fn parse_pvscan_output(pvscan_output: &str) -> BTreeMap<PathBuf, Option<String>> {


@@ -1,6 +1,7 @@
 use std::fmt::Display;
 use color_eyre::eyre::eyre;
+use models::InvalidId;
 use patch_db::Revision;
 use rpc_toolkit::yajrc::RpcError;
@@ -30,7 +31,7 @@ pub enum ErrorKind {
     InvalidOnionAddress = 22,
     Pack = 23,
     ValidateS9pk = 24,
-    DiskCorrupted = 25,
+    DiskCorrupted = 25, // Remove
     Tor = 26,
     ConfigGen = 27,
     ParseNumber = 28,
@@ -64,6 +65,12 @@ pub enum ErrorKind {
     InvalidBackupTargetId = 56,
     ProductKeyMismatch = 57,
     LanPortConflict = 58,
+    Javascript = 59,
+    Pem = 60,
+    TLSInit = 61,
+    HttpRange = 62,
+    ContentLength = 63,
+    BytesError = 64
 }
 impl ErrorKind {
     pub fn as_str(&self) -> &'static str {
@@ -126,7 +133,13 @@ impl ErrorKind {
             Incoherent => "Incoherent",
             InvalidBackupTargetId => "Invalid Backup Target ID",
             ProductKeyMismatch => "Incompatible Product Keys",
-            LanPortConflict => "Incompatible LAN port configuration",
+            LanPortConflict => "Incompatible LAN Port Configuration",
+            Javascript => "Javascript Engine Error",
+            Pem => "PEM Encoding Error",
+            TLSInit => "TLS Backend Initialize Error",
+            HttpRange => "No Support for Web Server HTTP Ranges",
+            ContentLength => "Request has no content length header",
+            BytesError => "Could not get the bytes for this request"
         }
     }
 }
@@ -142,6 +155,7 @@ pub struct Error {
     pub kind: ErrorKind,
     pub revision: Option<Revision>,
 }
 impl Display for Error {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
         write!(f, "{}: {}", self.kind.as_str(), self.source)
@@ -156,6 +170,11 @@ impl Error {
 }
 }
 }
+impl From<InvalidId> for Error {
+    fn from(err: InvalidId) -> Self {
+        Error::new(err, crate::error::ErrorKind::InvalidPackageId)
+    }
+}
 impl From<std::io::Error> for Error {
     fn from(e: std::io::Error) -> Self {
         Error::new(e, ErrorKind::Filesystem)
@@ -225,6 +244,8 @@ impl From<openssl::error::ErrorStack> for Error {
     fn from(e: openssl::error::ErrorStack) -> Self {
         Error::new(eyre!("OpenSSL ERROR:\n{}", e), ErrorKind::OpenSsl)
     }
 }
 impl From<Error> for RpcError {
     fn from(e: Error) -> Self {


@@ -1,34 +1,53 @@
-use digest::Digest;
-use tokio::fs::File;
-use tokio::io::AsyncWriteExt;
+use patch_db::DbHandle;
+use rand::{thread_rng, Rng};
 use tokio::process::Command;
 use tracing::instrument;
 use crate::util::Invoke;
-use crate::{Error, ErrorKind, ResultExt};
+use crate::{Error, ErrorKind};
+#[derive(Clone, serde::Deserialize, serde::Serialize, Debug)]
+pub struct Hostname(pub String);
-pub const PRODUCT_KEY_PATH: &'static str = "/embassy-os/product_key.txt";
+lazy_static::lazy_static! {
+    static ref ADJECTIVES: Vec<String> = include_str!("./assets/adjectives.txt").lines().map(|x| x.to_string()).collect();
+    static ref NOUNS: Vec<String> = include_str!("./assets/nouns.txt").lines().map(|x| x.to_string()).collect();
+}
-#[instrument]
-pub async fn get_hostname() -> Result<String, Error> {
-    Ok(derive_hostname(&get_id().await?))
-}
-pub fn derive_hostname(id: &str) -> String {
-    format!("embassy-{}", id)
-}
+impl AsRef<str> for Hostname {
+    fn as_ref(&self) -> &str {
+        &self.0
+    }
+}
+impl Hostname {
+    pub fn lan_address(&self) -> String {
+        format!("https://{}.local", self.0)
+    }
+}
+pub fn generate_hostname() -> Hostname {
+    let mut rng = thread_rng();
+    let adjective = &ADJECTIVES[rng.gen_range(0..ADJECTIVES.len())];
+    let noun = &NOUNS[rng.gen_range(0..NOUNS.len())];
+    Hostname(format!("embassy-{adjective}-{noun}"))
+}
+pub fn generate_id() -> String {
+    let id = uuid::Uuid::new_v4();
+    id.to_string()
+}
 #[instrument]
-pub async fn get_current_hostname() -> Result<String, Error> {
+pub async fn get_current_hostname() -> Result<Hostname, Error> {
     let out = Command::new("hostname")
         .invoke(ErrorKind::ParseSysInfo)
         .await?;
     let out_string = String::from_utf8(out)?;
-    Ok(out_string.trim().to_owned())
+    Ok(Hostname(out_string.trim().to_owned()))
 }
 #[instrument]
-pub async fn set_hostname(hostname: &str) -> Result<(), Error> {
+pub async fn set_hostname(hostname: &Hostname) -> Result<(), Error> {
+    let hostname: &String = &hostname.0;
     let _out = Command::new("hostnamectl")
         .arg("set-hostname")
         .arg(hostname)
@@ -37,38 +56,36 @@ pub async fn set_hostname(hostname: &str) -> Result<(), Error> {
     Ok(())
 }
-#[instrument]
-pub async fn get_product_key() -> Result<String, Error> {
-    let out = tokio::fs::read_to_string(PRODUCT_KEY_PATH)
-        .await
-        .with_ctx(|_| (crate::ErrorKind::Filesystem, PRODUCT_KEY_PATH))?;
-    Ok(out.trim().to_owned())
-}
+#[instrument(skip(handle))]
+pub async fn get_id<Db: DbHandle>(handle: &mut Db) -> Result<String, Error> {
+    let id = crate::db::DatabaseModel::new()
+        .server_info()
+        .id()
+        .get(handle, false)
+        .await?;
+    Ok(id.to_string())
+}
+pub async fn get_hostname<Db: DbHandle>(handle: &mut Db) -> Result<Hostname, Error> {
+    if let Ok(hostname) = crate::db::DatabaseModel::new()
+        .server_info()
+        .hostname()
+        .get(handle, false)
+        .await
+    {
+        if let Some(hostname) = hostname.to_owned() {
+            return Ok(Hostname(hostname));
+        }
+    }
+    let id = get_id(handle).await?;
+    if id.len() != 8 {
+        return Ok(generate_hostname());
+    }
+    return Ok(Hostname(format!("embassy-{}", id)));
+}
-#[instrument]
-pub async fn set_product_key(key: &str) -> Result<(), Error> {
-    let mut pkey_file = File::create(PRODUCT_KEY_PATH).await?;
-    pkey_file.write_all(key.as_bytes()).await?;
-    Ok(())
-}
-pub fn derive_id(key: &str) -> String {
-    let mut hasher = sha2::Sha256::new();
-    hasher.update(key.as_bytes());
-    let res = hasher.finalize();
-    hex::encode(&res[0..4])
-}
-#[instrument]
-pub async fn get_id() -> Result<String, Error> {
-    let key = get_product_key().await?;
-    Ok(derive_id(&key))
-}
-// cat /embassy-os/product_key.txt | shasum -a 256 | head -c 8 | awk '{print "embassy-"$1}' | xargs hostnamectl set-hostname && systemctl restart avahi-daemon
-#[instrument]
-pub async fn sync_hostname() -> Result<(), Error> {
-    set_hostname(&format!("embassy-{}", get_id().await?)).await?;
+#[instrument(skip(handle))]
+pub async fn sync_hostname<Db: DbHandle>(handle: &mut Db) -> Result<(), Error> {
+    set_hostname(&get_hostname(handle).await?).await?;
     Command::new("systemctl")
         .arg("restart")
         .arg("avahi-daemon")
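The `embassy-<adjective>-<noun>` naming scheme introduced in the hostname diff above can be sketched standalone. This is a hedged illustration, not the crate's code: the real implementation draws from `./assets/adjectives.txt` and `./assets/nouns.txt` via `lazy_static` and uses `rand::thread_rng`; here tiny inline word lists and a clock-derived index stand in so the sketch needs only the standard library.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Stand-in word lists (the real ones ship as asset files).
const ADJECTIVES: &[&str] = &["brave", "calm", "swift"];
const NOUNS: &[&str] = &["otter", "falcon", "maple"];

fn generate_hostname() -> String {
    // std-only stand-in for a real RNG: derive an index from the clock.
    let seed = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .subsec_nanos() as usize;
    let adjective = ADJECTIVES[seed % ADJECTIVES.len()];
    let noun = NOUNS[seed / ADJECTIVES.len() % NOUNS.len()];
    format!("embassy-{adjective}-{noun}")
}

fn main() {
    let h = generate_hostname();
    // Always three hyphen-separated parts, e.g. "embassy-calm-otter".
    assert!(h.starts_with("embassy-"));
    assert_eq!(h.split('-').count(), 3);
}
```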


@@ -1,142 +1,10 @@
-use std::borrow::{Borrow, Cow};
 use std::fmt::Debug;
 use std::str::FromStr;
-use serde::{Deserialize, Deserializer, Serialize, Serializer};
+pub use models::{Id, IdUnchecked, InvalidId, SYSTEM_ID};
+use serde::{Deserialize, Deserializer, Serialize};
 use crate::util::Version;
-use crate::Error;
-pub const SYSTEM_ID: Id<&'static str> = Id("x_system");
-#[derive(Debug, thiserror::Error)]
-#[error("Invalid ID")]
-pub struct InvalidId;
-impl From<InvalidId> for Error {
-    fn from(err: InvalidId) -> Self {
-        Error::new(err, crate::error::ErrorKind::InvalidPackageId)
-    }
-}
-#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
-pub struct IdUnchecked<S: AsRef<str>>(pub S);
-impl<'de> Deserialize<'de> for IdUnchecked<Cow<'de, str>> {
-    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
-    where
-        D: Deserializer<'de>,
-    {
-        struct Visitor;
-        impl<'de> serde::de::Visitor<'de> for Visitor {
-            type Value = IdUnchecked<Cow<'de, str>>;
-            fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
-                write!(formatter, "a valid ID")
-            }
-            fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>
-            where
-                E: serde::de::Error,
-            {
-                Ok(IdUnchecked(Cow::Owned(v.to_owned())))
-            }
-            fn visit_string<E>(self, v: String) -> Result<Self::Value, E>
-            where
-                E: serde::de::Error,
-            {
-                Ok(IdUnchecked(Cow::Owned(v)))
-            }
-            fn visit_borrowed_str<E>(self, v: &'de str) -> Result<Self::Value, E>
-            where
-                E: serde::de::Error,
-            {
-                Ok(IdUnchecked(Cow::Borrowed(v)))
-            }
-        }
-        deserializer.deserialize_any(Visitor)
-    }
-}
-impl<'de> Deserialize<'de> for IdUnchecked<String> {
-    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
-    where
-        D: Deserializer<'de>,
-    {
-        Ok(IdUnchecked(String::deserialize(deserializer)?))
-    }
-}
-impl<'de> Deserialize<'de> for IdUnchecked<&'de str> {
-    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
-    where
-        D: Deserializer<'de>,
-    {
-        Ok(IdUnchecked(<&'de str>::deserialize(deserializer)?))
-    }
-}
-#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
-pub struct Id<S: AsRef<str> = String>(S);
-impl<S: AsRef<str>> Id<S> {
-    pub fn try_from(value: S) -> Result<Self, InvalidId> {
-        if value
-            .as_ref()
-            .chars()
-            .all(|c| c.is_ascii_lowercase() || c == '-')
-        {
-            Ok(Id(value))
-        } else {
-            Err(InvalidId)
-        }
-    }
-}
-impl<'a> Id<&'a str> {
-    pub fn owned(&self) -> Id {
-        Id(self.0.to_owned())
-    }
-}
-impl From<Id> for String {
-    fn from(value: Id) -> Self {
-        value.0
-    }
-}
-impl<S: AsRef<str>> std::ops::Deref for Id<S> {
-    type Target = S;
-    fn deref(&self) -> &Self::Target {
-        &self.0
-    }
-}
-impl<S: AsRef<str>> std::fmt::Display for Id<S> {
-    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(f, "{}", self.0.as_ref())
-    }
-}
-impl<S: AsRef<str>> AsRef<str> for Id<S> {
-    fn as_ref(&self) -> &str {
-        self.0.as_ref()
-    }
-}
-impl<S: AsRef<str>> Borrow<str> for Id<S> {
-    fn borrow(&self) -> &str {
-        self.0.as_ref()
-    }
-}
-impl<'de, S> Deserialize<'de> for Id<S>
-where
-    S: AsRef<str>,
-    IdUnchecked<S>: Deserialize<'de>,
-{
-    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
-    where
-        D: Deserializer<'de>,
-    {
-        let unchecked: IdUnchecked<S> = Deserialize::deserialize(deserializer)?;
-        Id::try_from(unchecked.0).map_err(serde::de::Error::custom)
-    }
-}
-impl<S: AsRef<str>> Serialize for Id<S> {
-    fn serialize<Ser>(&self, serializer: Ser) -> Result<Ser::Ok, Ser::Error>
-    where
-        Ser: Serializer,
-    {
-        serializer.serialize_str(self.as_ref())
-    }
-}
 #[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize)]
 pub struct ImageId<S: AsRef<str> = String>(Id<S>);
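The validation rule that `Id::try_from` enforces in the diff above (and that now lives in the `models` crate) is small enough to state as a predicate. This sketch reproduces that check only; note two properties of the original `all`-based check that the sketch keeps faithfully: digits are rejected, and the empty string passes vacuously.

```rust
// Mirror of the `Id::try_from` character check shown above:
// ASCII lowercase letters and hyphens only.
fn is_valid_id(s: &str) -> bool {
    s.chars().all(|c| c.is_ascii_lowercase() || c == '-')
}

fn main() {
    assert!(is_valid_id("bitcoind"));
    assert!(is_valid_id("x-system"));
    assert!(!is_valid_id("Bitcoind")); // uppercase rejected
    assert!(!is_valid_id("btc2")); // digits rejected too
    assert!(is_valid_id("")); // vacuously true, as in the original `all` check
}
```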


@@ -1,14 +1,24 @@
+use std::collections::HashMap;
+use std::path::Path;
+use std::process::Stdio;
 use std::time::Duration;
+use color_eyre::eyre::eyre;
+use helpers::NonDetachingJoinHandle;
+use patch_db::{DbHandle, LockReceipt, LockType};
 use tokio::process::Command;
+use crate::config::util::MergeWith;
 use crate::context::rpc::RpcContextConfig;
 use crate::db::model::ServerStatus;
 use crate::install::PKG_DOCKER_DIR;
+use crate::sound::CIRCLE_OF_5THS_SHORT;
 use crate::util::Invoke;
+use crate::version::VersionT;
 use crate::Error;
 pub const SYSTEM_REBUILD_PATH: &str = "/embassy-os/system-rebuild";
+pub const STANDBY_MODE_PATH: &str = "/embassy-os/standby";
 pub async fn check_time_is_synchronized() -> Result<bool, Error> {
     Ok(String::from_utf8(
@@ -23,10 +33,213 @@ pub async fn check_time_is_synchronized() -> Result<bool, Error> {
         == "NTPSynchronized=yes")
 }
-pub async fn init(cfg: &RpcContextConfig, product_key: &str) -> Result<(), Error> {
-    let should_rebuild = tokio::fs::metadata(SYSTEM_REBUILD_PATH).await.is_ok();
+pub struct InitReceipts {
+    pub server_version: LockReceipt<crate::util::Version, ()>,
+    pub version_range: LockReceipt<emver::VersionRange, ()>,
+    pub last_wifi_region: LockReceipt<Option<isocountry::CountryCode>, ()>,
+    pub status_info: LockReceipt<ServerStatus, ()>,
+}
+impl InitReceipts {
+    pub async fn new(db: &mut impl DbHandle) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let server_version = crate::db::DatabaseModel::new()
+            .server_info()
+            .version()
+            .make_locker(LockType::Write)
+            .add_to_keys(&mut locks);
+        let version_range = crate::db::DatabaseModel::new()
+            .server_info()
+            .eos_version_compat()
+            .make_locker(LockType::Write)
+            .add_to_keys(&mut locks);
+        let last_wifi_region = crate::db::DatabaseModel::new()
+            .server_info()
+            .last_wifi_region()
+            .make_locker(LockType::Write)
+            .add_to_keys(&mut locks);
+        let status_info = crate::db::DatabaseModel::new()
+            .server_info()
+            .status_info()
+            .into_model()
+            .make_locker(LockType::Write)
+            .add_to_keys(&mut locks);
+        let skeleton_key = db.lock_all(locks).await?;
+        Ok(Self {
+            server_version: server_version.verify(&skeleton_key)?,
+            version_range: version_range.verify(&skeleton_key)?,
+            status_info: status_info.verify(&skeleton_key)?,
+            last_wifi_region: last_wifi_region.verify(&skeleton_key)?,
+        })
+    }
+}
+pub async fn pgloader(
+    old_db_path: impl AsRef<Path>,
+    batch_rows: usize,
+    prefetch_rows: usize,
+) -> Result<(), Error> {
+    tokio::fs::write(
+        "/etc/embassy/migrate.load",
+        format!(
+            include_str!("migrate.load"),
+            sqlite_path = old_db_path.as_ref().display(),
+            batch_rows = batch_rows,
+            prefetch_rows = prefetch_rows
+        ),
+    )
+    .await?;
+    match tokio::fs::remove_dir_all("/tmp/pgloader").await {
+        Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(()),
+        a => a,
+    }?;
+    tracing::info!("Running pgloader");
+    let out = Command::new("pgloader")
+        .arg("-v")
+        .arg("/etc/embassy/migrate.load")
+        .stdout(Stdio::piped())
+        .stderr(Stdio::piped())
+        .output()
+        .await?;
+    let stdout = String::from_utf8(out.stdout)?;
+    for line in stdout.lines() {
+        tracing::debug!("pgloader: {}", line);
+    }
+    let stderr = String::from_utf8(out.stderr)?;
+    for line in stderr.lines() {
+        tracing::debug!("pgloader err: {}", line);
+    }
+    tracing::debug!("pgloader exited with code {:?}", out.status);
+    if let Some(err) = stdout.lines().chain(stderr.lines()).find_map(|l| {
+        if l.split_ascii_whitespace()
+            .any(|word| word == "ERROR" || word == "FATAL")
+        {
+            Some(l)
+        } else {
+            None
+        }
+    }) {
+        return Err(Error::new(
+            eyre!("pgloader error: {}", err),
+            crate::ErrorKind::Database,
+        ));
+    }
+    tokio::fs::rename(
+        old_db_path.as_ref(),
+        old_db_path.as_ref().with_extension("bak"),
+    )
+    .await?;
+    Ok(())
+}
+// must be idempotent
+pub async fn init_postgres(datadir: impl AsRef<Path>) -> Result<(), Error> {
+    let db_dir = datadir.as_ref().join("main/postgresql");
+    let is_mountpoint = || async {
+        Ok::<_, Error>(
+            tokio::process::Command::new("mountpoint")
+                .arg("/var/lib/postgresql")
+                .stdout(std::process::Stdio::null())
+                .stderr(std::process::Stdio::null())
+                .status()
+                .await?
+                .success(),
+        )
+    };
+    let exists = tokio::fs::metadata(&db_dir).await.is_ok();
+    if !exists {
+        Command::new("cp")
+            .arg("-ra")
+            .arg("/var/lib/postgresql")
+            .arg(&db_dir)
+            .invoke(crate::ErrorKind::Filesystem)
+            .await?;
+    }
+    if !is_mountpoint().await? {
+        crate::disk::mount::util::bind(&db_dir, "/var/lib/postgresql", false).await?;
+    }
+    Command::new("chown")
+        .arg("-R")
+        .arg("postgres")
+        .arg("/var/lib/postgresql")
+        .invoke(crate::ErrorKind::Database)
+        .await?;
+    Command::new("systemctl")
+        .arg("start")
+        .arg("postgresql")
+        .invoke(crate::ErrorKind::Database)
+        .await?;
+    if !exists {
+        Command::new("sudo")
+            .arg("-u")
+            .arg("postgres")
+            .arg("createuser")
+            .arg("root")
+            .invoke(crate::ErrorKind::Database)
+            .await?;
+        Command::new("sudo")
+            .arg("-u")
+            .arg("postgres")
+            .arg("createdb")
+            .arg("secrets")
+            .arg("-O")
+            .arg("root")
+            .invoke(crate::ErrorKind::Database)
+            .await?;
+    }
+    Ok(())
+}
+pub struct InitResult {
+    pub db: patch_db::PatchDb,
+}
+pub async fn init(cfg: &RpcContextConfig) -> Result<InitResult, Error> {
     let secret_store = cfg.secret_store().await?;
-    let log_dir = cfg.datadir().join("main").join("logs");
+    let db = cfg.db(&secret_store).await?;
+    let mut handle = db.handle();
+    crate::db::DatabaseModel::new()
+        .server_info()
+        .lock(&mut handle, LockType::Write)
+        .await?;
+    let defaults: serde_json::Value =
+        serde_json::from_str(include_str!("../../frontend/patchdb-ui-seed.json")).map_err(|x| {
+            Error::new(
+                eyre!("Deserialization error {:?}", x),
+                crate::ErrorKind::Deserialization,
+            )
+        })?;
+    let mut ui = crate::db::DatabaseModel::new()
+        .ui()
+        .get(&mut handle, false)
+        .await?
+        .clone();
+    ui.merge_with(&defaults);
+    crate::db::DatabaseModel::new()
+        .ui()
+        .put(&mut handle, &ui)
+        .await?;
+    let receipts = InitReceipts::new(&mut handle).await?;
+    let should_rebuild = tokio::fs::metadata(SYSTEM_REBUILD_PATH).await.is_ok()
+        || &*receipts.server_version.get(&mut handle).await?
+            < &crate::version::Current::new().semver();
+    let song = if should_rebuild {
+        Some(NonDetachingJoinHandle::from(tokio::spawn(async {
+            loop {
+                CIRCLE_OF_5THS_SHORT.play().await.unwrap();
+                tokio::time::sleep(Duration::from_secs(10)).await;
+            }
+        })))
+    } else {
+        None
+    };
+    let log_dir = cfg.datadir().join("main/logs");
     if tokio::fs::metadata(&log_dir).await.is_err() {
         tokio::fs::create_dir_all(&log_dir).await?;
     }
@@ -48,7 +261,7 @@ pub async fn init(cfg: &RpcContextConfig, product_key: &str) -> Result<(), Error
         tokio::fs::remove_dir_all(&tmp_docker).await?;
     }
     Command::new("cp")
-        .arg("-r")
+        .arg("-ra")
         .arg("/var/lib/docker")
         .arg(&tmp_docker)
         .invoke(crate::ErrorKind::Filesystem)
@@ -73,6 +286,28 @@ pub async fn init(cfg: &RpcContextConfig, product_key: &str) -> Result<(), Error
     tracing::info!("Mounted Docker Data");
     if should_rebuild || !tmp_docker_exists {
+        tracing::info!("Creating Docker Network");
+        bollard::Docker::connect_with_unix_defaults()?
+            .create_network(bollard::network::CreateNetworkOptions {
+                name: "start9",
+                driver: "bridge",
+                ipam: bollard::models::Ipam {
+                    config: Some(vec![bollard::models::IpamConfig {
+                        subnet: Some("172.18.0.1/24".into()),
+                        ..Default::default()
+                    }]),
+                    ..Default::default()
+                },
+                options: {
+                    let mut m = HashMap::new();
+                    m.insert("com.docker.network.bridge.name", "br-start9");
+                    m
+                },
+                ..Default::default()
+            })
+            .await?;
+        tracing::info!("Created Docker Network");
         tracing::info!("Loading System Docker Images");
         crate::install::load_images("/var/lib/embassy/system-images").await?;
         tracing::info!("Loaded System Docker Images");
@@ -82,38 +317,38 @@ pub async fn init(cfg: &RpcContextConfig, product_key: &str) -> Result<(), Error
         tracing::info!("Loaded Package Docker Images");
     }
-    crate::ssh::sync_keys_from_db(&secret_store, "/root/.ssh/authorized_keys").await?;
-    tracing::info!("Synced SSH Keys");
-    let db = cfg.db(&secret_store, product_key).await?;
-    let mut handle = db.handle();
+    tracing::info!("Enabling Docker QEMU Emulation");
+    Command::new("docker")
+        .arg("run")
+        .arg("--privileged")
+        .arg("--rm")
+        .arg("start9/x_system/binfmt")
+        .arg("--install")
+        .arg("all")
+        .invoke(crate::ErrorKind::Docker)
+        .await?;
+    tracing::info!("Enabled Docker QEMU Emulation");
+    crate::ssh::sync_keys_from_db(&secret_store, "/home/start9/.ssh/authorized_keys").await?;
+    tracing::info!("Synced SSH Keys");
     crate::net::wifi::synchronize_wpa_supplicant_conf(
         &cfg.datadir().join("main"),
-        &*crate::db::DatabaseModel::new()
-            .server_info()
-            .last_wifi_region()
-            .get(&mut handle, false)
-            .await
-            .map_err(|_e| {
-                Error::new(
-                    color_eyre::eyre::eyre!("Could not find the last wifi region"),
-                    crate::ErrorKind::NotFound,
-                )
-            })?,
+        &receipts.last_wifi_region.get(&mut handle).await?,
     )
     .await?;
     tracing::info!("Synchronized wpa_supplicant.conf");
-    let mut info = crate::db::DatabaseModel::new()
-        .server_info()
-        .get_mut(&mut handle)
-        .await?;
-    info.status_info = ServerStatus {
-        backing_up: false,
-        updated: false,
-        update_progress: None,
-    };
-    info.save(&mut handle).await?;
+    receipts
+        .status_info
+        .set(
+            &mut handle,
+            ServerStatus {
+                updated: false,
+                update_progress: None,
+                backup_progress: None,
+            },
+        )
+        .await?;
     let mut warn_time_not_synced = true;
     for _ in 0..60 {
@@ -125,13 +360,23 @@ pub async fn init(cfg: &RpcContextConfig, product_key: &str) -> Result<(), Error
     }
     if warn_time_not_synced {
         tracing::warn!("Timed out waiting for system time to synchronize");
+    } else {
+        tracing::info!("Syncronized system clock");
     }
-    crate::version::init(&mut handle).await?;
+    crate::version::init(&mut handle, &receipts).await?;
     if should_rebuild {
-        tokio::fs::remove_file(SYSTEM_REBUILD_PATH).await?;
+        match tokio::fs::remove_file(SYSTEM_REBUILD_PATH).await {
+            Ok(()) => Ok(()),
+            Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(()),
+            Err(e) => Err(e),
+        }?;
     }
-    Ok(())
+    drop(song);
+    tracing::info!("System initialized.");
+    Ok(InitResult { db })
 }
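The pgloader wrapper in the init diff above decides success by scanning captured stdout/stderr for whole-token `ERROR` or `FATAL` words rather than trusting the exit code. That scan is self-contained enough to lift out; this is a standalone sketch of just that logic, with the tracing and process plumbing omitted.

```rust
// Return the first output line containing ERROR or FATAL as a whole
// whitespace-delimited token (so e.g. "ERRORS: 0" does not match).
fn find_error_line(output: &str) -> Option<&str> {
    output.lines().find(|l| {
        l.split_ascii_whitespace()
            .any(|word| word == "ERROR" || word == "FATAL")
    })
}

fn main() {
    let log = "2022-09-29 LOG Migrating table users\n2022-09-29 FATAL could not connect";
    assert_eq!(find_error_line(log), Some("2022-09-29 FATAL could not connect"));
    // Substrings inside larger tokens are ignored.
    assert_eq!(find_error_line("ERRORS: 0\nall good"), None);
}
```

Matching whole tokens is the design point: pgloader's summary lines can legitimately contain words like "errors", so a substring match would produce false failures.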


@@ -1,74 +1,96 @@
use std::collections::{BTreeMap, HashMap}; use std::collections::HashMap;
use bollard::image::ListImagesOptions; use bollard::image::ListImagesOptions;
use color_eyre::eyre::eyre; use patch_db::{DbHandle, LockReceipt, LockTargetId, LockType, PatchDbHandle, Verifier};
use patch_db::{DbHandle, LockType, PatchDbHandle}; use sqlx::{Executor, Postgres};
use sqlx::{Executor, Sqlite};
use tracing::instrument; use tracing::instrument;
use super::{PKG_ARCHIVE_DIR, PKG_DOCKER_DIR}; use super::{PKG_ARCHIVE_DIR, PKG_DOCKER_DIR};
use crate::config::{not_found, ConfigReceipts};
use crate::context::RpcContext; use crate::context::RpcContext;
use crate::db::model::{CurrentDependencyInfo, InstalledPackageDataEntry, PackageDataEntry}; use crate::db::model::{
use crate::dependencies::reconfigure_dependents_with_live_pointers; AllPackageData, CurrentDependencies, CurrentDependents, InstalledPackageDataEntry,
PackageDataEntry,
};
use crate::dependencies::{
reconfigure_dependents_with_live_pointers, DependencyErrors, TryHealReceipts,
};
use crate::error::ErrorCollection; use crate::error::ErrorCollection;
use crate::s9pk::manifest::{Manifest, PackageId}; use crate::s9pk::manifest::{Manifest, PackageId};
use crate::util::{Apply, Version}; use crate::util::{Apply, Version};
use crate::volume::{asset_dir, script_dir};
use crate::Error; use crate::Error;
#[instrument(skip(ctx, db, deps))] pub struct UpdateDependencyReceipts {
pub async fn update_dependency_errors_of_dependents< try_heal: TryHealReceipts,
'a, dependency_errors: LockReceipt<DependencyErrors, String>,
Db: DbHandle, manifest: LockReceipt<Manifest, String>,
I: IntoIterator<Item = &'a PackageId>, }
>( impl UpdateDependencyReceipts {
pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
let mut locks = Vec::new();
let setup = Self::setup(&mut locks);
Ok(setup(&db.lock_all(locks).await?)?)
}
pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
let dependency_errors = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.status().dependency_errors())
.make_locker(LockType::Write)
.add_to_keys(locks);
let manifest = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
.map(|x| x.manifest())
.make_locker(LockType::Write)
.add_to_keys(locks);
let try_heal = TryHealReceipts::setup(locks);
move |skeleton_key| {
Ok(Self {
dependency_errors: dependency_errors.verify(skeleton_key)?,
manifest: manifest.verify(skeleton_key)?,
try_heal: try_heal(skeleton_key)?,
})
}
}
}
#[instrument(skip(ctx, db, deps, receipts))]
pub async fn update_dependency_errors_of_dependents<'a, Db: DbHandle>(
ctx: &RpcContext, ctx: &RpcContext,
db: &mut Db, db: &mut Db,
id: &PackageId, id: &PackageId,
deps: I, deps: &CurrentDependents,
receipts: &UpdateDependencyReceipts,
) -> Result<(), Error> { ) -> Result<(), Error> {
for dep in deps { for dep in deps.0.keys() {
if let Some(man) = &*crate::db::DatabaseModel::new() if let Some(man) = receipts.manifest.get(db, dep).await? {
.package_data()
.idx_model(&dep)
.and_then(|m| m.installed())
.map::<_, Manifest>(|m| m.manifest())
.get(db, true)
.await?
{
if let Err(e) = if let Some(info) = man.dependencies.0.get(id) { if let Err(e) = if let Some(info) = man.dependencies.0.get(id) {
info.satisfied(ctx, db, id, None, dep).await? info.satisfied(ctx, db, id, None, dep, &receipts.try_heal)
.await?
} else { } else {
Ok(()) Ok(())
} { } {
let mut errs = crate::db::DatabaseModel::new() let mut errs = receipts
.package_data() .dependency_errors
.idx_model(&dep) .get(db, dep)
.expect(db)
.await? .await?
.installed() .ok_or_else(not_found)?;
.expect(db)
.await?
.status()
.dependency_errors()
.get_mut(db)
.await?;
errs.0.insert(id.clone(), e); errs.0.insert(id.clone(), e);
errs.save(db).await?; receipts.dependency_errors.set(db, errs, dep).await?
} else { } else {
let mut errs = crate::db::DatabaseModel::new() let mut errs = receipts
.package_data() .dependency_errors
.idx_model(&dep) .get(db, dep)
.expect(db)
.await? .await?
.installed() .ok_or_else(not_found)?;
.expect(db)
.await?
.status()
.dependency_errors()
.get_mut(db)
.await?;
errs.0.remove(id); errs.0.remove(id);
errs.save(db).await?; receipts.dependency_errors.set(db, errs, dep).await?
} }
} }
} }
@@ -97,10 +119,20 @@ pub async fn cleanup(ctx: &RpcContext, id: &PackageId, version: &Version) -> Res
.await .await
.apply(|res| errors.handle(res)); .apply(|res| errors.handle(res));
errors.extend( errors.extend(
futures::future::join_all(images.into_iter().flatten().map(|image| async { futures::future::join_all(
let image = image; // move into future images
ctx.docker.remove_image(&image.id, None, None).await .into_iter()
})) .flatten()
.flat_map(|image| image.repo_tags)
.filter(|tag| {
tag.starts_with(&format!("start9/{}/", id))
&& tag.ends_with(&format!(":{}", version))
})
.map(|tag| async {
let tag = tag; // move into future
ctx.docker.remove_image(&tag, None, None).await
}),
)
.await, .await,
); );
let pkg_archive_dir = ctx let pkg_archive_dir = ctx
@@ -123,28 +155,66 @@ pub async fn cleanup(ctx: &RpcContext, id: &PackageId, version: &Version) -> Res
.await .await
.apply(|res| errors.handle(res)); .apply(|res| errors.handle(res));
} }
let assets_path = asset_dir(&ctx.datadir, id, version);
if tokio::fs::metadata(&assets_path).await.is_ok() {
tokio::fs::remove_dir_all(&assets_path)
.await
.apply(|res| errors.handle(res));
}
let scripts_path = script_dir(&ctx.datadir, id, version);
if tokio::fs::metadata(&scripts_path).await.is_ok() {
tokio::fs::remove_dir_all(&scripts_path)
.await
.apply(|res| errors.handle(res));
}
errors.into_result() errors.into_result()
} }
-#[instrument(skip(ctx, db))]
+pub struct CleanupFailedReceipts {
+    package_data_entry: LockReceipt<PackageDataEntry, String>,
+    package_entries: LockReceipt<AllPackageData, ()>,
+}
+
+impl CleanupFailedReceipts {
+    pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks);
+        Ok(setup(&db.lock_all(locks).await?)?)
+    }
+
+    pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
+        let package_data_entry = crate::db::DatabaseModel::new()
+            .package_data()
+            .star()
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let package_entries = crate::db::DatabaseModel::new()
+            .package_data()
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                package_data_entry: package_data_entry.verify(skeleton_key).unwrap(),
+                package_entries: package_entries.verify(skeleton_key).unwrap(),
+            })
+        }
+    }
+}
+
+#[instrument(skip(ctx, db, receipts))]
 pub async fn cleanup_failed<Db: DbHandle>(
     ctx: &RpcContext,
     db: &mut Db,
     id: &PackageId,
+    receipts: &CleanupFailedReceipts,
 ) -> Result<(), Error> {
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .lock(db, LockType::Write)
-        .await?;
-    let pde = crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(id)
-        .expect(db)
-        .await?
-        .get(db, true)
-        .await?
-        .into_owned();
+    let pde = receipts
+        .package_data_entry
+        .get(db, id)
+        .await?
+        .ok_or_else(not_found)?;
     if let Some(manifest) = match &pde {
         PackageDataEntry::Installing { manifest, .. }
         | PackageDataEntry::Restoring { manifest, .. } => Some(manifest),
@@ -173,26 +243,25 @@ pub async fn cleanup_failed<Db: DbHandle>(
     match pde {
         PackageDataEntry::Installing { .. } | PackageDataEntry::Restoring { .. } => {
-            crate::db::DatabaseModel::new()
-                .package_data()
-                .remove(db, id)
-                .await?;
+            let mut entries = receipts.package_entries.get(db).await?;
+            entries.0.remove(id);
+            receipts.package_entries.set(db, entries).await?;
         }
         PackageDataEntry::Updating {
            installed,
            static_files,
            ..
         } => {
-            crate::db::DatabaseModel::new()
-                .package_data()
-                .idx_model(id)
-                .put(
+            receipts
+                .package_data_entry
+                .set(
                     db,
-                    &PackageDataEntry::Installed {
+                    PackageDataEntry::Installed {
                         manifest: installed.manifest.clone(),
                         installed,
                         static_files,
                     },
+                    id,
                 )
                 .await?;
         }
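Across these hunks, ad-hoc `.lock(db, LockType::Write)` calls are replaced by "receipt" structs: a `setup` phase registers every lock target, the whole set is acquired at once with `lock_all`, and a closure redeems the proof of acquisition into typed accessors. A minimal std-only mock of that two-phase shape (the types here are hypothetical stand-ins for patch_db's `LockTargetId`, `Verifier`, and `LockReceipt`):

```rust
use std::collections::BTreeSet;

// Hypothetical stand-in for patch_db's Verifier: the set of locks held.
struct Verifier {
    held: BTreeSet<&'static str>,
}

struct Receipt {
    target: &'static str,
}

impl Receipt {
    // A receipt is only handed out if the verifier proves the lock is held.
    fn verify(self, v: &Verifier) -> Result<Self, String> {
        if v.held.contains(self.target) {
            Ok(self)
        } else {
            Err(format!("lock not held: {}", self.target))
        }
    }
}

// Phase 1: register the lock target and return a deferred constructor,
// mirroring `make_locker(..).add_to_keys(locks)` plus the returned closure.
fn setup(locks: &mut Vec<&'static str>) -> impl FnOnce(&Verifier) -> Result<Receipt, String> {
    let target = "package-data";
    locks.push(target);
    move |verifier| Receipt { target }.verify(verifier)
}

fn main() {
    let mut locks = Vec::new();
    let make_receipt = setup(&mut locks);
    // Phase 2: acquire *all* registered locks in one step, then redeem.
    let verifier = Verifier { held: locks.iter().copied().collect() };
    let receipt = make_receipt(&verifier).unwrap();
    assert_eq!(receipt.target, "package-data");
}
```

The point of the pattern is that every lock a function will need is declared up front and taken in a single `lock_all`, which avoids incremental lock acquisition and the deadlocks it can cause.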
@@ -202,38 +271,74 @@ pub async fn cleanup_failed<Db: DbHandle>(
     Ok(())
 }
 
-#[instrument(skip(db, current_dependencies))]
-pub async fn remove_from_current_dependents_lists<
-    'a,
-    Db: DbHandle,
-    I: IntoIterator<Item = &'a PackageId>,
->(
+#[instrument(skip(db, current_dependencies, current_dependent_receipt))]
+pub async fn remove_from_current_dependents_lists<'a, Db: DbHandle>(
     db: &mut Db,
     id: &'a PackageId,
-    current_dependencies: I,
+    current_dependencies: &'a CurrentDependencies,
+    current_dependent_receipt: &LockReceipt<CurrentDependents, String>,
 ) -> Result<(), Error> {
-    for dep in current_dependencies.into_iter().chain(std::iter::once(id)) {
-        if let Some(current_dependents) = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(dep)
-            .and_then(|m| m.installed())
-            .map::<_, BTreeMap<PackageId, CurrentDependencyInfo>>(|m| m.current_dependents())
-            .check(db)
-            .await?
-        {
-            if current_dependents
-                .clone()
-                .idx_model(id)
-                .exists(db, true)
-                .await?
-            {
-                current_dependents.remove(db, id).await?
+    for dep in current_dependencies.0.keys().chain(std::iter::once(id)) {
+        if let Some(mut current_dependents) = current_dependent_receipt.get(db, dep).await? {
+            if current_dependents.0.remove(id).is_some() {
+                current_dependent_receipt
+                    .set(db, current_dependents, dep)
+                    .await?;
             }
         }
     }
     Ok(())
 }
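The rewritten function walks the package's dependencies plus the package itself and removes `id` from each one's dependents map, writing back only when the removal actually hit something. A std-only sketch of that loop, with a plain map standing in for the patch-db store (a hypothetical simplification):

```rust
use std::collections::{BTreeMap, BTreeSet};

// Plain map standing in for the patch-db store: package -> current dependents.
type Db = BTreeMap<String, BTreeSet<String>>;

fn remove_from_current_dependents_lists(db: &mut Db, id: &str, deps: &[String]) {
    // visit each dependency, plus the package's own entry
    for dep in deps.iter().map(String::as_str).chain(std::iter::once(id)) {
        if let Some(dependents) = db.get_mut(dep) {
            // the original only writes back when remove() returned Some
            dependents.remove(id);
        }
    }
}

fn main() {
    let mut db = Db::new();
    db.insert("btc".into(), BTreeSet::from(["lnd".into()]));
    db.insert("lnd".into(), BTreeSet::from(["lnd".into()]));
    remove_from_current_dependents_lists(&mut db, "lnd", &["btc".into()]);
    assert!(db["btc"].is_empty());
    assert!(db["lnd"].is_empty());
}
```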
+pub struct UninstallReceipts {
+    config: ConfigReceipts,
+    removing: LockReceipt<InstalledPackageDataEntry, ()>,
+    packages: LockReceipt<AllPackageData, ()>,
+    current_dependents: LockReceipt<CurrentDependents, String>,
+    update_depenency_receipts: UpdateDependencyReceipts,
+}
+
+impl UninstallReceipts {
+    pub async fn new<'a>(db: &'a mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks, id);
+        Ok(setup(&db.lock_all(locks).await?)?)
+    }
+
+    pub fn setup(
+        locks: &mut Vec<LockTargetId>,
+        id: &PackageId,
+    ) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
+        let config = ConfigReceipts::setup(locks);
+        let removing = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(id)
+            .and_then(|pde| pde.removing())
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let current_dependents = crate::db::DatabaseModel::new()
+            .package_data()
+            .star()
+            .installed()
+            .map(|x| x.current_dependents())
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let packages = crate::db::DatabaseModel::new()
+            .package_data()
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let update_depenency_receipts = UpdateDependencyReceipts::setup(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                config: config(skeleton_key)?,
+                removing: removing.verify(skeleton_key)?,
+                current_dependents: current_dependents.verify(skeleton_key)?,
+                update_depenency_receipts: update_depenency_receipts(skeleton_key)?,
+                packages: packages.verify(skeleton_key)?,
+            })
+        }
+    }
+}
 #[instrument(skip(ctx, secrets, db))]
 pub async fn uninstall<Ex>(
     ctx: &RpcContext,
@@ -242,47 +347,35 @@ pub async fn uninstall<Ex>(
     id: &PackageId,
 ) -> Result<(), Error>
 where
-    for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>,
+    for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
 {
     let mut tx = db.begin().await?;
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .lock(&mut tx, LockType::Write)
-        .await?;
-    let entry = crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(id)
-        .and_then(|pde| pde.removing())
-        .get(&mut tx, true)
-        .await?
-        .into_owned()
-        .ok_or_else(|| {
-            Error::new(
-                eyre!("Package not in removing state: {}", id),
-                crate::ErrorKind::NotFound,
-            )
-        })?;
+    let receipts = UninstallReceipts::new(&mut tx, id).await?;
+    let entry = receipts.removing.get(&mut tx).await?;
     cleanup(ctx, &entry.manifest.id, &entry.manifest.version).await?;
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .remove(&mut tx, id)
-        .await?;
+    let packages = {
+        let mut packages = receipts.packages.get(&mut tx).await?;
+        packages.0.remove(id);
+        packages
+    };
+    receipts.packages.set(&mut tx, packages).await?;
     // once we have removed the package entry, we can change all the dependent pointers to null
-    reconfigure_dependents_with_live_pointers(ctx, &mut tx, &entry).await?;
+    reconfigure_dependents_with_live_pointers(ctx, &mut tx, &receipts.config, &entry).await?;
     remove_from_current_dependents_lists(
         &mut tx,
         &entry.manifest.id,
-        entry.current_dependencies.keys(),
+        &entry.current_dependencies,
+        &receipts.current_dependents,
     )
     .await?;
     update_dependency_errors_of_dependents(
         ctx,
         &mut tx,
         &entry.manifest.id,
-        entry.current_dependents.keys(),
+        &entry.current_dependents,
+        &receipts.update_depenency_receipts,
     )
     .await?;
     let volumes = ctx
@@ -292,7 +385,7 @@ where
     if tokio::fs::metadata(&volumes).await.is_ok() {
         tokio::fs::remove_dir_all(&volumes).await?;
     }
-    tx.commit(None).await?;
+    tx.commit().await?;
     remove_tor_keys(secrets, &entry.manifest.id).await?;
     Ok(())
 }
@@ -300,10 +393,10 @@ where
 #[instrument(skip(secrets))]
 pub async fn remove_tor_keys<Ex>(secrets: &mut Ex, id: &PackageId) -> Result<(), Error>
 where
-    for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>,
+    for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
 {
     let id_str = id.as_str();
-    sqlx::query!("DELETE FROM tor WHERE package = ?", id_str)
+    sqlx::query!("DELETE FROM tor WHERE package = $1", id_str)
         .execute(secrets)
         .await?;
     Ok(())
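The SQLite-to-Postgres move also changes bind-parameter syntax: SQLite accepts positional `?`, while Postgres expects numbered `$1, $2, ...`. A hypothetical helper (not from this codebase) illustrating the mechanical rewrite; note it is naive and would also rewrite a `?` inside a string literal:

```rust
// Hypothetical helper: rewrite SQLite-style `?` placeholders into the
// numbered `$1, $2, ...` form Postgres expects. Naive: does not skip
// `?` characters occurring inside quoted SQL literals.
fn to_pg_placeholders(sql: &str) -> String {
    let mut out = String::with_capacity(sql.len());
    let mut n = 0;
    for ch in sql.chars() {
        if ch == '?' {
            n += 1;
            out.push('$');
            out.push_str(&n.to_string());
        } else {
            out.push(ch);
        }
    }
    out
}

fn main() {
    assert_eq!(
        to_pg_placeholders("DELETE FROM tor WHERE package = ?"),
        "DELETE FROM tor WHERE package = $1"
    );
}
```

In the actual diff no runtime rewriting happens; each `sqlx::query!` literal is edited by hand, since the macro checks the query against the configured database at compile time.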


@@ -1,11 +1,11 @@
-use std::collections::{BTreeMap, BTreeSet};
+use std::collections::BTreeMap;
 use std::io::SeekFrom;
 use std::marker::PhantomData;
 use std::path::{Path, PathBuf};
 use std::process::Stdio;
 use std::sync::atomic::Ordering;
 use std::sync::Arc;
-use std::time::{Duration, Instant};
+use std::time::Duration;
 
 use color_eyre::eyre::eyre;
 use emver::VersionRange;
@@ -14,10 +14,10 @@ use futures::{FutureExt, StreamExt, TryStreamExt};
 use http::header::CONTENT_LENGTH;
 use http::{Request, Response, StatusCode};
 use hyper::Body;
-use patch_db::{DbHandle, LockType};
+use patch_db::{DbHandle, LockReceipt, LockType};
 use reqwest::Url;
-use rpc_toolkit::command;
 use rpc_toolkit::yajrc::RpcError;
+use rpc_toolkit::{command, Context};
 use tokio::fs::{File, OpenOptions};
 use tokio::io::{AsyncRead, AsyncSeek, AsyncSeekExt};
 use tokio::process::Command;
@@ -25,16 +25,17 @@ use tokio_stream::wrappers::ReadDirStream;
 use tracing::instrument;
 
 use self::cleanup::{cleanup_failed, remove_from_current_dependents_lists};
+use crate::config::ConfigReceipts;
 use crate::context::{CliContext, RpcContext};
 use crate::core::rpc_continuations::{RequestGuid, RpcContinuation};
 use crate::db::model::{
-    CurrentDependencyInfo, InstalledPackageDataEntry, PackageDataEntry, RecoveredPackageInfo,
-    StaticDependencyInfo, StaticFiles,
+    CurrentDependencies, CurrentDependencyInfo, CurrentDependents, InstalledPackageDataEntry,
+    PackageDataEntry, RecoveredPackageInfo, StaticDependencyInfo, StaticFiles,
 };
-use crate::db::util::WithRevision;
 use crate::dependencies::{
     add_dependent_to_current_dependents_lists, break_all_dependents_transitive,
-    reconfigure_dependents_with_live_pointers, BreakageRes, DependencyError, DependencyErrors,
+    reconfigure_dependents_with_live_pointers, BreakTransitiveReceipts, BreakageRes,
+    DependencyError, DependencyErrors,
 };
 use crate::install::cleanup::{cleanup, update_dependency_errors_of_dependents};
 use crate::install::progress::{InstallProgress, InstallProgressTracker};
@@ -43,20 +44,20 @@ use crate::s9pk::manifest::{Manifest, PackageId};
 use crate::s9pk::reader::S9pkReader;
 use crate::status::{MainStatus, Status};
 use crate::util::io::{copy_and_shutdown, response_to_reader};
-use crate::util::serde::{display_serializable, IoFormat, Port};
+use crate::util::serde::{display_serializable, Port};
 use crate::util::{display_none, AsyncFileExt, Version};
 use crate::version::{Current, VersionT};
-use crate::volume::asset_dir;
+use crate::volume::{asset_dir, script_dir};
 use crate::{Error, ErrorKind, ResultExt};
 
 pub mod cleanup;
 pub mod progress;
 pub mod update;
 
-pub const PKG_ARCHIVE_DIR: &'static str = "package-data/archive";
-pub const PKG_PUBLIC_DIR: &'static str = "package-data/public";
-pub const PKG_DOCKER_DIR: &'static str = "package-data/docker";
-pub const PKG_WASM_DIR: &'static str = "package-data/wasm";
+pub const PKG_ARCHIVE_DIR: &str = "package-data/archive";
+pub const PKG_PUBLIC_DIR: &str = "package-data/public";
+pub const PKG_DOCKER_DIR: &str = "package-data/docker";
+pub const PKG_WASM_DIR: &str = "package-data/wasm";
 
 #[command(display(display_serializable))]
 pub async fn list(#[context] ctx: RpcContext) -> Result<Vec<(PackageId, Version)>, Error> {
@@ -113,19 +114,20 @@ impl std::fmt::Display for MinMax {
 #[command(
     custom_cli(cli_install(async, context(CliContext))),
-    display(display_none)
+    display(display_none),
+    metadata(sync_db = true)
 )]
 #[instrument(skip(ctx))]
 pub async fn install(
     #[context] ctx: RpcContext,
     #[arg] id: String,
-    #[arg(short = "m", long = "marketplace-url", rename = "marketplace-url")]
+    #[arg(short = 'm', long = "marketplace-url", rename = "marketplace-url")]
     marketplace_url: Option<Url>,
-    #[arg(short = "v", long = "version-spec", rename = "version-spec")] version_spec: Option<
+    #[arg(short = 'v', long = "version-spec", rename = "version-spec")] version_spec: Option<
         String,
     >,
     #[arg(long = "version-priority", rename = "version-priority")] version_priority: Option<MinMax>,
-) -> Result<WithRevision<()>, Error> {
+) -> Result<(), Error> {
     let version_str = match &version_spec {
         None => "*",
         Some(v) => &*v,
@@ -141,7 +143,7 @@ pub async fn install(
         version,
         version_priority,
         Current::new().compat(),
-        platforms::TARGET_ARCH,
+        &*crate::ARCH,
     ))
     .await
     .with_kind(crate::ErrorKind::Registry)?
@@ -157,7 +159,7 @@ pub async fn install(
         man.version,
         version_priority,
         Current::new().compat(),
-        platforms::TARGET_ARCH,
+        &*crate::ARCH,
     ))
     .await
     .with_kind(crate::ErrorKind::Registry)?
@@ -189,9 +191,10 @@ pub async fn install(
             id,
             man.version,
             Current::new().compat(),
-            platforms::TARGET_ARCH,
+            &*crate::ARCH,
         ))
-        .await?,
+        .await?
+        .error_for_status()?,
     ),
     &mut File::create(public_dir_path.join("LICENSE.md")).await?,
 )
@@ -207,9 +210,10 @@ pub async fn install(
             id,
             man.version,
             Current::new().compat(),
-            platforms::TARGET_ARCH,
+            &*crate::ARCH,
         ))
-        .await?,
+        .await?
+        .error_for_status()?,
     ),
     &mut File::create(public_dir_path.join("INSTRUCTIONS.md")).await?,
 )
@@ -225,9 +229,10 @@ pub async fn install(
             id,
             man.version,
             Current::new().compat(),
-            platforms::TARGET_ARCH,
+            &*crate::ARCH,
         ))
-        .await?,
+        .await?
+        .error_for_status()?,
     ),
     &mut File::create(public_dir_path.join(format!("icon.{}", icon_type))).await?,
 )
@@ -282,7 +287,7 @@ pub async fn install(
         }
     }
     pde.save(&mut tx).await?;
-    let res = tx.commit(None).await?;
+    tx.commit().await?;
     drop(db_handle);
 
     tokio::spawn(async move {
@@ -318,10 +323,7 @@ pub async fn install(
         }
     });
-    Ok(WithRevision {
-        revision: res,
-        response: (),
-    })
+    Ok(())
 }
#[command(rpc_only, display(display_none))] #[command(rpc_only, display(display_none))]
@@ -329,9 +331,37 @@ pub async fn install(
 pub async fn sideload(
     #[context] ctx: RpcContext,
     #[arg] manifest: Manifest,
+    #[arg] icon: Option<String>,
 ) -> Result<RequestGuid, Error> {
     let new_ctx = ctx.clone();
     let guid = RequestGuid::new();
+    if let Some(icon) = icon {
+        use tokio::io::AsyncWriteExt;
+
+        let public_dir_path = ctx
+            .datadir
+            .join(PKG_PUBLIC_DIR)
+            .join(&manifest.id)
+            .join(manifest.version.as_str());
+        tokio::fs::create_dir_all(&public_dir_path).await?;
+
+        let invalid_data_url =
+            || Error::new(eyre!("Invalid Icon Data URL"), ErrorKind::InvalidRequest);
+        let data = icon
+            .strip_prefix(&format!(
+                "data:image/{};base64,",
+                manifest.assets.icon_type()
+            ))
+            .ok_or_else(&invalid_data_url)?;
+        let mut icon_file =
+            File::create(public_dir_path.join(format!("icon.{}", manifest.assets.icon_type())))
+                .await?;
+        icon_file
+            .write_all(&base64::decode(data).with_kind(ErrorKind::InvalidRequest)?)
+            .await?;
+        icon_file.sync_all().await?;
+    }
     let handler = Box::new(|req: Request<Body>| {
         async move {
             let content_length = match req.headers().get(CONTENT_LENGTH).map(|a| a.to_str()) {
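The sideload endpoint now accepts the icon as a `data:image/<type>;base64,` URL and rejects anything whose prefix does not match the manifest's icon type. A std-only sketch of that validation (the actual base64 decoding, done with the `base64` crate above, is omitted here):

```rust
// std-only sketch of the data-URL prefix check above; decoding of the
// returned payload is left to a base64 library in the real code.
fn icon_payload<'a>(icon: &'a str, icon_type: &str) -> Result<&'a str, String> {
    let prefix = format!("data:image/{};base64,", icon_type);
    icon.strip_prefix(&prefix)
        .ok_or_else(|| "Invalid Icon Data URL".to_string())
}

fn main() {
    // prefix matches the declared icon type: payload is returned
    assert_eq!(icon_payload("data:image/png;base64,AAAA", "png"), Ok("AAAA"));
    // mismatched type: rejected, mirroring the InvalidRequest error above
    assert!(icon_payload("data:image/png;base64,AAAA", "svg").is_err());
}
```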
@@ -394,7 +424,7 @@ pub async fn sideload(
         }
     }
     pde.save(&mut tx).await?;
-    tx.commit(None).await?;
+    tx.commit().await?;
 
     if let Err(e) = download_install_s9pk(
         &new_ctx,
@@ -445,23 +475,11 @@ pub async fn sideload(
         }
         .boxed()
     });
-    let cont = RpcContinuation {
-        created_at: Instant::now(), // TODO
-        handler: handler,
-    };
-    // gc the map
-    let mut guard = ctx.rpc_stream_continuations.lock().await;
-    let garbage_collected = std::mem::take(&mut *guard)
-        .into_iter()
-        .filter(|(_, v)| v.created_at.elapsed() < Duration::from_secs(30))
-        .collect::<BTreeMap<RequestGuid, RpcContinuation>>();
-    *guard = garbage_collected;
-    drop(guard);
-    // insert the new continuation
-    ctx.rpc_stream_continuations
-        .lock()
-        .await
-        .insert(guid.clone(), cont);
+    ctx.add_continuation(
+        guid.clone(),
+        RpcContinuation::rest(handler, Duration::from_secs(30)),
+    )
+    .await;
     Ok(guid)
 }
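The inline garbage collection over `rpc_stream_continuations` is replaced by a single `ctx.add_continuation(..)` call carrying a 30-second TTL. The idea the old code implemented by hand, expire stale entries whenever a new one is inserted, can be sketched with std types only (all names here are hypothetical):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical TTL map, mirroring the expire-on-insert GC the old code did
// inline (handlers omitted; only creation times are tracked).
struct Continuations {
    ttl: Duration,
    map: HashMap<u64, Instant>, // guid -> creation time
}

impl Continuations {
    fn add(&mut self, guid: u64) {
        let ttl = self.ttl;
        // drop every entry older than the TTL before inserting the new one
        self.map.retain(|_, created| created.elapsed() < ttl);
        self.map.insert(guid, Instant::now());
    }
}

fn main() {
    let mut c = Continuations { ttl: Duration::from_secs(30), map: HashMap::new() };
    c.add(1);
    c.add(2);
    assert_eq!(c.map.len(), 2);
}
```

Moving this bookkeeping behind `add_continuation` means every caller gets the same GC behavior instead of repeating the lock-take-filter-reinsert dance.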
@@ -477,14 +495,21 @@ async fn cli_install(
     let path = PathBuf::from(target);
 
     // inspect manifest no verify
-    let manifest = crate::inspect::manifest(path.clone(), true, Some(IoFormat::Json)).await?;
+    let mut reader = S9pkReader::open(&path, false).await?;
+    let manifest = reader.manifest().await?;
+    let icon = reader.icon().await?.to_vec().await?;
+    let icon_str = format!(
+        "data:image/{};base64,{}",
+        manifest.assets.icon_type(),
+        base64::encode(&icon)
+    );
 
     // rpc call remote sideload
     tracing::debug!("calling package.sideload");
     let guid = rpc_toolkit::command_helpers::call_remote(
         ctx.clone(),
         "package.sideload",
-        serde_json::json!({ "manifest": manifest }),
+        serde_json::json!({ "manifest": manifest, "icon": icon_str }),
         PhantomData::<RequestGuid>,
     )
     .await?
@@ -495,14 +520,9 @@ async fn cli_install(
     let file = tokio::fs::File::open(path).await?;
     let content_length = file.metadata().await?.len();
     let body = Body::wrap_stream(tokio_util::io::ReaderStream::new(file));
-    let client = reqwest::Client::new();
-    let res = client
-        .post(format!(
-            "{}://{}/rest/rpc/{}",
-            ctx.protocol(),
-            ctx.host(),
-            guid
-        ))
+    let res = ctx
+        .client
+        .post(format!("{}rest/rpc/{}", ctx.base_url, guid,))
         .header(CONTENT_LENGTH, content_length)
         .body(body)
         .send()
@@ -536,7 +556,7 @@ async fn cli_install(
         ctx,
         "package.install",
         params,
-        PhantomData::<WithRevision<()>>,
+        PhantomData::<()>,
     )
     .await?
     .result?;
@@ -547,7 +567,8 @@ async fn cli_install(
 #[command(
     subcommands(self(uninstall_impl(async)), uninstall_dry),
-    display(display_none)
+    display(display_none),
+    metadata(sync_db = true)
 )]
 pub async fn uninstall(#[arg] id: PackageId) -> Result<PackageId, Error> {
     Ok(id)
@@ -562,8 +583,15 @@ pub async fn uninstall_dry(
     let mut db = ctx.db.handle();
     let mut tx = db.begin().await?;
     let mut breakages = BTreeMap::new();
-    break_all_dependents_transitive(&mut tx, &id, DependencyError::NotInstalled, &mut breakages)
-        .await?;
+    let receipts = BreakTransitiveReceipts::new(&mut tx).await?;
+    break_all_dependents_transitive(
+        &mut tx,
+        &id,
+        DependencyError::NotInstalled,
+        &mut breakages,
+        &receipts,
+    )
+    .await?;
 
     tx.abort().await?;
@@ -571,7 +599,7 @@ pub async fn uninstall_dry(
 }
 
 #[instrument(skip(ctx))]
-pub async fn uninstall_impl(ctx: RpcContext, id: PackageId) -> Result<WithRevision<()>, Error> {
+pub async fn uninstall_impl(ctx: RpcContext, id: PackageId) -> Result<(), Error> {
     let mut handle = ctx.db.handle();
     let mut tx = handle.begin().await?;
@@ -599,7 +627,7 @@ pub async fn uninstall_impl(ctx: RpcContext, id: PackageId) -> Result<WithRevisi
         removing: installed,
     });
     pde.save(&mut tx).await?;
-    let res = tx.commit(None).await?;
+    tx.commit().await?;
     drop(handle);
 
     tokio::spawn(async move {
@@ -636,17 +664,18 @@ pub async fn uninstall_impl(ctx: RpcContext, id: PackageId) -> Result<WithRevisi
         }
     });
-    Ok(WithRevision {
-        revision: res,
-        response: (),
-    })
+    Ok(())
 }
 
-#[command(rename = "delete-recovered", display(display_none))]
+#[command(
+    rename = "delete-recovered",
+    display(display_none),
+    metadata(sync_db = true)
+)]
 pub async fn delete_recovered(
     #[context] ctx: RpcContext,
     #[arg] id: PackageId,
-) -> Result<WithRevision<()>, Error> {
+) -> Result<(), Error> {
     let mut handle = ctx.db.handle();
     let mut tx = handle.begin().await?;
     let mut sql_tx = ctx.secret_store.begin().await?;
@@ -669,13 +698,39 @@ pub async fn delete_recovered(
     }
 
     cleanup::remove_tor_keys(&mut sql_tx, &id).await?;
-    let res = tx.commit(None).await?;
+    tx.commit().await?;
     sql_tx.commit().await?;
-    Ok(WithRevision {
-        revision: res,
-        response: (),
-    })
+    Ok(())
 }
+
+pub struct DownloadInstallReceipts {
+    package_receipts: crate::db::package::PackageReceipts,
+    manifest_receipts: crate::db::package::ManifestReceipts,
+}
+
+impl DownloadInstallReceipts {
+    pub async fn new<'a>(db: &'a mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks, id);
+        Ok(setup(&db.lock_all(locks).await?)?)
+    }
+
+    pub fn setup(
+        locks: &mut Vec<patch_db::LockTargetId>,
+        id: &PackageId,
+    ) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
+        let package_receipts = crate::db::package::PackageReceipts::setup(locks);
+        let manifest_receipts = crate::db::package::ManifestReceipts::setup(locks, id);
+        move |skeleton_key| {
+            Ok(Self {
+                package_receipts: package_receipts(skeleton_key)?,
+                manifest_receipts: manifest_receipts(skeleton_key)?,
+            })
+        }
+    }
+}
#[instrument(skip(ctx, temp_manifest, s9pk))] #[instrument(skip(ctx, temp_manifest, s9pk))]
@@ -692,14 +747,14 @@ pub async fn download_install_s9pk(
     if let Err(e) = async {
         let mut db_handle = ctx.db.handle();
         let mut tx = db_handle.begin().await?;
+        let receipts = DownloadInstallReceipts::new(&mut tx, &pkg_id).await?;
         // Build set of existing manifests
         let mut manifests = Vec::new();
-        for pkg in crate::db::package::get_packages(&mut tx).await? {
-            match crate::db::package::get_manifest(&mut tx, &pkg).await? {
-                Some(m) => {
-                    manifests.push(m);
-                }
-                None => {}
+        for pkg in crate::db::package::get_packages(&mut tx, &receipts.package_receipts).await? {
+            if let Some(m) =
+                crate::db::package::get_manifest(&mut tx, &pkg, &receipts.manifest_receipts).await?
+            {
+                manifests.push(m);
             }
         }
         // Build map of current port -> ssl mappings
@@ -712,7 +767,7 @@ pub async fn download_install_s9pk(
         for (p, lan) in cfg {
             if p.0 == 80 && lan.ssl || p.0 == 443 && !lan.ssl {
                 return Err(Error::new(
-                    eyre!("SSL Conflict with EmbassyOS"),
+                    eyre!("SSL Conflict with embassyOS"),
                     ErrorKind::LanPortConflict,
                 ));
             }
@@ -732,6 +787,7 @@ pub async fn download_install_s9pk(
             }
         }
     }
+    drop(receipts);
     tx.save().await?;
     drop(db_handle);
@@ -792,12 +848,13 @@ pub async fn download_install_s9pk(
     {
         let mut handle = ctx.db.handle();
         let mut tx = handle.begin().await?;
-        if let Err(e) = cleanup_failed(&ctx, &mut tx, pkg_id).await {
+        let receipts = cleanup::CleanupFailedReceipts::new(&mut tx).await?;
+        if let Err(e) = cleanup_failed(&ctx, &mut tx, pkg_id, &receipts).await {
            tracing::error!("Failed to clean up {}@{}: {}", pkg_id, version, e);
             tracing::debug!("{:?}", e);
         } else {
-            tx.commit(None).await?;
+            tx.commit().await?;
         }
         Err(e)
     } else {
@@ -805,6 +862,39 @@ pub async fn download_install_s9pk(
     }
 }
 
+pub struct InstallS9Receipts {
+    config: ConfigReceipts,
+    recovered_packages: LockReceipt<BTreeMap<PackageId, RecoveredPackageInfo>, ()>,
+}
+
+impl InstallS9Receipts {
+    pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks);
+        Ok(setup(&db.lock_all(locks).await?)?)
+    }
+
+    pub fn setup(
+        locks: &mut Vec<patch_db::LockTargetId>,
+    ) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
+        let config = ConfigReceipts::setup(locks);
+        let recovered_packages = crate::db::DatabaseModel::new()
+            .recovered_packages()
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                config: config(skeleton_key)?,
+                recovered_packages: recovered_packages.verify(skeleton_key)?,
+            })
+        }
+    }
+}
+
 #[instrument(skip(ctx, rdr))]
 pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
     ctx: &RpcContext,
@@ -848,7 +938,7 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
         dep,
         info.version,
         Current::new().compat(),
-        platforms::TARGET_ARCH,
+        &*crate::ARCH,
     ))
     .await
     .with_kind(crate::ErrorKind::Registry)?
@@ -883,7 +973,7 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
         dep,
         info.version,
         Current::new().compat(),
-        platforms::TARGET_ARCH,
+        &*crate::ARCH,
     ))
     .await
     .with_kind(crate::ErrorKind::Registry)?;
@@ -1046,6 +1136,18 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
         let mut tar = tokio_tar::Archive::new(rdr.assets().await?);
         tar.unpack(asset_dir).await?;
 
+        let script_dir = script_dir(&ctx.datadir, pkg_id, version);
+        if tokio::fs::metadata(&script_dir).await.is_err() {
+            tokio::fs::create_dir_all(&script_dir).await?;
+        }
+        if let Some(mut hdl) = rdr.scripts().await? {
+            tokio::io::copy(
+                &mut hdl,
+                &mut File::create(script_dir.join("embassy.js")).await?,
+            )
+            .await?;
+        }
+
         Ok(())
     })
     .await?;
@@ -1082,18 +1184,20 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
     tracing::info!("Install {}@{}: Created manager", pkg_id, version);
 
     let static_files = StaticFiles::local(pkg_id, version, manifest.assets.icon_type());
-    let current_dependencies: BTreeMap<_, _> = manifest
-        .dependencies
-        .0
-        .iter()
-        .filter_map(|(id, info)| {
-            if info.requirement.required() {
-                Some((id.clone(), CurrentDependencyInfo::default()))
-            } else {
-                None
-            }
-        })
-        .collect();
+    let current_dependencies: CurrentDependencies = CurrentDependencies(
+        manifest
+            .dependencies
+            .0
+            .iter()
+            .filter_map(|(id, info)| {
+                if info.requirement.required() {
+                    Some((id.clone(), CurrentDependencyInfo::default()))
+                } else {
+                    None
+                }
+            })
+            .collect(),
+    );
     let current_dependents = {
         let mut deps = BTreeMap::new();
         for package in crate::db::DatabaseModel::new()
@@ -1139,7 +1243,7 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
                 deps.insert(package, dep);
             }
         }
-        deps
+        CurrentDependents(deps)
     };
     let mut pde = model
         .clone()
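These hunks replace raw `BTreeMap`s with the `CurrentDependencies` / `CurrentDependents` newtypes, so the two roles get distinct types and call sites reach the map through `.0`. A sketch of the pattern with simplified field types (the real value type is `CurrentDependencyInfo`, not a number):

```rust
use std::collections::BTreeMap;

// Newtype wrappers: same underlying map, but the compiler now rejects
// passing a dependents map where a dependencies map is expected.
// (u32 is a placeholder for the real CurrentDependencyInfo value type.)
struct CurrentDependencies(BTreeMap<String, u32>);
struct CurrentDependents(BTreeMap<String, u32>);

// Mirrors the filter_map above: keep only the required dependencies.
fn required_deps(manifest: &[(String, bool)]) -> CurrentDependencies {
    CurrentDependencies(
        manifest
            .iter()
            .filter(|(_, required)| *required)
            .map(|(id, _)| (id.clone(), 0))
            .collect(),
    )
}

fn main() {
    let deps = required_deps(&[("btc".into(), true), ("proxy".into(), false)]);
    assert_eq!(deps.0.len(), 1);
    assert!(deps.0.contains_key("btc"));
    // a CurrentDependents value would not type-check where deps is used
    let _dependents = CurrentDependents(BTreeMap::new());
}
```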
@@ -1183,6 +1287,8 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
         },
     );
     pde.save(&mut tx).await?;
+    let receipts = InstallS9Receipts::new(&mut tx).await?;
+    // UpdateDependencyReceipts
     let mut dep_errs = model
         .expect(&mut tx)
         .await?
@@ -1193,7 +1299,14 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
         .dependency_errors()
         .get_mut(&mut tx)
         .await?;
-    *dep_errs = DependencyErrors::init(ctx, &mut tx, &manifest, &current_dependencies).await?;
+    *dep_errs = DependencyErrors::init(
+        ctx,
+        &mut tx,
+        &manifest,
+        &current_dependencies,
+        &receipts.config.try_heal_receipts,
+    )
+    .await?;
     dep_errs.save(&mut tx).await?;
 
     if let PackageDataEntry::Updating {
@@ -1244,8 +1357,26 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
             false,
             &mut BTreeMap::new(),
             &mut BTreeMap::new(),
+            &receipts.config,
         )
         .await?;
+    } else {
+        remove_from_current_dependents_lists(
+            &mut tx,
+            pkg_id,
+            &prev.current_dependencies,
+            &receipts.config.current_dependents,
+        )
+        .await?; // remove previous
+        add_dependent_to_current_dependents_lists(
+            &mut tx,
+            pkg_id,
+            &current_dependencies,
+            &receipts.config.current_dependents,
+        )
+        .await?; // add new
+    }
+    if configured || manifest.config.is_none() {
         let mut main_status = crate::db::DatabaseModel::new()
             .package_data()
             .idx_model(pkg_id)
@@ -1261,17 +1392,16 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
         *main_status = prev.status.main;
         main_status.save(&mut tx).await?;
     }
-    remove_from_current_dependents_lists(&mut tx, pkg_id, prev.current_dependencies.keys())
-        .await?; // remove previous
-    add_dependent_to_current_dependents_lists(&mut tx, pkg_id, &current_dependencies).await?; // add new
     update_dependency_errors_of_dependents(
         ctx,
         &mut tx,
         pkg_id,
-        current_dependents
-            .keys()
-            .chain(prev.current_dependents.keys())
-            .collect::<BTreeSet<_>>(),
+        &CurrentDependents({
+            let mut current_dependents = current_dependents.0.clone();
+            current_dependents.append(&mut prev.current_dependents.0.clone());
+            current_dependents
+        }),
+        &receipts.config.update_dependency_receipts,
     )
     .await?;
     if &prev.manifest.version != version {
@@ -1290,50 +1420,95 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin>(
&manifest.volumes, &manifest.volumes,
) )
.await?; .await?;
add_dependent_to_current_dependents_lists(&mut tx, pkg_id, &current_dependencies).await?; add_dependent_to_current_dependents_lists(
update_dependency_errors_of_dependents(ctx, &mut tx, pkg_id, current_dependents.keys()) &mut tx,
.await?; pkg_id,
&current_dependencies,
&receipts.config.current_dependents,
)
.await?;
update_dependency_errors_of_dependents(
ctx,
&mut tx,
pkg_id,
&current_dependents,
&receipts.config.update_dependency_receipts,
)
.await?;
} else if let Some(recovered) = { } else if let Some(recovered) = {
// solve taxonomy escalation receipts
crate::db::DatabaseModel::new() .recovered_packages
.recovered_packages() .get(&mut tx)
.lock(&mut tx, LockType::Write)
.await?;
crate::db::DatabaseModel::new()
.recovered_packages()
.idx_model(pkg_id)
.get(&mut tx, true)
.await? .await?
.into_owned() .remove(pkg_id)
} { } {
handle_recovered_package(recovered, manifest, ctx, pkg_id, version, &mut tx).await?; handle_recovered_package(
add_dependent_to_current_dependents_lists(&mut tx, pkg_id, &current_dependencies).await?; recovered,
update_dependency_errors_of_dependents(ctx, &mut tx, pkg_id, current_dependents.keys()) manifest,
.await?; ctx,
pkg_id,
version,
&mut tx,
&receipts.config,
)
.await?;
add_dependent_to_current_dependents_lists(
&mut tx,
pkg_id,
&current_dependencies,
&receipts.config.current_dependents,
)
.await?;
update_dependency_errors_of_dependents(
ctx,
&mut tx,
pkg_id,
&current_dependents,
&receipts.config.update_dependency_receipts,
)
.await?;
} else { } else {
add_dependent_to_current_dependents_lists(&mut tx, pkg_id, &current_dependencies).await?; add_dependent_to_current_dependents_lists(
update_dependency_errors_of_dependents(ctx, &mut tx, pkg_id, current_dependents.keys()) &mut tx,
.await?; pkg_id,
&current_dependencies,
&receipts.config.current_dependents,
)
.await?;
update_dependency_errors_of_dependents(
ctx,
&mut tx,
pkg_id,
&current_dependents,
&receipts.config.update_dependency_receipts,
)
.await?;
} }
crate::db::DatabaseModel::new() let recovered_packages = {
.recovered_packages() let mut r = receipts.recovered_packages.get(&mut tx).await?;
.remove(&mut tx, pkg_id) r.remove(pkg_id);
r
};
receipts
.recovered_packages
.set(&mut tx, recovered_packages)
.await?; .await?;
if let Some(installed) = pde.installed() { if let Some(installed) = pde.installed() {
reconfigure_dependents_with_live_pointers(ctx, &mut tx, installed).await?; reconfigure_dependents_with_live_pointers(ctx, &mut tx, &receipts.config, installed)
.await?;
} }
sql_tx.commit().await?; sql_tx.commit().await?;
tx.commit(None).await?; tx.commit().await?;
tracing::info!("Install {}@{}: Complete", pkg_id, version); tracing::info!("Install {}@{}: Complete", pkg_id, version);
Ok(()) Ok(())
} }
#[instrument(skip(ctx, tx))] #[instrument(skip(ctx, tx, receipts))]
async fn handle_recovered_package( async fn handle_recovered_package(
recovered: RecoveredPackageInfo, recovered: RecoveredPackageInfo,
manifest: Manifest, manifest: Manifest,
@@ -1341,6 +1516,7 @@ async fn handle_recovered_package(
pkg_id: &PackageId, pkg_id: &PackageId,
version: &Version, version: &Version,
tx: &mut patch_db::Transaction<&mut patch_db::PatchDbHandle>, tx: &mut patch_db::Transaction<&mut patch_db::PatchDbHandle>,
receipts: &ConfigReceipts,
) -> Result<(), Error> { ) -> Result<(), Error> {
let configured = if let Some(migration) = let configured = if let Some(migration) =
manifest manifest
@@ -1361,6 +1537,7 @@ async fn handle_recovered_package(
false, false,
&mut BTreeMap::new(), &mut BTreeMap::new(),
&mut BTreeMap::new(), &mut BTreeMap::new(),
&receipts,
) )
.await?; .await?;
} }

View File

@@ -44,10 +44,14 @@ impl InstallProgress {
         mut db: Db,
     ) -> Result<(), Error> {
         while !self.download_complete.load(Ordering::SeqCst) {
-            model.put(&mut db, &self).await?;
+            let mut tx = db.begin().await?;
+            model.put(&mut tx, &self).await?;
+            tx.save().await?;
             tokio::time::sleep(Duration::from_secs(1)).await;
         }
-        model.put(&mut db, &self).await?;
+        let mut tx = db.begin().await?;
+        model.put(&mut tx, &self).await?;
+        tx.save().await?;
         Ok(())
     }
     pub async fn track_download_during<
@@ -74,10 +78,14 @@ impl InstallProgress {
         complete: Arc<AtomicBool>,
     ) -> Result<(), Error> {
         while !complete.load(Ordering::SeqCst) {
-            model.put(&mut db, &self).await?;
+            let mut tx = db.begin().await?;
+            model.put(&mut tx, &self).await?;
+            tx.save().await?;
             tokio::time::sleep(Duration::from_secs(1)).await;
         }
-        model.put(&mut db, &self).await?;
+        let mut tx = db.begin().await?;
+        model.put(&mut tx, &self).await?;
+        tx.save().await?;
         Ok(())
     }
     pub async fn track_read_during<
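Both progress loops now wrap each periodic `put` in its own short-lived transaction (`begin` / `put` / `save`) instead of writing through the raw handle, so every progress snapshot commits atomically. A toy in-memory sketch of that commit-per-tick shape (the `Db`/`Tx` types are illustrative stand-ins, not patch-db):

```rust
// Toy "database" standing in for patch-db; begin/put/save mirror the diff's shape.
struct Db {
    committed: Vec<u32>,
}
struct Tx<'a> {
    db: &'a mut Db,
    pending: Option<u32>,
}

impl Db {
    fn begin(&mut self) -> Tx<'_> {
        Tx { db: self, pending: None }
    }
}
impl<'a> Tx<'a> {
    fn put(&mut self, v: u32) {
        self.pending = Some(v);
    }
    // Commit: only saved transactions become visible to readers.
    fn save(self) {
        if let Some(v) = self.pending {
            self.db.committed.push(v);
        }
    }
}

fn main() {
    let mut db = Db { committed: Vec::new() };
    for progress in [10, 55, 100] {
        let mut tx = db.begin(); // a fresh transaction per tick
        tx.put(progress);
        tx.save(); // dropping without save would discard the write
    }
    assert_eq!(db.committed, vec![10, 55, 100]);
}
```

The point of the per-tick transaction is isolation: a crash between ticks leaves the last saved snapshot intact rather than a half-written one.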

View File

@@ -1,20 +1,66 @@
 use std::collections::BTreeMap;
-use patch_db::{DbHandle, LockType};
+use patch_db::{DbHandle, LockReceipt, LockTargetId, LockType, Verifier};
 use rpc_toolkit::command;
+use tracing::instrument;
+use crate::config::not_found;
 use crate::context::RpcContext;
-use crate::dependencies::{break_transitive, BreakageRes, DependencyError};
+use crate::db::model::CurrentDependents;
+use crate::dependencies::{
+    break_transitive, BreakTransitiveReceipts, BreakageRes, DependencyError,
+};
 use crate::s9pk::manifest::PackageId;
 use crate::util::serde::display_serializable;
 use crate::util::Version;
 use crate::Error;
+pub struct UpdateReceipts {
+    break_receipts: BreakTransitiveReceipts,
+    current_dependents: LockReceipt<CurrentDependents, String>,
+    dependency: LockReceipt<crate::dependencies::DepInfo, (String, String)>,
+}
+impl UpdateReceipts {
+    pub async fn new<'a>(db: &'a mut impl DbHandle) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks);
+        Ok(setup(&db.lock_all(locks).await?)?)
+    }
+    pub fn setup(locks: &mut Vec<LockTargetId>) -> impl FnOnce(&Verifier) -> Result<Self, Error> {
+        let break_receipts = BreakTransitiveReceipts::setup(locks);
+        let current_dependents = crate::db::DatabaseModel::new()
+            .package_data()
+            .star()
+            .installed()
+            .map(|x| x.current_dependents())
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let dependency = crate::db::DatabaseModel::new()
+            .package_data()
+            .star()
+            .installed()
+            .map(|x| x.manifest().dependencies().star())
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                break_receipts: break_receipts(skeleton_key)?,
+                current_dependents: current_dependents.verify(skeleton_key)?,
+                dependency: dependency.verify(skeleton_key)?,
+            })
+        }
+    }
+}
 #[command(subcommands(dry))]
 pub async fn update() -> Result<(), Error> {
     Ok(())
 }
+#[instrument(skip(ctx))]
 #[command(display(display_serializable))]
 pub async fn dry(
     #[context] ctx: RpcContext,
@@ -24,49 +70,34 @@ pub async fn dry(
     let mut db = ctx.db.handle();
     let mut tx = db.begin().await?;
     let mut breakages = BTreeMap::new();
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .lock(&mut tx, LockType::Read)
-        .await?;
-    for dependent in crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(&id)
-        .and_then(|m| m.installed())
-        .expect(&mut tx)
-        .await?
-        .current_dependents()
-        .keys(&mut tx, true)
+    let receipts = UpdateReceipts::new(&mut tx).await?;
+    for dependent in receipts
+        .current_dependents
+        .get(&mut tx, &id)
         .await?
+        .ok_or_else(not_found)?
+        .0
+        .keys()
         .into_iter()
-        .filter(|dependent| &id != dependent)
+        .filter(|dependent| &&id != dependent)
     {
-        let version_req = crate::db::DatabaseModel::new()
-            .package_data()
-            .idx_model(&dependent)
-            .and_then(|m| m.installed())
-            .expect(&mut tx)
-            .await?
-            .manifest()
-            .dependencies()
-            .idx_model(&id)
-            .expect(&mut tx)
-            .await?
-            .get(&mut tx, true)
-            .await?
-            .into_owned()
-            .version;
-        if !version.satisfies(&version_req) {
-            break_transitive(
-                &mut tx,
-                &dependent,
-                &id,
-                DependencyError::IncorrectVersion {
-                    expected: version_req,
-                    received: version.clone(),
-                },
-                &mut breakages,
-            )
-            .await?;
+        if let Some(dep_info) = receipts.dependency.get(&mut tx, (&dependent, &id)).await? {
+            let version_req = dep_info.version;
+            if !version.satisfies(&version_req) {
+                break_transitive(
+                    &mut tx,
+                    &dependent,
+                    &id,
+                    DependencyError::IncorrectVersion {
+                        expected: version_req,
+                        received: version.clone(),
+                    },
+                    &mut breakages,
+                    &receipts.break_receipts,
+                )
+                .await?;
+            }
         }
     }
     tx.abort().await?;
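The rewritten `dry` run walks each dependent, looks up its declared dependency on the target package via the `dependency` receipt, and records a breakage when the proposed version no longer satisfies the requirement. That control flow can be sketched with a toy version model (the `(major, minor)` tuples and `VersionReq` below are hypothetical simplifications of the real version-range type):

```rust
// Toy version model: a version is (major, minor); a requirement is a
// caret-style major match with a minimum minor. Illustrative only.
type Version = (u32, u32);

struct VersionReq {
    major: u32,
    min_minor: u32,
}

fn satisfies(v: Version, req: &VersionReq) -> bool {
    v.0 == req.major && v.1 >= req.min_minor
}

fn main() {
    // Each dependent declares a requirement on the package being updated.
    let dependents = [("lnd", VersionReq { major: 0, min_minor: 3 })];
    let candidate: Version = (0, 2); // the proposed update target

    let mut breakages: Vec<(&str, Version)> = Vec::new();
    for (dependent, req) in &dependents {
        if !satisfies(candidate, req) {
            // Analogous to break_transitive: record the would-be breakage
            // instead of applying it, since this is a dry run.
            breakages.push((*dependent, candidate));
        }
    }
    assert_eq!(breakages.len(), 1);
}
```

The dry run then aborts the transaction (`tx.abort()`), so the collected `breakages` map is purely advisory.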

View File

@@ -1,10 +1,13 @@
pub const CONFIG_PATH: &str = "/etc/embassy/config.yaml"; pub const DEFAULT_MARKETPLACE: &str = "https://registry.start9.com";
#[cfg(not(feature = "beta"))]
pub const DEFAULT_MARKETPLACE: &str = "https://marketplace.start9.com";
#[cfg(feature = "beta")]
pub const DEFAULT_MARKETPLACE: &str = "https://beta-registry-0-3.start9labs.com";
pub const BUFFER_SIZE: usize = 1024; pub const BUFFER_SIZE: usize = 1024;
pub const HOST_IP: [u8; 4] = [172, 18, 0, 1]; pub const HOST_IP: [u8; 4] = [172, 18, 0, 1];
pub const TARGET: &str = current_platform::CURRENT_PLATFORM;
lazy_static::lazy_static! {
pub static ref ARCH: &'static str = {
let (arch, _) = TARGET.split_once("-").unwrap();
arch
};
}
pub mod action; pub mod action;
pub mod auth; pub mod auth;
@@ -31,6 +34,7 @@ pub mod middleware;
pub mod migration; pub mod migration;
pub mod net; pub mod net;
pub mod notifications; pub mod notifications;
pub mod procedure;
pub mod properties; pub mod properties;
pub mod s9pk; pub mod s9pk;
pub mod setup; pub mod setup;
@@ -99,6 +103,7 @@ pub fn server() -> Result<(), RpcError> {
config::config, config::config,
control::start, control::start,
control::stop, control::stop,
control::restart,
logs::logs, logs::logs,
properties::properties, properties::properties,
dependencies::dependency, dependencies::dependency,
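The new `ARCH` static works because a Rust target triple always begins with the CPU architecture followed by a hyphen, so splitting at the first `-` yields the arch as the first component. For example (the triple below is a sample value; at build time it comes from `current_platform::CURRENT_PLATFORM`):

```rust
fn main() {
    // A target triple has the form <arch>-<vendor>-<os>[-<abi>].
    let target = "aarch64-unknown-linux-gnu";
    let (arch, rest) = target.split_once('-').unwrap();
    assert_eq!(arch, "aarch64");
    assert_eq!(rest, "unknown-linux-gnu");
}
```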

View File

@@ -1,22 +1,116 @@
+use std::future::Future;
+use std::marker::PhantomData;
+use std::ops::{Deref, DerefMut};
 use std::process::Stdio;
 use std::time::{Duration, UNIX_EPOCH};
 use chrono::{DateTime, Utc};
-use clap::ArgMatches;
 use color_eyre::eyre::eyre;
-use futures::TryStreamExt;
+use futures::stream::BoxStream;
+use futures::{FutureExt, SinkExt, Stream, StreamExt, TryStreamExt};
+use hyper::upgrade::Upgraded;
+use hyper::Error as HyperError;
 use rpc_toolkit::command;
+use rpc_toolkit::yajrc::RpcError;
 use serde::{Deserialize, Serialize};
 use tokio::io::{AsyncBufReadExt, BufReader};
-use tokio::process::Command;
+use tokio::process::{Child, Command};
+use tokio::task::JoinError;
 use tokio_stream::wrappers::LinesStream;
+use tokio_tungstenite::tungstenite::protocol::frame::coding::CloseCode;
+use tokio_tungstenite::tungstenite::protocol::CloseFrame;
+use tokio_tungstenite::tungstenite::Message;
+use tokio_tungstenite::WebSocketStream;
 use tracing::instrument;
-use crate::action::docker::DockerAction;
+use crate::context::{CliContext, RpcContext};
+use crate::core::rpc_continuations::{RequestGuid, RpcContinuation};
 use crate::error::ResultExt;
+use crate::procedure::docker::DockerProcedure;
 use crate::s9pk::manifest::PackageId;
+use crate::util::display_none;
 use crate::util::serde::Reversible;
-use crate::Error;
+use crate::{Error, ErrorKind};
+#[pin_project::pin_project]
+struct LogStream {
+    _child: Child,
+    #[pin]
+    entries: BoxStream<'static, Result<JournalctlEntry, Error>>,
+}
+impl Deref for LogStream {
+    type Target = BoxStream<'static, Result<JournalctlEntry, Error>>;
+    fn deref(&self) -> &Self::Target {
+        &self.entries
+    }
+}
+impl DerefMut for LogStream {
+    fn deref_mut(&mut self) -> &mut Self::Target {
+        &mut self.entries
+    }
+}
+impl Stream for LogStream {
+    type Item = Result<JournalctlEntry, Error>;
+    fn poll_next(
+        self: std::pin::Pin<&mut Self>,
+        cx: &mut std::task::Context<'_>,
+    ) -> std::task::Poll<Option<Self::Item>> {
+        let this = self.project();
+        Stream::poll_next(this.entries, cx)
+    }
+    fn size_hint(&self) -> (usize, Option<usize>) {
+        self.entries.size_hint()
+    }
+}
+#[instrument(skip(logs, ws_fut))]
+async fn ws_handler<
+    WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
+>(
+    first_entry: Option<LogEntry>,
+    mut logs: LogStream,
+    ws_fut: WSFut,
+) -> Result<(), Error> {
+    let mut stream = ws_fut
+        .await
+        .with_kind(crate::ErrorKind::Network)?
+        .with_kind(crate::ErrorKind::Unknown)?;
+    if let Some(first_entry) = first_entry {
+        stream
+            .send(Message::Text(
+                serde_json::to_string(&first_entry).with_kind(ErrorKind::Serialization)?,
+            ))
+            .await
+            .with_kind(ErrorKind::Network)?;
+    }
+    while let Some(entry) = tokio::select! {
+        a = logs.try_next() => Some(a?),
+        a = stream.try_next() => { a.with_kind(crate::ErrorKind::Network)?; None }
+    } {
+        if let Some(entry) = entry {
+            let (_, log_entry) = entry.log_entry()?;
+            stream
+                .send(Message::Text(
+                    serde_json::to_string(&log_entry).with_kind(ErrorKind::Serialization)?,
+                ))
+                .await
+                .with_kind(ErrorKind::Network)?;
+        }
+    }
+    stream
+        .close(Some(CloseFrame {
+            code: CloseCode::Normal,
+            reason: "Log Stream Finished".into(),
+        }))
+        .await
+        .with_kind(ErrorKind::Network)?;
+    Ok(())
+}
 #[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
 #[serde(rename_all = "kebab-case")]
@@ -25,6 +119,12 @@ pub struct LogResponse {
     start_cursor: Option<String>,
     end_cursor: Option<String>,
 }
+#[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
+#[serde(rename_all = "kebab-case")]
+pub struct LogFollowResponse {
+    start_cursor: Option<String>,
+    guid: RequestGuid,
+}
 #[derive(serde::Serialize, serde::Deserialize, Debug, Clone)]
 pub struct LogEntry {
@@ -111,38 +211,145 @@ pub enum LogSource {
     Container(PackageId),
 }
-pub fn display_logs(all: LogResponse, _: &ArgMatches<'_>) {
-    for entry in all.entries.iter() {
-        println!("{}", entry);
-    }
-}
-#[command(display(display_logs))]
+#[command(
+    custom_cli(cli_logs(async, context(CliContext))),
+    subcommands(self(logs_nofollow(async)), logs_follow),
+    display(display_none)
+)]
 pub async fn logs(
     #[arg] id: PackageId,
-    #[arg] limit: Option<usize>,
-    #[arg] cursor: Option<String>,
-    #[arg] before_flag: Option<bool>,
+    #[arg(short = 'l', long = "limit")] limit: Option<usize>,
+    #[arg(short = 'c', long = "cursor")] cursor: Option<String>,
+    #[arg(short = 'B', long = "before", default)] before: bool,
+    #[arg(short = 'f', long = "follow", default)] follow: bool,
+) -> Result<(PackageId, Option<usize>, Option<String>, bool, bool), Error> {
+    Ok((id, limit, cursor, before, follow))
+}
+pub async fn cli_logs(
+    ctx: CliContext,
+    (id, limit, cursor, before, follow): (PackageId, Option<usize>, Option<String>, bool, bool),
+) -> Result<(), RpcError> {
+    if follow {
+        if cursor.is_some() {
+            return Err(RpcError::from(Error::new(
+                eyre!("The argument '--cursor <cursor>' cannot be used with '--follow'"),
+                crate::ErrorKind::InvalidRequest,
+            )));
+        }
+        if before {
+            return Err(RpcError::from(Error::new(
+                eyre!("The argument '--before' cannot be used with '--follow'"),
+                crate::ErrorKind::InvalidRequest,
+            )));
+        }
+        cli_logs_generic_follow(ctx, "package.logs.follow", Some(id), limit).await
+    } else {
+        cli_logs_generic_nofollow(ctx, "package.logs", Some(id), limit, cursor, before).await
+    }
+}
+pub async fn logs_nofollow(
+    _ctx: (),
+    (id, limit, cursor, before, _): (PackageId, Option<usize>, Option<String>, bool, bool),
 ) -> Result<LogResponse, Error> {
-    Ok(fetch_logs(
-        LogSource::Container(id),
-        limit,
-        cursor,
-        before_flag.unwrap_or(false),
-    )
-    .await?)
+    fetch_logs(LogSource::Container(id), limit, cursor, before).await
 }
-#[instrument]
-pub async fn fetch_logs(
-    id: LogSource,
+#[command(rpc_only, rename = "follow", display(display_none))]
+pub async fn logs_follow(
+    #[context] ctx: RpcContext,
+    #[parent_data] (id, limit, _, _, _): (PackageId, Option<usize>, Option<String>, bool, bool),
+) -> Result<LogFollowResponse, Error> {
+    follow_logs(ctx, LogSource::Container(id), limit).await
+}
+pub async fn cli_logs_generic_nofollow(
+    ctx: CliContext,
+    method: &str,
+    id: Option<PackageId>,
     limit: Option<usize>,
     cursor: Option<String>,
-    before_flag: bool,
-) -> Result<LogResponse, Error> {
-    let mut cmd = Command::new("journalctl");
-    let limit = limit.unwrap_or(50);
+    before: bool,
+) -> Result<(), RpcError> {
+    let res = rpc_toolkit::command_helpers::call_remote(
+        ctx.clone(),
+        method,
+        serde_json::json!({
+            "id": id,
+            "limit": limit,
+            "cursor": cursor,
+            "before": before,
+        }),
+        PhantomData::<LogResponse>,
+    )
+    .await?
+    .result?;
+    for entry in res.entries.iter() {
+        println!("{}", entry);
+    }
+    Ok(())
+}
+pub async fn cli_logs_generic_follow(
+    ctx: CliContext,
+    method: &str,
+    id: Option<PackageId>,
+    limit: Option<usize>,
+) -> Result<(), RpcError> {
+    let res = rpc_toolkit::command_helpers::call_remote(
+        ctx.clone(),
+        method,
+        serde_json::json!({
+            "id": id,
+            "limit": limit,
+        }),
+        PhantomData::<LogFollowResponse>,
+    )
+    .await?
+    .result?;
+    let mut base_url = ctx.base_url.clone();
+    let ws_scheme = match base_url.scheme() {
+        "https" => "wss",
+        "http" => "ws",
+        _ => {
+            return Err(Error::new(
+                eyre!("Cannot parse scheme from base URL"),
+                crate::ErrorKind::ParseUrl,
+            )
+            .into())
+        }
+    };
+    base_url.set_scheme(ws_scheme).or_else(|_| {
+        Err(Error::new(
+            eyre!("Cannot set URL scheme"),
+            crate::ErrorKind::ParseUrl,
+        ))
+    })?;
+    let (mut stream, _) =
+        // base_url is "http://127.0.0.1/", with a trailing slash, so we don't put a leading slash in this path:
+        tokio_tungstenite::connect_async(format!("{}ws/rpc/{}", base_url, res.guid)).await?;
+    while let Some(log) = stream.try_next().await? {
+        match log {
+            Message::Text(log) => {
+                println!("{}", serde_json::from_str::<LogEntry>(&log)?);
+            }
+            _ => (),
+        }
+    }
+    Ok(())
+}
+async fn journalctl(
+    id: LogSource,
+    limit: usize,
+    cursor: Option<&str>,
+    before: bool,
+    follow: bool,
+) -> Result<LogStream, Error> {
+    let mut cmd = Command::new("journalctl");
+    cmd.kill_on_drop(true);
     cmd.arg("--output=json");
     cmd.arg("--output-fields=MESSAGE");
@@ -158,21 +365,20 @@ pub async fn fetch_logs(
         LogSource::Container(id) => {
             cmd.arg(format!(
                 "CONTAINER_NAME={}",
-                DockerAction::container_name(&id, None)
+                DockerProcedure::container_name(&id, None)
             ));
         }
     };
-    let cursor_formatted = format!("--after-cursor={}", cursor.clone().unwrap_or("".to_owned()));
-    let mut get_prev_logs_and_reverse = false;
+    let cursor_formatted = format!("--after-cursor={}", cursor.clone().unwrap_or(""));
     if cursor.is_some() {
         cmd.arg(&cursor_formatted);
-        if before_flag {
-            get_prev_logs_and_reverse = true;
+        if before {
+            cmd.arg("--reverse");
         }
     }
-    if get_prev_logs_and_reverse {
-        cmd.arg("--reverse");
+    if follow {
+        cmd.arg("--follow");
     }
     let mut child = cmd.stdout(Stdio::piped()).spawn()?;
@@ -185,7 +391,7 @@ pub async fn fetch_logs(
     let journalctl_entries = LinesStream::new(out.lines());
-    let mut deserialized_entries = journalctl_entries
+    let deserialized_entries = journalctl_entries
         .map_err(|e| Error::new(e, crate::ErrorKind::Journald))
         .and_then(|s| {
             futures::future::ready(
@@ -194,16 +400,37 @@ pub async fn fetch_logs(
             )
         });
+    Ok(LogStream {
+        _child: child,
+        entries: deserialized_entries.boxed(),
+    })
+}
+#[instrument]
+pub async fn fetch_logs(
+    id: LogSource,
+    limit: Option<usize>,
+    cursor: Option<String>,
+    before: bool,
+) -> Result<LogResponse, Error> {
+    let limit = limit.unwrap_or(50);
+    let mut stream = journalctl(id, limit, cursor.as_deref(), before, false).await?;
     let mut entries = Vec::with_capacity(limit);
     let mut start_cursor = None;
-    if let Some(first) = deserialized_entries.try_next().await? {
+    if let Some(first) = tokio::time::timeout(Duration::from_secs(1), stream.try_next())
+        .await
+        .ok()
+        .transpose()?
+        .flatten()
+    {
         let (cursor, entry) = first.log_entry()?;
         start_cursor = Some(cursor);
         entries.push(entry);
     }
-    let (mut end_cursor, entries) = deserialized_entries
+    let (mut end_cursor, entries) = stream
         .try_fold(
             (start_cursor.clone(), entries),
             |(_, mut acc), entry| async move {
@@ -215,7 +442,7 @@ pub async fn fetch_logs(
         .await?;
     let mut entries = Reversible::new(entries);
     // reverse again so output is always in increasing chronological order
-    if get_prev_logs_and_reverse {
+    if cursor.is_some() && before {
         entries.reverse();
         std::mem::swap(&mut start_cursor, &mut end_cursor);
     }
@@ -226,21 +453,81 @@ pub async fn fetch_logs(
     })
 }
-#[tokio::test]
-pub async fn test_logs() {
-    let response = fetch_logs(
-        // change `tor.service` to an actual journald unit on your machine
-        // LogSource::Service("tor.service"),
-        // first run `docker run --name=hello-world.embassy --log-driver=journald hello-world`
-        LogSource::Container("hello-world".parse().unwrap()),
-        // Some(5),
-        None,
-        None,
-        // Some("s=1b8c418e28534400856c27b211dd94fd;i=5a7;b=97571c13a1284f87bc0639b5cff5acbe;m=740e916;t=5ca073eea3445;x=f45bc233ca328348".to_owned()),
-        false,
-    )
-    .await
-    .unwrap();
-    let serialized = serde_json::to_string_pretty(&response).unwrap();
-    println!("{}", serialized);
-}
+#[instrument(skip(ctx))]
+pub async fn follow_logs(
+    ctx: RpcContext,
+    id: LogSource,
+    limit: Option<usize>,
+) -> Result<LogFollowResponse, Error> {
+    let limit = limit.unwrap_or(50);
+    let mut stream = journalctl(id, limit, None, false, true).await?;
+    let mut start_cursor = None;
+    let mut first_entry = None;
+    if let Some(first) = tokio::time::timeout(Duration::from_secs(1), stream.try_next())
+        .await
+        .ok()
+        .transpose()?
+        .flatten()
+    {
+        let (cursor, entry) = first.log_entry()?;
+        start_cursor = Some(cursor);
+        first_entry = Some(entry);
+    }
+    let guid = RequestGuid::new();
+    ctx.add_continuation(
+        guid.clone(),
+        RpcContinuation::ws(
+            Box::new(move |ws_fut| ws_handler(first_entry, stream, ws_fut).boxed()),
+            Duration::from_secs(30),
+        ),
+    )
+    .await;
+    Ok(LogFollowResponse { start_cursor, guid })
+}
+// #[tokio::test]
+// pub async fn test_logs() {
+//     let response = fetch_logs(
+//         // change `tor.service` to an actual journald unit on your machine
+//         // LogSource::Service("tor.service"),
+//         // first run `docker run --name=hello-world.embassy --log-driver=journald hello-world`
+//         LogSource::Container("hello-world".parse().unwrap()),
+//         // Some(5),
+//         None,
+//         None,
+//         // Some("s=1b8c418e28534400856c27b211dd94fd;i=5a7;b=97571c13a1284f87bc0639b5cff5acbe;m=740e916;t=5ca073eea3445;x=f45bc233ca328348".to_owned()),
+//         false,
+//         true,
+//     )
+//     .await
+//     .unwrap();
+//     let serialized = serde_json::to_string_pretty(&response).unwrap();
+//     println!("{}", serialized);
+// }
+// #[tokio::test]
+// pub async fn test_logs() {
+//     let mut cmd = Command::new("journalctl");
+//     cmd.kill_on_drop(true);
+//     cmd.arg("-f");
+//     cmd.arg("CONTAINER_NAME=hello-world.embassy");
+//     let mut child = cmd.stdout(Stdio::piped()).spawn().unwrap();
+//     let out = BufReader::new(
+//         child
+//             .stdout
+//             .take()
+//             .ok_or_else(|| Error::new(eyre!("No stdout available"), crate::ErrorKind::Journald))
+//             .unwrap(),
+//     );
+//     let mut journalctl_entries = LinesStream::new(out.lines());
+//     while let Some(line) = journalctl_entries.try_next().await.unwrap() {
+//         dbg!(line);
+//     }
+// }
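The consolidated `journalctl` helper drives both the one-shot and `--follow` paths from the same flag logic: `--after-cursor` plus `--reverse` when paging backwards from a cursor, and `--follow` for live streaming. A pure sketch of that argument construction (the standalone function here is illustrative, not the code above, but the flag semantics match the diff):

```rust
// Build the journalctl argument list from cursor/before/follow, mirroring the
// branching in the diff's `journalctl` helper.
fn journalctl_args(cursor: Option<&str>, before: bool, follow: bool) -> Vec<String> {
    let mut args: Vec<String> = vec!["--output=json".into(), "--output-fields=MESSAGE".into()];
    if let Some(c) = cursor {
        args.push(format!("--after-cursor={}", c));
        if before {
            // Paging backwards: emit entries newest-first so the caller can
            // re-reverse them into chronological order.
            args.push("--reverse".into());
        }
    }
    if follow {
        args.push("--follow".into());
    }
    args
}

fn main() {
    let paging_back = journalctl_args(Some("s=abc"), true, false);
    assert!(paging_back.contains(&"--after-cursor=s=abc".to_string()));
    assert!(paging_back.contains(&"--reverse".to_string()));

    let live = journalctl_args(None, false, true);
    assert!(live.contains(&"--follow".to_string()));
    assert!(!live.contains(&"--reverse".to_string()));
}
```

Note that `--reverse` only applies when a cursor is present, which is why `fetch_logs` re-reverses its output under the same `cursor.is_some() && before` condition.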

View File

@@ -1,16 +1,95 @@
 use std::collections::BTreeMap;
 use std::sync::atomic::{AtomicBool, Ordering};
-use patch_db::{DbHandle, LockType};
+use patch_db::{DbHandle, LockReceipt, LockType};
 use tracing::instrument;
 use crate::context::RpcContext;
+use crate::db::model::CurrentDependents;
 use crate::dependencies::{break_transitive, heal_transitive, DependencyError};
-use crate::s9pk::manifest::PackageId;
+use crate::s9pk::manifest::{Manifest, PackageId};
 use crate::status::health_check::{HealthCheckId, HealthCheckResult};
 use crate::status::MainStatus;
 use crate::Error;
+struct HealthCheckPreInformationReceipt {
+    status_model: LockReceipt<MainStatus, ()>,
+    manifest: LockReceipt<Manifest, ()>,
+}
+impl HealthCheckPreInformationReceipt {
+    pub async fn new(db: &'_ mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks, id);
+        setup(&db.lock_all(locks).await?)
+    }
+    pub fn setup(
+        locks: &mut Vec<patch_db::LockTargetId>,
+        id: &PackageId,
+    ) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
+        let status_model = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(id)
+            .and_then(|x| x.installed())
+            .map(|x| x.status().main())
+            .make_locker(LockType::Read)
+            .add_to_keys(locks);
+        let manifest = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(id)
+            .and_then(|x| x.installed())
+            .map(|x| x.manifest())
+            .make_locker(LockType::Read)
+            .add_to_keys(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                status_model: status_model.verify(skeleton_key)?,
+                manifest: manifest.verify(skeleton_key)?,
+            })
+        }
+    }
+}
+struct HealthCheckStatusReceipt {
+    status: LockReceipt<MainStatus, ()>,
+    current_dependents: LockReceipt<CurrentDependents, ()>,
+}
+impl HealthCheckStatusReceipt {
+    pub async fn new(db: &'_ mut impl DbHandle, id: &PackageId) -> Result<Self, Error> {
+        let mut locks = Vec::new();
+        let setup = Self::setup(&mut locks, id);
+        setup(&db.lock_all(locks).await?)
+    }
+    pub fn setup(
+        locks: &mut Vec<patch_db::LockTargetId>,
+        id: &PackageId,
+    ) -> impl FnOnce(&patch_db::Verifier) -> Result<Self, Error> {
+        let status = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(id)
+            .and_then(|x| x.installed())
+            .map(|x| x.status().main())
+            .make_locker(LockType::Write)
+            .add_to_keys(locks);
+        let current_dependents = crate::db::DatabaseModel::new()
+            .package_data()
+            .idx_model(id)
+            .and_then(|x| x.installed())
+            .map(|x| x.current_dependents())
+            .make_locker(LockType::Read)
+            .add_to_keys(locks);
+        move |skeleton_key| {
+            Ok(Self {
+                status: status.verify(skeleton_key)?,
+                current_dependents: current_dependents.verify(skeleton_key)?,
+            })
+        }
+    }
+}
 #[instrument(skip(ctx, db))]
 pub async fn check<Db: DbHandle>(
     ctx: &RpcContext,
@@ -19,35 +98,17 @@ pub async fn check<Db: DbHandle>(
     should_commit: &AtomicBool,
 ) -> Result<(), Error> {
     let mut tx = db.begin().await?;
-    let mut checkpoint = tx.begin().await?;
-    let installed_model = crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(id)
-        .expect(&mut checkpoint)
-        .await?
-        .installed()
-        .expect(&mut checkpoint)
-        .await?;
-    let manifest = installed_model
-        .clone()
-        .manifest()
-        .get(&mut checkpoint, true)
-        .await?
-        .into_owned();
-    let started = installed_model
-        .clone()
-        .status()
-        .main()
-        .started()
-        .get(&mut checkpoint, true)
-        .await?
-        .into_owned();
-    checkpoint.save().await?;
+    let (manifest, started) = {
+        let mut checkpoint = tx.begin().await?;
+        let receipts = HealthCheckPreInformationReceipt::new(&mut checkpoint, id).await?;
+        let manifest = receipts.manifest.get(&mut checkpoint).await?;
+        let started = receipts.status_model.get(&mut checkpoint).await?.started();
+        checkpoint.save().await?;
+        (manifest, started)
+    };
     let health_results = if let Some(started) = started {
         manifest
@@ -61,44 +122,35 @@ pub async fn check<Db: DbHandle>(
     if !should_commit.load(Ordering::SeqCst) {
         return Ok(());
     }
-    let mut checkpoint = tx.begin().await?;
-    crate::db::DatabaseModel::new()
-        .package_data()
-        .lock(&mut checkpoint, LockType::Write)
-        .await?;
-    let mut status = crate::db::DatabaseModel::new()
-        .package_data()
-        .idx_model(id)
-        .expect(&mut checkpoint)
-        .await?
-        .installed()
-        .expect(&mut checkpoint)
-        .await?
-        .status()
-        .main()
-        .get_mut(&mut checkpoint)
-        .await?;
-    match &mut *status {
-        MainStatus::Running { health, .. } => {
-            *health = health_results.clone();
-        }
-        _ => (),
-    }
-    status.save(&mut checkpoint).await?;
-    let current_dependents = installed_model
-        .current_dependents()
-        .get(&mut checkpoint, true)
-        .await?;
-    checkpoint.save().await?;
-    for (dependent, info) in &*current_dependents {
+    let current_dependents = {
+        let mut checkpoint = tx.begin().await?;
+        let receipts = HealthCheckStatusReceipt::new(&mut checkpoint, id).await?;
+        let status = receipts.status.get(&mut checkpoint).await?;
+        if let MainStatus::Running { health: _, started } = status {
+            receipts
+                .status
+                .set(
+                    &mut checkpoint,
+                    MainStatus::Running {
+                        health: health_results.clone(),
+                        started,
+                    },
+                )
+                .await?;
+        }
+        let current_dependents = receipts.current_dependents.get(&mut checkpoint).await?;
+        checkpoint.save().await?;
+        current_dependents
+    };
+    tracing::debug!("Checking health of {}", id);
+    let receipts = crate::dependencies::BreakTransitiveReceipts::new(&mut tx).await?;
+    tracing::debug!("Got receipts {}", id);
+    for (dependent, info) in (current_dependents).0.iter() {
         let failures: BTreeMap<HealthCheckId, HealthCheckResult> = health_results
             .iter()
             .filter(|(_, hc_res)| !matches!(hc_res, HealthCheckResult::Success { .. }))
@@ -113,10 +165,11 @@ pub async fn check<Db: DbHandle>(
                 id,
                 DependencyError::HealthChecksFailed { failures },
                 &mut BTreeMap::new(),
+                &receipts,
             )
             .await?;
         } else {
-            heal_transitive(ctx, &mut tx, &dependent, id).await?;
+            heal_transitive(ctx, &mut tx, &dependent, id, &receipts.dependency_receipt).await?;
        }
    }
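For each dependent, the health check keeps only the non-successful results and either breaks or heals the dependency edge depending on whether that failure set is empty. A minimal sketch of the filtering step (the simplified `HealthCheckResult` enum below elides the real variants and their data):

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the real HealthCheckResult, which has more variants.
#[derive(Clone, Debug)]
enum HealthCheckResult {
    Success,
    Failure(String),
}

// Collect only the failing checks, keyed by check id, like the `failures`
// map built inside the dependents loop.
fn failures(results: &BTreeMap<String, HealthCheckResult>) -> BTreeMap<String, HealthCheckResult> {
    results
        .iter()
        .filter(|(_, r)| !matches!(r, HealthCheckResult::Success))
        .map(|(id, r)| (id.clone(), r.clone()))
        .collect()
}

fn main() {
    let mut results = BTreeMap::new();
    results.insert("web-ui".to_string(), HealthCheckResult::Success);
    results.insert("rpc".to_string(), HealthCheckResult::Failure("timed out".into()));

    let failed = failures(&results);
    // Only the failing check propagates to dependents (break_transitive);
    // an empty map would trigger heal_transitive instead.
    assert_eq!(failed.len(), 1);
    assert!(failed.contains_key("rpc"));
}
```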

View File

@@ -12,20 +12,20 @@ use color_eyre::eyre::eyre;
 use nix::sys::signal::Signal;
 use num_enum::TryFromPrimitive;
 use patch_db::DbHandle;
-use sqlx::{Executor, Sqlite};
+use sqlx::{Executor, Postgres};
 use tokio::sync::watch::error::RecvError;
 use tokio::sync::watch::{channel, Receiver, Sender};
 use tokio::sync::{Notify, RwLock};
 use torut::onion::TorSecretKeyV3;
 use tracing::instrument;
-use crate::action::docker::DockerAction;
-use crate::action::{ActionImplementation, NoOutput};
 use crate::context::RpcContext;
 use crate::manager::sync::synchronizer;
 use crate::net::interface::InterfaceId;
 use crate::net::GeneratedCertificateMountPoint;
 use crate::notifications::NotificationLevel;
+use crate::procedure::docker::DockerProcedure;
+use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
 use crate::s9pk::manifest::{Manifest, PackageId};
 use crate::status::MainStatus;
 use crate::util::{Container, NonDetachingJoinHandle, Version};
@@ -47,7 +47,7 @@ impl ManagerMap {
         secrets: &mut Ex,
     ) -> Result<(), Error>
     where
-        for<'a> &'a mut Ex: Executor<'a, Database = Sqlite>,
+        for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
     {
         let mut res = BTreeMap::new();
         for package in crate::db::DatabaseModel::new()
@@ -229,7 +229,10 @@ async fn run_main(
                 break;
             }
         }
-        Err(bollard::errors::Error::DockerResponseNotFoundError { .. }) => (),
+        Err(bollard::errors::Error::DockerResponseServerError {
+            status_code: 404, // NOT FOUND
+            ..
+        }) => (),
         Err(e) => Err(e)?,
     }
     match futures::poll!(&mut runtime) {
@@ -293,6 +296,7 @@ async fn run_main(
         .net_controller
         .remove(
             &state.manifest.id,
+            ip,
             state.manifest.interfaces.0.keys().cloned(),
         )
         .await?;
@@ -312,7 +316,7 @@ async fn start_up_image(
         &rt_state.ctx,
         &rt_state.manifest.id,
         &rt_state.manifest.version,
-        None,
+        ProcedureName::Main,
         &rt_state.manifest.volumes,
         None,
         false,
@@ -333,7 +337,7 @@ impl Manager {
             ctx,
             status: AtomicUsize::new(Status::Stopped as usize),
             on_stop,
-            container_name: DockerAction::container_name(&manifest.id, None),
+            container_name: DockerProcedure::container_name(&manifest.id, None),
             manifest,
             tor_keys,
             synchronized: Notify::new(),
@@ -374,8 +378,13 @@ impl Manager {
             .or_else(|e| {
                 if matches!(
                     e,
-                    bollard::errors::Error::DockerResponseConflictError { .. }
-                        | bollard::errors::Error::DockerResponseNotFoundError { .. }
+                    bollard::errors::Error::DockerResponseServerError {
+                        status_code: 409, // CONFLICT
+                        ..
+                    } | bollard::errors::Error::DockerResponseServerError {
+                        status_code: 404, // NOT FOUND
+                        ..
+                    }
                 ) {
                     Ok(())
                 } else {
@@ -391,6 +400,11 @@ impl Manager {
             .commit_health_check_results
             .store(false, Ordering::SeqCst);
         let _ = self.shared.on_stop.send(OnStop::Exit);
+        let action = match &self.shared.manifest.main {
+            PackageProcedure::Docker(a) => a,
+            #[cfg(feature = "js_engine")]
+            PackageProcedure::Script(_) => return Ok(()),
+        };
         match self
             .shared
             .ctx
@@ -398,20 +412,27 @@ impl Manager {
             .stop_container(
                 &self.shared.container_name,
                 Some(StopContainerOptions {
-                    t: match &self.shared.manifest.main {
-                        ActionImplementation::Docker(a) => a,
-                    }
-                    .sigterm_timeout
-                    .map(|a| *a)
-                    .unwrap_or(Duration::from_secs(30))
-                    .as_secs_f64() as i64,
+                    t: action
+                        .sigterm_timeout
+                        .map(|a| *a)
+                        .unwrap_or(Duration::from_secs(30))
+                        .as_secs_f64() as i64,
                 }),
             )
             .await
         {
-            Err(bollard::errors::Error::DockerResponseNotFoundError { .. })
-            | Err(bollard::errors::Error::DockerResponseConflictError { .. })
-            | Err(bollard::errors::Error::DockerResponseNotModifiedError { .. }) => (), // Already stopped
+            Err(bollard::errors::Error::DockerResponseServerError {
+                status_code: 404, // NOT FOUND
+                ..
+            })
+            | Err(bollard::errors::Error::DockerResponseServerError {
+                status_code: 409, // CONFLICT
+                ..
+            })
+            | Err(bollard::errors::Error::DockerResponseServerError {
+                status_code: 304, // NOT MODIFIED
+                ..
+            }) => (), // Already stopped
             a => a?,
         };
         self.shared.status.store(
@@ -542,26 +563,38 @@ async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
     ) {
         resume(shared).await?;
     }
+    let action = match &shared.manifest.main {
+        PackageProcedure::Docker(a) => a,
+        #[cfg(feature = "js_engine")]
+        PackageProcedure::Script(_) => return Ok(()),
+    };
     match shared
         .ctx
         .docker
         .stop_container(
             &shared.container_name,
             Some(StopContainerOptions {
-                t: match &shared.manifest.main {
-                    ActionImplementation::Docker(a) => a,
-                }
-                .sigterm_timeout
-                .map(|a| *a)
-                .unwrap_or(Duration::from_secs(30))
-                .as_secs_f64() as i64,
+                t: action
+                    .sigterm_timeout
+                    .map(|a| *a)
+                    .unwrap_or(Duration::from_secs(30))
+                    .as_secs_f64() as i64,
             }),
         )
         .await
     {
-        Err(bollard::errors::Error::DockerResponseNotFoundError { .. })
-        | Err(bollard::errors::Error::DockerResponseConflictError { .. })
-        | Err(bollard::errors::Error::DockerResponseNotModifiedError { .. }) => (), // Already stopped
+        Err(bollard::errors::Error::DockerResponseServerError {
+            status_code: 404, // NOT FOUND
+            ..
+        })
+        | Err(bollard::errors::Error::DockerResponseServerError {
+            status_code: 409, // CONFLICT
+            ..
+        })
+        | Err(bollard::errors::Error::DockerResponseServerError {
+            status_code: 304, // NOT MODIFIED
+            ..
+        }) => (), // Already stopped
        a => a?,
    };
    shared.status.store(


@@ -31,7 +31,10 @@ async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error>
             MainStatus::Stopping => {
                 *status = MainStatus::Stopped;
             }
-            MainStatus::Starting => {
+            MainStatus::Restarting => {
+                *status = MainStatus::Starting { restarting: true };
+            }
+            MainStatus::Starting { .. } => {
                 start(shared).await?;
             }
             MainStatus::Running { started, .. } => {
@@ -41,19 +44,19 @@ async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error>
             MainStatus::BackingUp { .. } => (),
         },
         Status::Starting => match *status {
-            MainStatus::Stopped | MainStatus::Stopping => {
+            MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
                 stop(shared).await?;
             }
-            MainStatus::Starting | MainStatus::Running { .. } => (),
+            MainStatus::Starting { .. } | MainStatus::Running { .. } => (),
             MainStatus::BackingUp { .. } => {
                 pause(shared).await?;
             }
         },
         Status::Running => match *status {
-            MainStatus::Stopped | MainStatus::Stopping => {
+            MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
                 stop(shared).await?;
             }
-            MainStatus::Starting => {
+            MainStatus::Starting { .. } => {
                 *status = MainStatus::Running {
                     started: Utc::now(),
                     health: BTreeMap::new(),
@@ -65,10 +68,10 @@ async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error>
             }
         },
         Status::Paused => match *status {
-            MainStatus::Stopped | MainStatus::Stopping => {
+            MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
                 stop(shared).await?;
             }
-            MainStatus::Starting | MainStatus::Running { .. } => {
+            MainStatus::Starting { .. } | MainStatus::Running { .. } => {
                 resume(shared).await?;
             }
             MainStatus::BackingUp { .. } => (),


@@ -12,7 +12,9 @@ use rpc_toolkit::command_helpers::prelude::RequestParts;
 use rpc_toolkit::hyper::header::COOKIE;
 use rpc_toolkit::hyper::http::Error as HttpError;
 use rpc_toolkit::hyper::{Body, Request, Response};
-use rpc_toolkit::rpc_server_helpers::{noop3, to_response, DynMiddleware, DynMiddlewareStage2};
+use rpc_toolkit::rpc_server_helpers::{
+    noop4, to_response, DynMiddleware, DynMiddlewareStage2, DynMiddlewareStage3,
+};
 use rpc_toolkit::yajrc::RpcMethod;
 use rpc_toolkit::Metadata;
 use serde::{Deserialize, Serialize};
@@ -34,33 +36,21 @@ impl HasLoggedOutSessions {
         logged_out_sessions: impl IntoIterator<Item = impl AsLogoutSessionId>,
         ctx: &RpcContext,
     ) -> Result<Self, Error> {
-        let sessions = logged_out_sessions
-            .into_iter()
-            .by_ref()
-            .map(|x| x.as_logout_session_id())
-            .collect::<Vec<_>>();
+        let mut open_authed_websockets = ctx.open_authed_websockets.lock().await;
         let mut sqlx_conn = ctx.secret_store.acquire().await?;
-        for session in &sessions {
+        for session in logged_out_sessions {
+            let session = session.as_logout_session_id();
             sqlx::query!(
-                "UPDATE session SET logged_out = CURRENT_TIMESTAMP WHERE id = ?",
+                "UPDATE session SET logged_out = CURRENT_TIMESTAMP WHERE id = $1",
                 session
             )
             .execute(&mut sqlx_conn)
             .await?;
-        }
-        drop(sqlx_conn);
-        for session in sessions {
-            for socket in ctx
-                .open_authed_websockets
-                .lock()
-                .await
-                .remove(&session)
-                .unwrap_or_default()
-            {
+            for socket in open_authed_websockets.remove(&session).unwrap_or_default() {
                 let _ = socket.send(());
             }
         }
-        Ok(Self(()))
+        Ok(HasLoggedOutSessions(()))
     }
 }
@@ -78,7 +68,7 @@ impl HasValidSession {
     pub async fn from_session(session: &HashSessionToken, ctx: &RpcContext) -> Result<Self, Error> {
         let session_hash = session.hashed();
-        let session = sqlx::query!("UPDATE session SET last_active = CURRENT_TIMESTAMP WHERE id = ? AND logged_out IS NULL OR logged_out > CURRENT_TIMESTAMP", session_hash)
+        let session = sqlx::query!("UPDATE session SET last_active = CURRENT_TIMESTAMP WHERE id = $1 AND logged_out IS NULL OR logged_out > CURRENT_TIMESTAMP", session_hash)
             .execute(&mut ctx.secret_store.acquire().await?)
             .await?;
         if session.rows_affected() == 0 {
@@ -210,8 +200,7 @@ pub fn auth<M: Metadata>(ctx: RpcContext) -> DynMiddleware<M> {
                     |_| StatusCode::OK,
                 )?));
             } else if rpc_req.method.as_str() == "auth.login" {
-                let mut guard = rate_limiter.lock().await;
-                guard.0 += 1;
+                let guard = rate_limiter.lock().await;
                 if guard.1.elapsed() < Duration::from_secs(20) {
                     if guard.0 >= 3 {
                         let (res_parts, _) = Response::new(()).into_parts();
@@ -228,13 +217,25 @@ pub fn auth<M: Metadata>(ctx: RpcContext) -> DynMiddleware<M> {
                         |_| StatusCode::OK,
                     )?));
                 }
+            }
+        }
+    }
+    let m3: DynMiddlewareStage3 = Box::new(move |_, res| {
+        async move {
+            let mut guard = rate_limiter.lock().await;
+            if guard.1.elapsed() < Duration::from_secs(20) {
+                if res.is_err() {
+                    guard.0 += 1;
+                }
             } else {
                 guard.0 = 0;
             }
             guard.1 = Instant::now();
+            Ok(Ok(noop4()))
         }
+        .boxed()
-    }
-    Ok(Ok(noop3()))
+    });
+    Ok(Ok(m3))
 }
 .boxed()
 });


@@ -0,0 +1,84 @@
use color_eyre::eyre::eyre;
use futures::future::BoxFuture;
use futures::FutureExt;
use http::HeaderValue;
use rpc_toolkit::hyper::http::Error as HttpError;
use rpc_toolkit::hyper::{Body, Request, Response};
use rpc_toolkit::rpc_server_helpers::{
    noop4, DynMiddleware, DynMiddlewareStage2, DynMiddlewareStage3,
};
use rpc_toolkit::yajrc::RpcMethod;
use rpc_toolkit::Metadata;

use crate::context::RpcContext;
use crate::{Error, ResultExt};

pub fn db<M: Metadata>(ctx: RpcContext) -> DynMiddleware<M> {
    Box::new(
        move |_: &mut Request<Body>,
              metadata: M|
              -> BoxFuture<Result<Result<DynMiddlewareStage2, Response<Body>>, HttpError>> {
            let ctx = ctx.clone();
            async move {
                let m2: DynMiddlewareStage2 = Box::new(move |req, rpc_req| {
                    async move {
                        let seq = req.headers.remove("x-patch-sequence");
                        let sync_db = metadata
                            .get(rpc_req.method.as_str(), "sync_db")
                            .unwrap_or(false);
                        let m3: DynMiddlewareStage3 = Box::new(move |res, _| {
                            async move {
                                if sync_db && seq.is_some() {
                                    match async {
                                        let seq = seq
                                            .ok_or_else(|| {
                                                Error::new(
                                                    eyre!("Missing X-Patch-Sequence"),
                                                    crate::ErrorKind::InvalidRequest,
                                                )
                                            })?
                                            .to_str()
                                            .with_kind(crate::ErrorKind::InvalidRequest)?
                                            .parse()?;
                                        let res = ctx.db.sync(seq).await?;
                                        let json = match res {
                                            Ok(revs) => serde_json::to_vec(&revs),
                                            Err(dump) => serde_json::to_vec(&[dump]),
                                        }
                                        .with_kind(crate::ErrorKind::Serialization)?;
                                        Ok::<_, Error>(base64::encode_config(
                                            &json,
                                            base64::URL_SAFE,
                                        ))
                                    }
                                    .await
                                    {
                                        Ok(a) => res
                                            .headers
                                            .append("X-Patch-Updates", HeaderValue::from_str(&a)?),
                                        Err(e) => res.headers.append(
                                            "X-Patch-Error",
                                            HeaderValue::from_str(&base64::encode_config(
                                                &e.to_string(),
                                                base64::URL_SAFE,
                                            ))?,
                                        ),
                                    };
                                }
                                Ok(Ok(noop4()))
                            }
                            .boxed()
                        });
                        Ok(Ok(m3))
                    }
                    .boxed()
                });
                Ok(Ok(m2))
            }
            .boxed()
        },
    )
}


@@ -1,24 +1,14 @@
-use std::future::Future;
 use std::sync::Arc;
 use aes::cipher::{CipherKey, NewCipher, Nonce, StreamCipher};
 use aes::Aes256Ctr;
-use color_eyre::eyre::eyre;
-use futures::future::BoxFuture;
-use futures::{FutureExt, Stream};
+use futures::Stream;
 use hmac::Hmac;
-use http::{HeaderMap, HeaderValue};
-use rpc_toolkit::hyper::http::Error as HttpError;
-use rpc_toolkit::hyper::{self, Body, Request, Response, StatusCode};
-use rpc_toolkit::rpc_server_helpers::{
-    to_response, DynMiddleware, DynMiddlewareStage2, DynMiddlewareStage3, DynMiddlewareStage4,
-};
-use rpc_toolkit::yajrc::RpcMethod;
-use rpc_toolkit::Metadata;
+use josekit::jwk::Jwk;
+use rpc_toolkit::hyper::{self, Body};
+use serde::{Deserialize, Serialize};
 use sha2::Sha256;
 use tracing::instrument;
-use crate::util::Apply;
-use crate::Error;
 pub fn pbkdf2(password: impl AsRef<[u8]>, salt: impl AsRef<[u8]>) -> CipherKey<Aes256Ctr> {
     let mut aeskey = CipherKey::<Aes256Ctr>::default();
@@ -35,7 +25,7 @@ pub fn encrypt_slice(input: impl AsRef<[u8]>, password: impl AsRef<[u8]>) -> Vec
     let prefix: [u8; 32] = rand::random();
     let aeskey = pbkdf2(password.as_ref(), &prefix[16..]);
     let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
-    let mut aes = Aes256Ctr::new(&aeskey, &ctr);
+    let mut aes = Aes256Ctr::new(&aeskey, ctr);
     let mut res = Vec::with_capacity(32 + input.as_ref().len());
     res.extend_from_slice(&prefix[..]);
     res.extend_from_slice(input.as_ref());
@@ -50,225 +40,79 @@ pub fn decrypt_slice(input: impl AsRef<[u8]>, password: impl AsRef<[u8]>) -> Vec
     let (prefix, rest) = input.as_ref().split_at(32);
     let aeskey = pbkdf2(password.as_ref(), &prefix[16..]);
     let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
-    let mut aes = Aes256Ctr::new(&aeskey, &ctr);
+    let mut aes = Aes256Ctr::new(&aeskey, ctr);
     let mut res = rest.to_vec();
     aes.apply_keystream(&mut res);
     res
 }
-#[pin_project::pin_project]
-pub struct DecryptStream {
-    key: Arc<String>,
-    #[pin]
-    body: Body,
-    ctr: Vec<u8>,
-    salt: Vec<u8>,
-    aes: Option<Aes256Ctr>,
-}
-impl DecryptStream {
-    pub fn new(key: Arc<String>, body: Body) -> Self {
-        DecryptStream {
-            key,
-            body,
-            ctr: Vec::new(),
-            salt: Vec::new(),
-            aes: None,
-        }
-    }
-}
-impl Stream for DecryptStream {
-    type Item = hyper::Result<hyper::body::Bytes>;
-    fn poll_next(
-        self: std::pin::Pin<&mut Self>,
-        cx: &mut std::task::Context<'_>,
-    ) -> std::task::Poll<Option<Self::Item>> {
-        let this = self.project();
-        match this.body.poll_next(cx) {
-            std::task::Poll::Pending => std::task::Poll::Pending,
-            std::task::Poll::Ready(Some(Ok(bytes))) => std::task::Poll::Ready(Some(Ok({
-                let mut buf = &*bytes;
-                if let Some(aes) = this.aes.as_mut() {
-                    let mut res = buf.to_vec();
-                    aes.apply_keystream(&mut res);
-                    res.into()
-                } else {
-                    if this.ctr.len() < 16 && buf.len() > 0 {
-                        let to_read = std::cmp::min(16 - this.ctr.len(), buf.len());
-                        this.ctr.extend_from_slice(&buf[0..to_read]);
-                        buf = &buf[to_read..];
-                    }
-                    if this.salt.len() < 16 && buf.len() > 0 {
-                        let to_read = std::cmp::min(16 - this.salt.len(), buf.len());
-                        this.salt.extend_from_slice(&buf[0..to_read]);
-                        buf = &buf[to_read..];
-                    }
-                    if this.ctr.len() == 16 && this.salt.len() == 16 {
-                        let aeskey = pbkdf2(this.key.as_bytes(), &this.salt);
-                        let ctr = Nonce::<Aes256Ctr>::from_slice(&this.ctr);
-                        let mut aes = Aes256Ctr::new(&aeskey, &ctr);
-                        let mut res = buf.to_vec();
-                        aes.apply_keystream(&mut res);
-                        *this.aes = Some(aes);
-                        res.into()
-                    } else {
-                        hyper::body::Bytes::new()
-                    }
-                }
-            }))),
-            std::task::Poll::Ready(a) => std::task::Poll::Ready(a),
-        }
-    }
-}
-#[pin_project::pin_project]
-pub struct EncryptStream {
-    #[pin]
-    body: Body,
-    aes: Aes256Ctr,
-    prefix: Option<[u8; 32]>,
-}
-impl EncryptStream {
-    pub fn new(key: &str, body: Body) -> Self {
-        let prefix: [u8; 32] = rand::random();
-        let aeskey = pbkdf2(key.as_bytes(), &prefix[16..]);
-        let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
-        let aes = Aes256Ctr::new(&aeskey, &ctr);
-        EncryptStream {
-            body,
-            aes,
-            prefix: Some(prefix),
-        }
-    }
-}
-impl Stream for EncryptStream {
-    type Item = hyper::Result<hyper::body::Bytes>;
-    fn poll_next(
-        self: std::pin::Pin<&mut Self>,
-        cx: &mut std::task::Context<'_>,
-    ) -> std::task::Poll<Option<Self::Item>> {
-        let this = self.project();
-        if let Some(prefix) = this.prefix.take() {
-            std::task::Poll::Ready(Some(Ok(prefix.to_vec().into())))
-        } else {
-            match this.body.poll_next(cx) {
-                std::task::Poll::Pending => std::task::Poll::Pending,
-                std::task::Poll::Ready(Some(Ok(bytes))) => std::task::Poll::Ready(Some(Ok({
-                    let mut res = bytes.to_vec();
-                    this.aes.apply_keystream(&mut res);
-                    res.into()
-                }))),
-                std::task::Poll::Ready(a) => std::task::Poll::Ready(a),
-            }
-        }
-    }
-}
+#[derive(Debug, Clone, Deserialize, Serialize)]
+pub struct EncryptedWire {
+    encrypted: serde_json::Value,
+}
+impl EncryptedWire {
+    #[instrument(skip(current_secret))]
+    pub fn decrypt(self, current_secret: impl AsRef<Jwk>) -> Option<String> {
+        let current_secret = current_secret.as_ref();
+        let decrypter = match josekit::jwe::alg::ecdh_es::EcdhEsJweAlgorithm::EcdhEs
+            .decrypter_from_jwk(current_secret)
+        {
+            Ok(a) => a,
+            Err(e) => {
+                tracing::warn!("Could not setup awk");
+                tracing::debug!("{:?}", e);
+                return None;
+            }
+        };
+        let encrypted = match serde_json::to_string(&self.encrypted) {
+            Ok(a) => a,
+            Err(e) => {
+                tracing::warn!("Could not deserialize");
+                tracing::debug!("{:?}", e);
+                return None;
+            }
+        };
+        let (decoded, _) = match josekit::jwe::deserialize_json(&encrypted, &decrypter) {
+            Ok(a) => a,
+            Err(e) => {
+                tracing::warn!("Could not decrypt");
+                tracing::debug!("{:?}", e);
+                return None;
+            }
+        };
+        match String::from_utf8(decoded) {
+            Ok(a) => Some(a),
+            Err(e) => {
+                tracing::warn!("Could not decrypt into utf8");
+                tracing::debug!("{:?}", e);
+                return None;
+            }
+        }
+    }
+}
-fn encrypted(headers: &HeaderMap) -> bool {
-    headers
-        .get("Content-Encoding")
-        .and_then(|h| {
-            h.to_str()
-                .ok()?
-                .split(",")
-                .any(|s| s == "aesctr256")
-                .apply(Some)
-        })
-        .unwrap_or_default()
-}
-pub fn encrypt<
-    F: Fn() -> Fut + Send + Sync + Clone + 'static,
-    Fut: Future<Output = Result<Arc<String>, Error>> + Send + Sync + 'static,
-    M: Metadata,
->(
-    keysource: F,
-) -> DynMiddleware<M> {
-    Box::new(
-        move |req: &mut Request<Body>,
-              metadata: M|
-              -> BoxFuture<Result<Result<DynMiddlewareStage2, Response<Body>>, HttpError>> {
-            let keysource = keysource.clone();
-            async move {
-                let encrypted = encrypted(req.headers());
-                let key = if encrypted {
-                    let key = match keysource().await {
-                        Ok(s) => s,
-                        Err(e) => {
-                            let (res_parts, _) = Response::new(()).into_parts();
-                            return Ok(Err(to_response(
-                                req.headers(),
-                                res_parts,
-                                Err(e.into()),
-                                |_| StatusCode::OK,
-                            )?));
-                        }
-                    };
-                    let body = std::mem::take(req.body_mut());
-                    *req.body_mut() = Body::wrap_stream(DecryptStream::new(key.clone(), body));
-                    Some(key)
-                } else {
-                    None
-                };
-                let res: DynMiddlewareStage2 = Box::new(move |req, rpc_req| {
-                    async move {
-                        if !encrypted
-                            && metadata
-                                .get(&rpc_req.method.as_str(), "authenticated")
-                                .unwrap_or(true)
-                        {
-                            let (res_parts, _) = Response::new(()).into_parts();
-                            Ok(Err(to_response(
-                                &req.headers,
-                                res_parts,
-                                Err(Error::new(
-                                    eyre!("Must be encrypted"),
-                                    crate::ErrorKind::Authorization,
-                                )
-                                .into()),
-                                |_| StatusCode::OK,
-                            )?))
-                        } else {
-                            let res: DynMiddlewareStage3 = Box::new(move |_, _| {
-                                async move {
-                                    let res: DynMiddlewareStage4 = Box::new(move |res| {
-                                        async move {
-                                            if let Some(key) = key {
-                                                res.headers_mut().insert(
-                                                    "Content-Encoding",
-                                                    HeaderValue::from_static("aesctr256"),
-                                                );
-                                                if let Some(len_header) =
-                                                    res.headers_mut().get_mut("Content-Length")
-                                                {
-                                                    if let Some(len) = len_header
-                                                        .to_str()
-                                                        .ok()
-                                                        .and_then(|l| l.parse::<u64>().ok())
-                                                    {
-                                                        *len_header = HeaderValue::from(len + 32);
-                                                    }
-                                                }
-                                                let body = std::mem::take(res.body_mut());
-                                                *res.body_mut() = Body::wrap_stream(
-                                                    EncryptStream::new(key.as_ref(), body),
-                                                );
-                                            }
-                                            Ok(())
-                                        }
-                                        .boxed()
-                                    });
-                                    Ok(Ok(res))
-                                }
-                                .boxed()
-                            });
-                            Ok(Ok(res))
-                        }
-                    }
-                    .boxed()
-                });
-                Ok(Ok(res))
-            }
-            .boxed()
-        },
-    )
-}
+/// We created this test by first making the private key, then restoring from this private key for recreatability.
+/// After this the frontend then encoded an password, then we are testing that the output that we got (hand coded)
+/// will be the shape we want.
+#[test]
+fn test_gen_awk() {
+    let private_key: Jwk = serde_json::from_str(
+        r#"{
+            "kty": "EC",
+            "crv": "P-256",
+            "d": "3P-MxbUJtEhdGGpBCRFXkUneGgdyz_DGZWfIAGSCHOU",
+            "x": "yHTDYSfjU809fkSv9MmN4wuojf5c3cnD7ZDN13n-jz4",
+            "y": "8Mpkn744A5KDag0DmX2YivB63srjbugYZzWc3JOpQXI"
+        }"#,
+    )
+    .unwrap();
+    let encrypted: EncryptedWire = serde_json::from_str(r#"{
+        "encrypted": { "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2IiwiYWxnIjoiRUNESC1FUyIsImtpZCI6ImgtZnNXUVh2Tm95dmJEazM5dUNsQ0NUdWc5N3MyZnJockJnWUVBUWVtclUiLCJlcGsiOnsia3R5IjoiRUMiLCJjcnYiOiJQLTI1NiIsIngiOiJmRkF0LXNWYWU2aGNkdWZJeUlmVVdUd3ZvWExaTkdKRHZIWVhIckxwOXNNIiwieSI6IjFvVFN6b00teHlFZC1SLUlBaUFHdXgzS1dJZmNYZHRMQ0JHLUh6MVkzY2sifX0", "iv": "NbwvfvWOdLpZfYRIZUrkcw", "ciphertext": "Zc5Br5kYOlhPkIjQKOLMJw", "tag": "EPoch52lDuCsbUUulzZGfg" }
+    }"#).unwrap();
+    assert_eq!(
+        "testing12345",
+        &encrypted.decrypt(Arc::new(private_key)).unwrap()
+    );
+}


@@ -1,4 +1,5 @@
 pub mod auth;
 pub mod cors;
+pub mod db;
 pub mod diagnostic;
 pub mod encrypt;

backend/src/migrate.load (new file)

@@ -0,0 +1,7 @@
load database
     from sqlite://{sqlite_path}
     into postgresql://root@unix:/var/run/postgresql:5432/secrets
     with include no drop, truncate, reset sequences, data only, workers = 1, concurrency = 1, max parallel create index = 1, batch rows = {batch_rows}, prefetch rows = {prefetch_rows}
     excluding table names like '_sqlx_migrations', 'notifications';


@@ -8,9 +8,9 @@ use patch_db::HasModel;
 use serde::{Deserialize, Serialize};
 use tracing::instrument;
-use crate::action::ActionImplementation;
 use crate::context::RpcContext;
 use crate::id::ImageId;
+use crate::procedure::{PackageProcedure, ProcedureName};
 use crate::s9pk::manifest::PackageId;
 use crate::util::Version;
 use crate::volume::Volumes;
@@ -19,27 +19,36 @@ use crate::{Error, ResultExt};
 #[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel)]
 #[serde(rename_all = "kebab-case")]
 pub struct Migrations {
-    pub from: IndexMap<VersionRange, ActionImplementation>,
-    pub to: IndexMap<VersionRange, ActionImplementation>,
+    pub from: IndexMap<VersionRange, PackageProcedure>,
+    pub to: IndexMap<VersionRange, PackageProcedure>,
 }
 impl Migrations {
     #[instrument]
-    pub fn validate(&self, volumes: &Volumes, image_ids: &BTreeSet<ImageId>) -> Result<(), Error> {
+    pub fn validate(
+        &self,
+        eos_version: &Version,
+        volumes: &Volumes,
+        image_ids: &BTreeSet<ImageId>,
+    ) -> Result<(), Error> {
         for (version, migration) in &self.from {
-            migration.validate(volumes, image_ids, true).with_ctx(|_| {
-                (
-                    crate::ErrorKind::ValidateS9pk,
-                    format!("Migration from {}", version),
-                )
-            })?;
+            migration
+                .validate(eos_version, volumes, image_ids, true)
+                .with_ctx(|_| {
+                    (
+                        crate::ErrorKind::ValidateS9pk,
+                        format!("Migration from {}", version),
+                    )
+                })?;
         }
         for (version, migration) in &self.to {
-            migration.validate(volumes, image_ids, true).with_ctx(|_| {
-                (
-                    crate::ErrorKind::ValidateS9pk,
-                    format!("Migration to {}", version),
-                )
-            })?;
+            migration
+                .validate(eos_version, volumes, image_ids, true)
+                .with_ctx(|_| {
+                    (
+                        crate::ErrorKind::ValidateS9pk,
+                        format!("Migration to {}", version),
+                    )
+                })?;
         }
         Ok(())
     }
@@ -64,7 +73,7 @@ impl Migrations {
             ctx,
             pkg_id,
             pkg_version,
-            Some("Migration"), // Migrations cannot be executed concurrently
+            ProcedureName::Migration, // Migrations cannot be executed concurrently
             volumes,
             Some(version),
             false,
@@ -99,7 +108,7 @@ impl Migrations {
             ctx,
             pkg_id,
             pkg_version,
-            Some("Migration"),
+            ProcedureName::Migration,
             volumes,
             Some(version),
             false,

Some files were not shown because too many files have changed in this diff.