Feature/lxc container runtime (#2514)

* wip: static-server errors

* wip: fix wifi

* wip: Fix the service_effects

* wip: Fix cors in the middleware

* wip(chore): Auth clean up the lint.

* wip(fix): Vhost

* wip: continue manager refactor

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* wip: service manager refactor

* wip: Some fixes

* wip(fix): Fix the lib.rs

* wip

* wip(fix): Logs

* wip: bins

* wip(inspect): Add in the inspect

* wip: config

* wip(fix): Diagnostic

* wip(fix): Dependencies

* wip: context

* wip(fix) Sorta auth

* wip: warnings

* wip(fix): registry/admin

* wip(fix) marketplace

* wip(fix) Some more converted and fixed with the linter and config

* wip: Working on the static server

* wip(fix)static server

* wip: Remove some async

* wip: Something about the request and regular rpc

* wip: gut install

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* wip: Convert the static server into the new system

* wip delete file

* test

* wip(fix) vhost does not need the with safe defaults

* wip: Adding in the wifi

* wip: Fix the developer and the verify

* wip: new install flow

Co-authored-by: J H <Blu-J@users.noreply.github.com>

* fix middleware

* wip

* wip: Fix the auth

* wip

* continue service refactor

* feature: Service get_config

* feat: Action

* wip: Fighting the great fight against the borrow checker

* wip: Remove an error in a file that I just need to deal with later

* chore: Add in some more lifetime stuff to the services

* wip: Install fix on lifetime

* cleanup

* wip: Deal with the borrow later

* more cleanup

* resolve borrowchecker errors

* wip(feat): add in the handler for the socket, for now

* wip(feat): Update the service_effect_handler::action

* chore: Add in the changes to make sure the from_service goes to context

* chore: Change the

* refactor service map

* fix references to service map

* fill out restore

* wip: Before I work on the store stuff

* fix backup module

* handle some warnings

* feat: add in the ui components on the rust side

* feature: Update the procedures

* chore: Update the js side of the main and a few of the others

* chore: Update the rpc listener to match the persistent container

* wip: Working on updating some things to have a better name

* wip(feat): Try and get the rpc to return the correct shape?

* lxc wip

* wip(feat): Try and get the rpc to return the correct shape?

* build for container runtime wip

* remove container-init

* fix build

* fix error

* chore: Update to work I suppose

* lxc wip

* remove docker module and feature

* download alpine squashfs automatically

* overlays effect

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* chore: Add the overlay effect

* feat: Add the mounter in the main

* chore: Convert to use the mounts, still need to work with the sandbox

* install fixes

* fix ssl

* fixes from testing

* implement tmpfile for upload

* wip

* misc fixes

* cleanup

* cleanup

* better progress reporting

* progress for sideload

* return real guid

* add devmode script

* fix lxc rootfs path

* fix percentage bar

* fix progress bar styling

* fix build for unstable

* tweaks

* label progress

* tweaks

* update progress more often

* make symlink in rpc_client

* make socket dir

* fix parent path

* add start-cli to container

* add echo and gitInfo commands

* wip: Add the init + errors

* chore: Add in the exit effect for the system

* chore: Change the type to null for failure to parse

* move sigterm timeout to stopping status

* update order

* chore: Update the return type

* remove dbg

* change the map error

* chore: Update the thing to capture id

* chore add some life changes

* chore: Update the logging

* chore: Update the package to run module

* use From for RpcError

* chore: Update to use import instead

* chore: update

* chore: Use require for the backup

* fix a default

* update the type that is wrong

* chore: Update the type of the manifest

* chore: Update to make null

* only symlink if not exists

* get rid of double result

* better debug info for ErrorCollection

* chore: Update effects

* chore: fix

* mount assets and volumes

* add exec instead of spawn

* fix mounting in image

* fix overlay mounts

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* misc fixes

* feat: Fix two

* fix: systemForEmbassy main

* chore: Fix small part of main loop

* chore: Modify the bundle

* merge

* fix main loop

* move tsc to makefile

* chore: Update the return types of the health check

* fix client

* chore: Convert the todo to use ts-matches

* add in the fixes for the seen and create the hack to allow demo

* chore: Update to include the systemForStartOs

* chore: Update to the latest types from the expected output

* fixes

* fix typo

* Don't emit if failure on tsc

* wip

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* add s9pk api

* add inspection

* add inspect manifest

* newline after display serializable

* fix squashfs in image name

* edit manifest

Co-authored-by: Jade <Blu-J@users.noreply.github.com>

* wait for response on repl

* ignore sig for now

* ignore sig for now

* re-enable sig verification

* fix

* wip

* env and chroot

* add profiling logs

* set uid & gid in squashfs to 100000

* set uid of sqfs to 100000

* fix mksquashfs args

* add env to compat

* fix

* re-add docker feature flag

* fix docker output format being stupid

* here be dragons

* chore: Add in the cross compiling for something

* fix npm link

* extract logs from container on exit

* chore: Update for testing

* add log capture to drop trait

* chore: add in the modifications that I make

* chore: Update small things for no updates

* chore: Update the types of something

* chore: Make main not complain

* idmapped mounts

* idmapped volumes

* re-enable kiosk

* chore: Add in some logging for the new system

* bring in start-sdk

* remove avahi

* chore: Update the deps

* switch to musl

* chore: Update the version of prettier

* chore: Organize

* chore: Update some of the headers back to the standard of fetch

* fix musl build

* fix idmapped mounts

* fix cross build

* use cross compiler for correct arch

* feat: Add in the faked ssl stuff for the effects

* @dr_bonez Did a solution here

* chore: Something that DrBonez

* chore: up

* wip: We have a working server!!!

* wip

* uninstall

* wip

* test

---------

Co-authored-by: J H <dragondef@gmail.com>
Co-authored-by: J H <Blu-J@users.noreply.github.com>
Co-authored-by: J H <2364004+Blu-J@users.noreply.github.com>
Aiden McClelland authored on 2024-02-17 11:14:14 -07:00; committed by GitHub
parent 65009e2f69, commit fab13db4b4
326 changed files with 31708 additions and 13987 deletions

core/Cargo.lock (generated, 2797 changed lines): diff suppressed because it is too large

View File

@@ -1,3 +1,3 @@
[workspace]
members = ["container-init", "helpers", "models", "snapshot-creator", "startos"]
members = ["helpers", "models", "startos"]

View File

@@ -8,9 +8,6 @@
## Structure
- `startos`: This contains the core library for StartOS that supports building `startbox`.
- `container-init` (ignore: deprecated)
- `js-engine`: This contains the library required to build `deno` to support running `.js` maintainer scripts for v0.3
- `snapshot-creator`: This contains a binary used to build `v8` runtime snapshots, required for initializing `start-deno`
- `helpers`: This contains utility functions used across both `startos` and `js-engine`
- `models`: This contains types that are shared across `startos`, `js-engine`, and `helpers`
@@ -24,8 +21,6 @@ several different names for different behaviour:
`startd` and control it similarly to the UI
- `start-sdk`: This is a CLI tool that aids in building and packaging services
you wish to deploy to StartOS
- `start-deno`: This is a CLI tool invoked by startd to run `.js` maintainer scripts for v0.3
- `avahi-alias`: This is a CLI tool invoked by startd to create aliases in `avahi` for mDNS
## Questions

View File

@@ -18,22 +18,22 @@ cd ..
FEATURES="$(echo $ENVIRONMENT | sed 's/-/,/g')"
RUSTFLAGS=""
alias 'rust-gnu-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-arm-cross:aarch64'
alias 'rust-musl-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
if [[ "${ENVIRONMENT}" =~ (^|-)unstable($|-) ]]; then
RUSTFLAGS="--cfg tokio_unstable"
fi
alias 'rust-musl-builder'='docker run $USE_TTY --rm -e "RUSTFLAGS=$RUSTFLAGS" -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P messense/rust-musl-cross:$ARCH-musl'
set +e
fail=
echo "FEATURES=\"$FEATURES\""
echo "RUSTFLAGS=\"$RUSTFLAGS\""
if ! rust-gnu-builder sh -c "(cd core && cargo build --release --features avahi-alias,$FEATURES --locked --bin startbox --target=$ARCH-unknown-linux-gnu)"; then
if ! rust-musl-builder sh -c "(cd core && cargo build --release $(if [ -n "$FEATURES" ]; then echo "--features $FEATURES"; fi) --locked --bin startbox --target=$ARCH-unknown-linux-musl)"; then
fail=true
fi
if ! rust-musl-builder sh -c "(cd core && cargo build --release --no-default-features --features container-runtime,$FEATURES --locked --bin containerbox --target=$ARCH-unknown-linux-musl)"; then
fail=true
fi
for ARCH in x86_64 aarch64
do
if ! rust-musl-builder sh -c "(cd core && cargo build --release --locked --bin container-init)"; then
fail=true
fi
done
set -e
cd core

View File

@@ -1,39 +0,0 @@
#!/bin/bash
# Reason for this being is that we need to create a snapshot for the deno runtime. It wants to pull 3 files from build, and during the creation it gets embedded, but for some
# reason during the actual runtime it is looking for them. So this will create a docker in arm that creates the snaphot needed for the arm
cd "$(dirname "${BASH_SOURCE[0]}")"
set -e
shopt -s expand_aliases
if [ -z "$ARCH" ]; then
ARCH=$(uname -m)
fi
USE_TTY=
if tty -s; then
USE_TTY="-it"
fi
alias 'rust-gnu-builder'='docker run $USE_TTY --rm -v "$HOME/.cargo/registry":/usr/local/cargo/registry -v "$(pwd)":/home/rust/src -w /home/rust/src -P start9/rust-arm-cross:aarch64'
echo "Building "
cd ..
rust-gnu-builder sh -c "(cd core/ && cargo build -p snapshot_creator --release --target=${ARCH}-unknown-linux-gnu)"
cd -
if [ "$ARCH" = "aarch64" ]; then
DOCKER_ARCH='arm64/v8'
elif [ "$ARCH" = "x86_64" ]; then
DOCKER_ARCH='amd64'
fi
echo "Creating Arm v8 Snapshot"
docker run $USE_TTY --platform "linux/${DOCKER_ARCH}" --mount type=bind,src=$(pwd),dst=/mnt ubuntu:22.04 /bin/sh -c "cd /mnt && /mnt/target/${ARCH}-unknown-linux-gnu/release/snapshot_creator"
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
sudo chown $USER JS_SNAPSHOT.bin
sudo chmod 0644 JS_SNAPSHOT.bin
sudo mv -f JS_SNAPSHOT.bin ./js-engine/src/artifacts/JS_SNAPSHOT.${ARCH}.bin

View File

@@ -1,39 +0,0 @@
[package]
name = "container-init"
version = "0.1.0"
edition = "2021"
rust = "1.66"
[features]
dev = []
metal = []
sound = []
unstable = []
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
async-stream = "0.3"
# cgroups-rs = "0.2"
color-eyre = "0.6"
futures = "0.3"
serde = { version = "1", features = ["derive", "rc"] }
serde_json = "1"
helpers = { path = "../helpers" }
imbl = "2"
nix = { version = "0.27", features = ["process", "signal"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = { version = "0.1", features = ["io-util", "sync", "net"] }
tracing = "0.1"
tracing-error = "0.2"
tracing-futures = "0.2"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
yajrc = { version = "*", git = "https://github.com/dr-bonez/yajrc.git", branch = "develop" }
[target.'cfg(target_os = "linux")'.dependencies]
procfs = "0.15"
[profile.test]
opt-level = 3
[profile.dev.package.backtrace]
opt-level = 3

View File

@@ -1,214 +0,0 @@
use nix::unistd::Pid;
use serde::{Deserialize, Serialize, Serializer};
use yajrc::RpcMethod;
/// Know what the process is called
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct ProcessId(pub u32);
impl From<ProcessId> for Pid {
fn from(pid: ProcessId) -> Self {
Pid::from_raw(pid.0 as i32)
}
}
impl From<Pid> for ProcessId {
fn from(pid: Pid) -> Self {
ProcessId(pid.as_raw() as u32)
}
}
impl From<i32> for ProcessId {
fn from(pid: i32) -> Self {
ProcessId(pid as u32)
}
}
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct ProcessGroupId(pub u32);
#[derive(Debug, Serialize, Deserialize, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[serde(rename_all = "kebab-case")]
pub enum OutputStrategy {
Inherit,
Collect,
}
#[derive(Debug, Clone, Copy)]
pub struct RunCommand;
impl Serialize for RunCommand {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RunCommandParams {
pub gid: Option<ProcessGroupId>,
pub command: String,
pub args: Vec<String>,
pub output: OutputStrategy,
}
impl RpcMethod for RunCommand {
type Params = RunCommandParams;
type Response = ProcessId;
fn as_str<'a>(&'a self) -> &'a str {
"command"
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum LogLevel {
Trace(String),
Warn(String),
Error(String),
Info(String),
Debug(String),
}
impl LogLevel {
pub fn trace(&self) {
match self {
LogLevel::Trace(x) => tracing::trace!("{}", x),
LogLevel::Warn(x) => tracing::warn!("{}", x),
LogLevel::Error(x) => tracing::error!("{}", x),
LogLevel::Info(x) => tracing::info!("{}", x),
LogLevel::Debug(x) => tracing::debug!("{}", x),
}
}
}
#[derive(Debug, Clone, Copy)]
pub struct Log;
impl Serialize for Log {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LogParams {
pub gid: Option<ProcessGroupId>,
pub level: LogLevel,
}
impl RpcMethod for Log {
type Params = LogParams;
type Response = ();
fn as_str<'a>(&'a self) -> &'a str {
"log"
}
}
#[derive(Debug, Clone, Copy)]
pub struct ReadLineStdout;
impl Serialize for ReadLineStdout {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReadLineStdoutParams {
pub pid: ProcessId,
}
impl RpcMethod for ReadLineStdout {
type Params = ReadLineStdoutParams;
type Response = String;
fn as_str<'a>(&'a self) -> &'a str {
"read-line-stdout"
}
}
#[derive(Debug, Clone, Copy)]
pub struct ReadLineStderr;
impl Serialize for ReadLineStderr {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReadLineStderrParams {
pub pid: ProcessId,
}
impl RpcMethod for ReadLineStderr {
type Params = ReadLineStderrParams;
type Response = String;
fn as_str<'a>(&'a self) -> &'a str {
"read-line-stderr"
}
}
#[derive(Debug, Clone, Copy)]
pub struct Output;
impl Serialize for Output {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OutputParams {
pub pid: ProcessId,
}
impl RpcMethod for Output {
type Params = OutputParams;
type Response = String;
fn as_str<'a>(&'a self) -> &'a str {
"output"
}
}
#[derive(Debug, Clone, Copy)]
pub struct SendSignal;
impl Serialize for SendSignal {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SendSignalParams {
pub pid: ProcessId,
pub signal: u32,
}
impl RpcMethod for SendSignal {
type Params = SendSignalParams;
type Response = ();
fn as_str<'a>(&'a self) -> &'a str {
"signal"
}
}
#[derive(Debug, Clone, Copy)]
pub struct SignalGroup;
impl Serialize for SignalGroup {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
Serialize::serialize(Self.as_str(), serializer)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SignalGroupParams {
pub gid: ProcessGroupId,
pub signal: u32,
}
impl RpcMethod for SignalGroup {
type Params = SignalGroupParams;
type Response = ();
fn as_str<'a>(&'a self) -> &'a str {
"signal-group"
}
}

View File

@@ -1,428 +0,0 @@
use std::collections::BTreeMap;
use std::ops::DerefMut;
use std::os::unix::process::ExitStatusExt;
use std::process::Stdio;
use std::sync::Arc;
use container_init::{
LogParams, OutputParams, OutputStrategy, ProcessGroupId, ProcessId, RunCommandParams,
SendSignalParams, SignalGroupParams,
};
use futures::StreamExt;
use helpers::NonDetachingJoinHandle;
use nix::errno::Errno;
use nix::sys::signal::Signal;
use serde::{Deserialize, Serialize};
use serde_json::json;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
use tokio::process::{Child, Command};
use tokio::select;
use tokio::sync::{watch, Mutex};
use yajrc::{Id, RpcError};
/// Outputs embedded in the JSONRpc output of the executable.
#[derive(Debug, Clone, Serialize)]
#[serde(untagged)]
enum Output {
Command(ProcessId),
ReadLineStdout(String),
ReadLineStderr(String),
Output(String),
Log,
Signal,
SignalGroup,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "method", content = "params", rename_all = "kebab-case")]
enum Input {
/// Run a new command, with the args
Command(RunCommandParams),
/// Want to log locall on the service rather than the eos
Log(LogParams),
// /// Get a line of stdout from the command
// ReadLineStdout(ReadLineStdoutParams),
// /// Get a line of stderr from the command
// ReadLineStderr(ReadLineStderrParams),
/// Get output of command
Output(OutputParams),
/// Send the sigterm to the process
Signal(SendSignalParams),
/// Signal a group of processes
SignalGroup(SignalGroupParams),
}
#[derive(Deserialize)]
struct IncomingRpc {
id: Id,
#[serde(flatten)]
input: Input,
}
struct ChildInfo {
gid: Option<ProcessGroupId>,
child: Arc<Mutex<Option<Child>>>,
output: Option<InheritOutput>,
}
struct InheritOutput {
_thread: NonDetachingJoinHandle<()>,
stdout: watch::Receiver<String>,
stderr: watch::Receiver<String>,
}
struct HandlerMut {
processes: BTreeMap<ProcessId, ChildInfo>,
// groups: BTreeMap<ProcessGroupId, Cgroup>,
}
#[derive(Clone)]
struct Handler {
children: Arc<Mutex<HandlerMut>>,
}
impl Handler {
fn new() -> Self {
Handler {
children: Arc::new(Mutex::new(HandlerMut {
processes: BTreeMap::new(),
// groups: BTreeMap::new(),
})),
}
}
async fn handle(&self, req: Input) -> Result<Output, RpcError> {
Ok(match req {
Input::Command(RunCommandParams {
gid,
command,
args,
output,
}) => Output::Command(self.command(gid, command, args, output).await?),
// Input::ReadLineStdout(ReadLineStdoutParams { pid }) => {
// Output::ReadLineStdout(self.read_line_stdout(pid).await?)
// }
// Input::ReadLineStderr(ReadLineStderrParams { pid }) => {
// Output::ReadLineStderr(self.read_line_stderr(pid).await?)
// }
Input::Log(LogParams { gid: _, level }) => {
level.trace();
Output::Log
}
Input::Output(OutputParams { pid }) => Output::Output(self.output(pid).await?),
Input::Signal(SendSignalParams { pid, signal }) => {
self.signal(pid, signal).await?;
Output::Signal
}
Input::SignalGroup(SignalGroupParams { gid, signal }) => {
self.signal_group(gid, signal).await?;
Output::SignalGroup
}
})
}
async fn command(
&self,
gid: Option<ProcessGroupId>,
command: String,
args: Vec<String>,
output: OutputStrategy,
) -> Result<ProcessId, RpcError> {
let mut cmd = Command::new(command);
cmd.args(args);
cmd.kill_on_drop(true);
cmd.stdout(Stdio::piped());
cmd.stderr(Stdio::piped());
let mut child = cmd.spawn().map_err(|e| {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(e.to_string()));
err
})?;
let pid = ProcessId(child.id().ok_or_else(|| {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!("Child has no pid"));
err
})?);
let output = match output {
OutputStrategy::Inherit => {
let (stdout_send, stdout) = watch::channel(String::new());
let (stderr_send, stderr) = watch::channel(String::new());
if let (Some(child_stdout), Some(child_stderr)) =
(child.stdout.take(), child.stderr.take())
{
Some(InheritOutput {
_thread: tokio::spawn(async move {
tokio::join!(
async {
if let Err(e) = async {
let mut lines = BufReader::new(child_stdout).lines();
while let Some(line) = lines.next_line().await? {
tracing::info!("({}): {}", pid.0, line);
let _ = stdout_send.send(line);
}
Ok::<_, std::io::Error>(())
}
.await
{
tracing::error!(
"Error reading stdout of pid {}: {}",
pid.0,
e
);
}
},
async {
if let Err(e) = async {
let mut lines = BufReader::new(child_stderr).lines();
while let Some(line) = lines.next_line().await? {
tracing::warn!("({}): {}", pid.0, line);
let _ = stderr_send.send(line);
}
Ok::<_, std::io::Error>(())
}
.await
{
tracing::error!(
"Error reading stdout of pid {}: {}",
pid.0,
e
);
}
}
);
})
.into(),
stdout,
stderr,
})
} else {
None
}
}
OutputStrategy::Collect => None,
};
self.children.lock().await.processes.insert(
pid,
ChildInfo {
gid,
child: Arc::new(Mutex::new(Some(child))),
output,
},
);
Ok(pid)
}
async fn output(&self, pid: ProcessId) -> Result<String, RpcError> {
let not_found = || {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(format!("Child with pid {} not found", pid.0)));
err
};
let mut child = {
self.children
.lock()
.await
.processes
.get(&pid)
.ok_or_else(not_found)?
.child
.clone()
}
.lock_owned()
.await;
if let Some(child) = child.take() {
let output = child.wait_with_output().await?;
if output.status.success() {
Ok(String::from_utf8(output.stdout).map_err(|_| yajrc::PARSE_ERROR)?)
} else {
Err(RpcError {
code: output
.status
.code()
.or_else(|| output.status.signal().map(|s| 128 + s))
.unwrap_or(0),
message: "Command failed".into(),
data: Some(json!(String::from_utf8(if output.stderr.is_empty() {
output.stdout
} else {
output.stderr
})
.map_err(|_| yajrc::PARSE_ERROR)?)),
})
}
} else {
Err(not_found())
}
}
async fn signal(&self, pid: ProcessId, signal: u32) -> Result<(), RpcError> {
let not_found = || {
let mut err = yajrc::INTERNAL_ERROR.clone();
err.data = Some(json!(format!("Child with pid {} not found", pid.0)));
err
};
Self::killall(pid, Signal::try_from(signal as i32)?)?;
if signal == 9 {
self.children
.lock()
.await
.processes
.remove(&pid)
.ok_or_else(not_found)?;
}
Ok(())
}
async fn signal_group(&self, gid: ProcessGroupId, signal: u32) -> Result<(), RpcError> {
let mut to_kill = Vec::new();
{
let mut children_ref = self.children.lock().await;
let children = std::mem::take(&mut children_ref.deref_mut().processes);
for (pid, child_info) in children {
if child_info.gid == Some(gid) {
to_kill.push(pid);
} else {
children_ref.processes.insert(pid, child_info);
}
}
}
for pid in to_kill {
tracing::info!("Killing pid {}", pid.0);
Self::killall(pid, Signal::try_from(signal as i32)?)?;
}
Ok(())
}
fn killall(pid: ProcessId, signal: Signal) -> Result<(), RpcError> {
for proc in procfs::process::all_processes()? {
let stat = proc?.stat()?;
if ProcessId::from(stat.ppid) == pid {
Self::killall(stat.pid.into(), signal)?;
}
}
if let Err(e) = nix::sys::signal::kill(pid.into(), Some(signal)) {
if e != Errno::ESRCH {
tracing::error!("Failed to kill pid {}: {}", pid.0, e);
}
}
Ok(())
}
async fn graceful_exit(self) {
let kill_all = futures::stream::iter(
std::mem::take(&mut self.children.lock().await.deref_mut().processes).into_iter(),
)
.for_each_concurrent(None, |(pid, child)| async move {
let _ = Self::killall(pid, Signal::SIGTERM);
if let Some(child) = child.child.lock().await.take() {
let _ = child.wait_with_output().await;
}
});
kill_all.await
}
}
#[tokio::main]
async fn main() {
use tokio::signal::unix::{signal, SignalKind};
let mut sigint = signal(SignalKind::interrupt()).unwrap();
let mut sigterm = signal(SignalKind::terminate()).unwrap();
let mut sigquit = signal(SignalKind::quit()).unwrap();
let mut sighangup = signal(SignalKind::hangup()).unwrap();
use tracing_error::ErrorLayer;
use tracing_subscriber::prelude::*;
use tracing_subscriber::{fmt, EnvFilter};
let filter_layer = EnvFilter::new("container_init=debug");
let fmt_layer = fmt::layer().with_target(true);
tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.with(ErrorLayer::default())
.init();
color_eyre::install().unwrap();
let handler = Handler::new();
let handler_thread = async {
let listener = tokio::net::UnixListener::bind("/start9/sockets/rpc.sock")?;
loop {
let (stream, _) = listener.accept().await?;
let (r, w) = stream.into_split();
let mut lines = BufReader::new(r).lines();
let handler = handler.clone();
tokio::spawn(async move {
let w = Arc::new(Mutex::new(w));
while let Some(line) = lines.next_line().await.transpose() {
let handler = handler.clone();
let w = w.clone();
tokio::spawn(async move {
if let Err(e) = async {
let req = serde_json::from_str::<IncomingRpc>(&line?)?;
match handler.handle(req.input).await {
Ok(output) => {
if w.lock().await.write_all(
format!("{}\n", json!({ "id": req.id, "jsonrpc": "2.0", "result": output }))
.as_bytes(),
)
.await.is_err() {
tracing::error!("Error sending to {id:?}", id = req.id);
}
}
Err(e) =>
if w
.lock()
.await
.write_all(
format!("{}\n", json!({ "id": req.id, "jsonrpc": "2.0", "error": e }))
.as_bytes(),
)
.await.is_err() {
tracing::error!("Handle + Error sending to {id:?}", id = req.id);
},
}
Ok::<_, color_eyre::Report>(())
}
.await
{
tracing::error!("Error parsing RPC request: {}", e);
tracing::debug!("{:?}", e);
}
});
}
Ok::<_, std::io::Error>(())
});
}
#[allow(unreachable_code)]
Ok::<_, std::io::Error>(())
};
select! {
res = handler_thread => {
match res {
Ok(()) => tracing::debug!("Done with inputs/outputs"),
Err(e) => {
tracing::error!("Error reading RPC input: {}", e);
tracing::debug!("{:?}", e);
}
}
},
_ = sigint.recv() => {
tracing::debug!("SIGINT");
},
_ = sigterm.recv() => {
tracing::debug!("SIGTERM");
},
_ = sigquit.recv() => {
tracing::debug!("SIGQUIT");
},
_ = sighangup.recv() => {
tracing::debug!("SIGHUP");
}
}
handler.graceful_exit().await;
::std::process::exit(0)
}

View File

@@ -11,9 +11,9 @@ futures = "0.3.28"
lazy_async_pool = "0.3.3"
models = { path = "../models" }
pin-project = "1.1.3"
rpc-toolkit = "0.2.3"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
tokio-stream = { version = "0.1.14", features = ["io-util", "sync"] }
tracing = "0.1.39"
yajrc = { version = "*", git = "https://github.com/dr-bonez/yajrc.git", branch = "develop" }

View File

@@ -11,11 +11,9 @@ use tokio::sync::oneshot;
use tokio::task::{JoinError, JoinHandle, LocalSet};
mod byte_replacement_reader;
mod rpc_client;
mod rsync;
mod script_dir;
pub use byte_replacement_reader::*;
pub use rpc_client::{RpcClient, UnixRpcClient};
pub use rsync::*;
pub use script_dir::*;

View File

@@ -12,7 +12,4 @@ if [ -z "$PLATFORM" ]; then
export PLATFORM=$(uname -m)
fi
cargo install --path=./startos --no-default-features --features=js_engine,sdk,cli --locked
startbox_loc=$(which startbox)
ln -sf $startbox_loc $(dirname $startbox_loc)/start-cli
ln -sf $startbox_loc $(dirname $startbox_loc)/start-sdk
cargo install --path=./startos --no-default-features --features=cli,docker --bin start-cli --locked

View File

@@ -15,6 +15,7 @@ emver = { version = "0.1", git = "https://github.com/Start9Labs/emver-rs.git", f
"serde",
] }
ipnet = "2.8.0"
num_enum = "0.7.1"
openssl = { version = "0.10.57", features = ["vendored"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
"trace",

View File

@@ -1,14 +1,19 @@
use std::fmt::Display;
use std::fmt::{Debug, Display};
use color_eyre::eyre::eyre;
use num_enum::TryFromPrimitive;
use patch_db::Revision;
use rpc_toolkit::hyper::http::uri::InvalidUri;
use rpc_toolkit::reqwest;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::yajrc::{
RpcError, INVALID_PARAMS_ERROR, INVALID_REQUEST_ERROR, METHOD_NOT_FOUND_ERROR, PARSE_ERROR,
};
use serde::{Deserialize, Serialize};
use crate::InvalidId;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[derive(Debug, Clone, Copy, PartialEq, Eq, TryFromPrimitive)]
#[repr(i32)]
pub enum ErrorKind {
Unknown = 1,
Filesystem = 2,
@@ -81,6 +86,8 @@ pub enum ErrorKind {
CpuSettings = 69,
Firmware = 70,
Timeout = 71,
Lxc = 72,
Cancelled = 73,
}
impl ErrorKind {
pub fn as_str(&self) -> &'static str {
@@ -157,6 +164,8 @@ impl ErrorKind {
CpuSettings => "CPU Settings Error",
Firmware => "Firmware Error",
Timeout => "Timeout Error",
Lxc => "LXC Error",
Cancelled => "Cancelled",
}
}
}
@@ -186,6 +195,17 @@ impl Error {
revision: None,
}
}
pub fn clone_output(&self) -> Self {
Error {
source: ErrorData {
details: format!("{}", self.source),
debug: format!("{:?}", self.source),
}
.into(),
kind: self.kind,
revision: self.revision.clone(),
}
}
}
impl From<InvalidId> for Error {
fn from(err: InvalidId) -> Self {
@@ -300,6 +320,53 @@ impl From<patch_db::value::Error> for Error {
}
}
#[derive(Clone, Deserialize, Serialize)]
pub struct ErrorData {
pub details: String,
pub debug: String,
}
impl Display for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.details, f)
}
}
impl Debug for ErrorData {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&self.debug, f)
}
}
impl std::error::Error for ErrorData {}
impl From<&RpcError> for ErrorData {
fn from(value: &RpcError) -> Self {
Self {
details: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("details")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
debug: value
.data
.as_ref()
.and_then(|d| {
d.as_object()
.and_then(|d| {
d.get("debug")
.and_then(|d| d.as_str().map(|s| s.to_owned()))
})
.or_else(|| d.as_str().map(|s| s.to_owned()))
})
.unwrap_or_else(|| value.message.clone().into_owned()),
}
}
}
impl From<Error> for RpcError {
fn from(e: Error) -> Self {
let mut data_object = serde_json::Map::with_capacity(3);
@@ -318,10 +385,40 @@ impl From<Error> for RpcError {
RpcError {
code: e.kind as i32,
message: e.kind.as_str().into(),
data: Some(data_object.into()),
data: Some(
match serde_json::to_value(&ErrorData {
details: format!("{}", e.source),
debug: format!("{:?}", e.source),
}) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Error serializing revision for Error object: {}", e);
serde_json::Value::Null
}
},
),
}
}
}
impl From<RpcError> for Error {
fn from(e: RpcError) -> Self {
Error::new(
ErrorData::from(&e),
if let Ok(kind) = e.code.try_into() {
kind
} else if e.code == METHOD_NOT_FOUND_ERROR.code {
ErrorKind::NotFound
} else if e.code == PARSE_ERROR.code
|| e.code == INVALID_PARAMS_ERROR.code
|| e.code == INVALID_REQUEST_ERROR.code
{
ErrorKind::Deserialization
} else {
ErrorKind::Unknown
},
)
}
}
#[derive(Debug, Default)]
pub struct ErrorCollection(Vec<Error>);
@@ -377,10 +474,7 @@ where
Self: Sized,
{
fn with_kind(self, kind: ErrorKind) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display + Send + Sync + 'static>(
self,
f: F,
) -> Result<T, Error>;
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error>;
}
impl<T, E> ResultExt<T, E> for Result<T, E>
where
@@ -394,10 +488,7 @@ where
})
}
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display + Send + Sync + 'static>(
self,
f: F,
) -> Result<T, Error> {
fn with_ctx<F: FnOnce(&E) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = color_eyre::eyre::Error::from(e);
@@ -411,6 +502,29 @@ where
})
}
}
impl<T> ResultExt<T, Error> for Result<T, Error> {
fn with_kind(self, kind: ErrorKind) -> Result<T, Error> {
self.map_err(|e| Error {
source: e.source,
kind,
revision: e.revision,
})
}
fn with_ctx<F: FnOnce(&Error) -> (ErrorKind, D), D: Display>(self, f: F) -> Result<T, Error> {
self.map_err(|e| {
let (kind, ctx) = f(&e);
let source = e.source;
let ctx = format!("{}: {}", ctx, source);
let source = source.wrap_err(ctx);
Error {
kind,
source,
revision: e.revision,
}
})
}
}
pub trait OptionExt<T>
where

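For orientation, a minimal sketch of how the new conversions above might be exercised when an error crosses the RPC boundary; the demo function and imports are illustrative assumptions, only the code/kind mapping comes from the diff itself.

use color_eyre::eyre::eyre;
use rpc_toolkit::yajrc::RpcError;

use crate::prelude::*; // assumed: Error and ErrorKind are re-exported by the crate prelude

fn demo() {
    let err = Error::new(eyre!("container exited unexpectedly"), ErrorKind::Lxc);
    // Crossing the boundary: the kind becomes the JSON-RPC error code (Lxc = 72)
    // and ErrorData { details, debug } is carried in the `data` field.
    let rpc: RpcError = err.into();
    assert_eq!(rpc.code, 72);
    // Coming back, the code is mapped to an ErrorKind via TryFromPrimitive, with
    // the standard JSON-RPC codes folded into NotFound / Deserialization and
    // anything unrecognized becoming ErrorKind::Unknown.
    let back: Error = rpc.into();
    assert_eq!(back.kind as i32, 72);
}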
View File

@@ -1,4 +1,5 @@
use std::fmt::Debug;
use std::path::Path;
use std::str::FromStr;
use serde::{Deserialize, Deserializer, Serialize};
@@ -7,6 +8,11 @@ use crate::{Id, InvalidId, PackageId, Version};
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Serialize)]
pub struct ImageId(Id);
impl AsRef<Path> for ImageId {
fn as_ref(&self) -> &Path {
self.0.as_ref().as_ref()
}
}
impl std::fmt::Display for ImageId {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", &self.0)

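As a hedged illustration of what the new AsRef<Path> impl above enables (the images directory and the .squashfs naming are assumptions, not taken from the diff):

use std::path::{Path, PathBuf};

use models::ImageId;

// Sketch only: because ImageId now implements AsRef<Path>, an image id can be
// joined directly onto a directory when locating its squashfs on disk.
fn image_squashfs(images_dir: &Path, image: &ImageId) -> PathBuf {
    images_dir.join(image).with_extension("squashfs")
}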
View File

@@ -4,54 +4,37 @@ use crate::{ActionId, HealthCheckId, PackageId};
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ProcedureName {
Main, // Usually just run container
CreateBackup,
RestoreBackup,
StartMain,
StopMain,
GetConfig,
SetConfig,
Migration,
Properties,
LongRunning,
Check(PackageId),
AutoConfig(PackageId),
Health(HealthCheckId),
Action(ActionId),
Signal,
CreateBackup,
RestoreBackup,
ActionMetadata,
RunAction(ActionId),
GetAction(ActionId),
QueryDependency(ActionId),
UpdateDependency(ActionId),
Init,
Uninit,
}
impl ProcedureName {
pub fn docker_name(&self) -> Option<String> {
pub fn js_function_name(&self) -> String {
match self {
ProcedureName::Main => None,
ProcedureName::LongRunning => None,
ProcedureName::CreateBackup => Some("CreateBackup".to_string()),
ProcedureName::RestoreBackup => Some("RestoreBackup".to_string()),
ProcedureName::GetConfig => Some("GetConfig".to_string()),
ProcedureName::SetConfig => Some("SetConfig".to_string()),
ProcedureName::Migration => Some("Migration".to_string()),
ProcedureName::Properties => Some(format!("Properties-{}", rand::random::<u64>())),
ProcedureName::Health(id) => Some(format!("{}Health", id)),
ProcedureName::Action(id) => Some(format!("{}Action", id)),
ProcedureName::Check(_) => None,
ProcedureName::AutoConfig(_) => None,
ProcedureName::Signal => None,
}
}
pub fn js_function_name(&self) -> Option<String> {
match self {
ProcedureName::Main => Some("/main".to_string()),
ProcedureName::LongRunning => None,
ProcedureName::CreateBackup => Some("/createBackup".to_string()),
ProcedureName::RestoreBackup => Some("/restoreBackup".to_string()),
ProcedureName::GetConfig => Some("/getConfig".to_string()),
ProcedureName::SetConfig => Some("/setConfig".to_string()),
ProcedureName::Migration => Some("/migration".to_string()),
ProcedureName::Properties => Some("/properties".to_string()),
ProcedureName::Health(id) => Some(format!("/health/{}", id)),
ProcedureName::Action(id) => Some(format!("/action/{}", id)),
ProcedureName::Check(id) => Some(format!("/dependencies/{}/check", id)),
ProcedureName::AutoConfig(id) => Some(format!("/dependencies/{}/autoConfigure", id)),
ProcedureName::Signal => Some("/handleSignal".to_string()),
ProcedureName::Init => "/init".to_string(),
ProcedureName::Uninit => "/uninit".to_string(),
ProcedureName::StartMain => "/main/start".to_string(),
ProcedureName::StopMain => "/main/stop".to_string(),
ProcedureName::SetConfig => "/config/set".to_string(),
ProcedureName::GetConfig => "/config/get".to_string(),
ProcedureName::CreateBackup => "/backup/create".to_string(),
ProcedureName::RestoreBackup => "/backup/restore".to_string(),
ProcedureName::ActionMetadata => "/actions/metadata".to_string(),
ProcedureName::RunAction(id) => format!("/actions/{}/run", id),
ProcedureName::GetAction(id) => format!("/actions/{}/get", id),
ProcedureName::QueryDependency(id) => format!("/dependencies/{}/query", id),
ProcedureName::UpdateDependency(id) => format!("/dependencies/{}/update", id),
}
}
}
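To make the new routing above concrete, a minimal sketch of resolving procedures to container RPC paths; the helper is hypothetical, and ProcedureName is assumed to be exported from the models crate root as the use crate:: import above suggests.

use models::ProcedureName;

// js_function_name() now returns a plain String, so every procedure resolves
// to a concrete path on the container runtime's JS side.
fn route_for(procedure: &ProcedureName) -> String {
    procedure.js_function_name()
}

fn main() {
    assert_eq!(route_for(&ProcedureName::CreateBackup), "/backup/create");
    assert_eq!(route_for(&ProcedureName::StartMain), "/main/start");
}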

View File

@@ -1,11 +0,0 @@
[package]
name = "snapshot_creator"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
dashmap = "5.3.4"
deno_core = "=0.222.0"
deno_ast = { version = "=0.29.5", features = ["transpiling"] }

View File

@@ -1,11 +0,0 @@
use deno_core::JsRuntimeForSnapshot;
fn main() {
let runtime = JsRuntimeForSnapshot::new(Default::default());
let snapshot = runtime.snapshot();
let snapshot_slice: &[u8] = &*snapshot;
println!("Snapshot size: {}", snapshot_slice.len());
std::fs::write("JS_SNAPSHOT.bin", snapshot_slice).unwrap();
}

View File

@@ -21,20 +21,26 @@ license = "MIT"
name = "startos"
path = "src/lib.rs"
[[bin]]
name = "containerbox"
path = "src/main.rs"
[[bin]]
name = "start-cli"
path = "src/main.rs"
[[bin]]
name = "startbox"
path = "src/main.rs"
[features]
avahi = ["avahi-sys"]
avahi-alias = ["avahi"]
cli = []
container-runtime = []
daemon = []
default = ["cli", "sdk", "daemon"]
default = ["cli", "daemon"]
dev = []
docker = []
sdk = []
unstable = ["console-subscriber", "tokio/tracing"]
docker = []
[dependencies]
aes = { version = "0.7.5", features = ["ctr"] }
@@ -45,9 +51,8 @@ async-compression = { version = "0.4.4", features = [
] }
async-stream = "0.3.5"
async-trait = "0.1.74"
avahi-sys = { git = "https://github.com/Start9Labs/avahi-sys", version = "0.10.0", branch = "feature/dynamic-linking", features = [
"dynamic",
], optional = true }
axum = { version = "0.7.3", features = ["ws"] }
axum-server = "0.6.0"
base32 = "0.4.0"
base64 = "0.21.4"
base64ct = "1.6.0"
@@ -55,7 +60,7 @@ basic-cookies = "0.1.4"
blake3 = "1.5.0"
bytes = "1"
chrono = { version = "0.4.31", features = ["serde"] }
clap = "3.2.25"
clap = "4.4.12"
color-eyre = "0.6.2"
console = "0.15.7"
console-subscriber = { version = "0.2", optional = true }
@@ -72,7 +77,6 @@ ed25519-dalek = { version = "2.0.0", features = [
"digest",
] }
ed25519-dalek-v1 = { package = "ed25519-dalek", version = "1" }
container-init = { path = "../container-init" }
emver = { version = "0.1.7", git = "https://github.com/Start9Labs/emver-rs.git", features = [
"serde",
] }
@@ -82,9 +86,15 @@ gpt = "3.1.0"
helpers = { path = "../helpers" }
hex = "0.4.3"
hmac = "0.12.1"
http = "0.2.9"
hyper = { version = "0.14.27", features = ["full"] }
hyper-ws-listener = "0.3.0"
http = "1.0.0"
# http-body-util = "0.1.0"
# hyper = { version = "1.1.0", features = ["full"] }
# hyper-util = { version = "0.1.2", features = [
# "server",
# "server-auto",
# "tokio",
# ] }
# hyper-ws-listener = "0.3.0"
imbl = "2.0.2"
imbl-value = { git = "https://github.com/Start9Labs/imbl-value.git" }
include_dir = "0.7.3"
@@ -94,11 +104,13 @@ integer-encoding = { version = "4.0.0", features = ["tokio_async"] }
ipnet = { version = "2.8.0", features = ["serde"] }
iprange = { version = "0.6.7", features = ["serde"] }
isocountry = "0.3.2"
itertools = "0.11.0"
itertools = "0.12.0"
jaq-core = "0.10.1"
jaq-std = "0.10.0"
josekit = "0.8.4"
jsonpath_lib = { git = "https://github.com/Start9Labs/jsonpath.git" }
lazy_async_pool = "0.3.3"
lazy_format = "2.0"
lazy_static = "1.4.0"
libc = "0.2.149"
log = "0.4.20"
@@ -109,6 +121,7 @@ nix = { version = "0.27.1", features = ["user", "process", "signal", "fs"] }
nom = "7.1.3"
num = "0.4.1"
num_enum = "0.7.0"
once_cell = "1.19.0"
openssh-keys = "0.6.2"
openssl = { version = "0.10.57", features = ["vendored"] }
p256 = { version = "0.13.2", features = ["pem"] }
@@ -123,12 +136,12 @@ proptest = "1.3.1"
proptest-derive = "0.4.0"
rand = { version = "0.8.5", features = ["std"] }
regex = "1.10.2"
reqwest = { version = "0.11.22", features = ["stream", "json", "socks"] }
reqwest = { version = "0.11.23", features = ["stream", "json", "socks"] }
reqwest_cookie_store = "0.6.0"
rpassword = "7.2.0"
rpc-toolkit = "0.2.2"
rpc-toolkit = { git = "https://github.com/Start9Labs/rpc-toolkit.git", branch = "refactor/traits" }
rust-argon2 = "2.0.0"
scopeguard = "1.1" # because avahi-sys fucks your shit up
rustyline-async = "0.4.1"
semver = { version = "1.0.20", features = ["serde"] }
serde = { version = "1.0", features = ["derive", "rc"] }
serde_cbor = { package = "ciborium", version = "0.2.1" }
@@ -137,6 +150,7 @@ serde_toml = { package = "toml", version = "0.8.2" }
serde_with = { version = "3.4.0", features = ["macros", "json"] }
serde_yaml = "0.9.25"
sha2 = "0.10.2"
shell-words = "1"
simple-logging = "2.0.2"
sqlx = { version = "0.7.2", features = [
"chrono",
@@ -149,11 +163,11 @@ stderrlog = "0.5.4"
tar = "0.4.40"
thiserror = "1.0.49"
tokio = { version = "1", features = ["full"] }
tokio-rustls = "0.24.1"
tokio-rustls = "0.25.0"
tokio-socks = "0.5.1"
tokio-stream = { version = "0.1.14", features = ["io-util", "sync", "net"] }
tokio-tar = { git = "https://github.com/dr-bonez/tokio-tar.git" }
tokio-tungstenite = { version = "0.20.1", features = ["native-tls"] }
tokio-tungstenite = { version = "0.21.0", features = ["native-tls"] }
tokio-util = { version = "0.7.9", features = ["io"] }
torut = "0.2.1"
tracing = "0.1.39"
@@ -162,7 +176,7 @@ tracing-futures = "0.2.5"
tracing-journald = "0.3.0"
tracing-subscriber = { version = "0.3.17", features = ["env-filter"] }
trust-dns-server = "0.23.1"
typed-builder = "0.17.0"
typed-builder = "0.18.0"
url = { version = "2.4.1", features = ["serde"] }
urlencoding = "2.1.3"
uuid = { version = "1.4.1", features = ["v4"] }

View File

@@ -14,9 +14,15 @@ allow = [
"BSD-3-Clause",
"LGPL-3.0",
"OpenSSL",
"Unicode-DFS-2016",
"Zlib",
]
clarify = [
{ name = "webpki", expression = "ISC", license-files = [ { path = "LICENSE", hash = 0x001c7e6c } ] },
{ name = "ring", expression = "OpenSSL", license-files = [ { path = "LICENSE", hash = 0xbd0eed23 } ] },
{ name = "webpki", expression = "ISC", license-files = [
{ path = "LICENSE", hash = 0x001c7e6c },
] },
{ name = "ring", expression = "OpenSSL", license-files = [
{ path = "LICENSE", hash = 0xbd0eed23 },
] },
]

View File

@@ -1,26 +1,14 @@
use std::collections::{BTreeMap, BTreeSet};
use clap::ArgMatches;
use color_eyre::eyre::eyre;
use indexmap::IndexSet;
use clap::Parser;
pub use models::ActionId;
use models::ImageId;
use models::PackageId;
use rpc_toolkit::command;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::config::{Config, ConfigSpec};
use crate::config::Config;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::util::Version;
use crate::volume::Volumes;
use crate::{Error, ResultExt};
#[derive(Clone, Debug, Default, Deserialize, Serialize)]
pub struct Actions(pub BTreeMap<ActionId, Action>);
use crate::util::serde::{display_serializable, StdinDeserializable, WithIoFormat};
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "version")]
@@ -44,72 +32,11 @@ pub enum DockerStatus {
Stopped,
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct Action {
pub name: String,
pub description: String,
#[serde(default)]
pub warning: Option<String>,
pub implementation: PackageProcedure,
pub allowed_statuses: IndexSet<DockerStatus>,
#[serde(default)]
pub input_spec: ConfigSpec,
}
impl Action {
#[instrument(skip_all)]
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.implementation
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| {
(
crate::ErrorKind::ValidateS9pk,
format!("Action {}", self.name),
)
})
pub fn display_action_result(params: WithIoFormat<ActionParams>, result: ActionResult) {
if let Some(format) = params.format {
return display_serializable(format, result);
}
#[instrument(skip_all)]
pub async fn execute(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
action_id: &ActionId,
volumes: &Volumes,
input: Option<Config>,
) -> Result<ActionResult, Error> {
if let Some(ref input) = input {
self.input_spec
.matches(&input)
.with_kind(crate::ErrorKind::ConfigSpecViolation)?;
}
self.implementation
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::Action(action_id.clone()),
volumes,
input,
None,
)
.await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::Action))
}
}
fn display_action_result(action_result: ActionResult, matches: &ArgMatches) {
if matches.is_present("format") {
return display_serializable(action_result, matches);
}
match action_result {
match result {
ActionResult::V0(ar) => {
println!(
"{}: {}",
@@ -120,44 +47,39 @@ fn display_action_result(action_result: ActionResult, matches: &ArgMatches) {
}
}
#[command(about = "Executes an action", display(display_action_result))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ActionParams {
#[arg(id = "id")]
#[serde(rename = "id")]
pub package_id: PackageId,
#[arg(id = "action-id")]
#[serde(rename = "action-id")]
pub action_id: ActionId,
#[command(flatten)]
pub input: StdinDeserializable<Option<Config>>,
}
// impl C
// #[command(about = "Executes an action", display(display_action_result))]
#[instrument(skip_all)]
pub async fn action(
#[context] ctx: RpcContext,
#[arg(rename = "id")] pkg_id: PackageId,
#[arg(rename = "action-id")] action_id: ActionId,
#[arg(stdin, parse(parse_stdin_deserializable))] input: Option<Config>,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
ctx: RpcContext,
ActionParams {
package_id,
action_id,
input: StdinDeserializable(input),
}: ActionParams,
) -> Result<ActionResult, Error> {
let manifest = ctx
.db
.peek()
ctx.services
.get(&package_id)
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {}", package_id))?
.action(
action_id,
input.map(|c| to_value(&c)).transpose()?.unwrap_or_default(),
)
.await
.as_package_data()
.as_idx(&pkg_id)
.or_not_found(&pkg_id)?
.as_installed()
.or_not_found(&pkg_id)?
.as_manifest()
.de()?;
if let Some(action) = manifest.actions.0.get(&action_id) {
action
.execute(
&ctx,
&manifest.id,
&manifest.version,
&action_id,
&manifest.volumes,
input,
)
.await
} else {
Err(Error::new(
eyre!("Action not found in manifest"),
crate::ErrorKind::NotFound,
))
}
}

View File

@@ -1,24 +1,23 @@
use std::collections::BTreeMap;
use std::marker::PhantomData;
use chrono::{DateTime, Utc};
use clap::ArgMatches;
use clap::{ArgMatches, Parser};
use color_eyre::eyre::eyre;
use imbl_value::{json, InternedString};
use josekit::jwk::Jwk;
use rpc_toolkit::command;
use rpc_toolkit::command_helpers::prelude::{RequestParts, ResponseParts};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn_async, AnyContext, CallRemote, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use sqlx::{Executor, Postgres};
use tracing::instrument;
use crate::context::{CliContext, RpcContext};
use crate::middleware::auth::{AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken};
use crate::middleware::encrypt::EncryptedWire;
use crate::middleware::auth::{
AsLogoutSessionId, HasLoggedOutSessions, HashSessionToken, LoginRes,
};
use crate::prelude::*;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
use crate::util::crypto::EncryptedWire;
use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
use crate::{ensure_code, Error, ResultExt};
#[derive(Clone, Serialize, Deserialize)]
#[serde(untagged)]
@@ -61,14 +60,43 @@ impl std::str::FromStr for PasswordType {
})
}
}
#[command(subcommands(login, logout, session, reset_password, get_pubkey))]
pub fn auth() -> Result<(), Error> {
Ok(())
pub fn auth() -> ParentHandler {
ParentHandler::new()
.subcommand(
"login",
from_fn_async(login_impl)
.with_metadata("login", Value::Bool(true))
.no_cli(),
)
.subcommand("login", from_fn_async(cli_login).no_display())
.subcommand(
"logout",
from_fn_async(logout)
.with_metadata("get-session", Value::Bool(true))
.with_remote_cli::<CliContext>()
// TODO @dr-bonez
.no_display(),
)
.subcommand("session", session())
.subcommand(
"reset-password",
from_fn_async(reset_password_impl).no_cli(),
)
.subcommand(
"reset-password",
from_fn_async(cli_reset_password).no_display(),
)
.subcommand(
"get-pubkey",
from_fn_async(get_pubkey)
.with_metadata("authenticated", Value::Bool(false))
.no_display()
.with_remote_cli::<CliContext>(),
)
}
pub fn cli_metadata() -> Value {
serde_json::json!({
imbl_value::json!({
"platforms": ["cli"],
})
}
@@ -89,12 +117,17 @@ fn gen_pwd() {
.unwrap()
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct CliLoginParams {
password: Option<PasswordType>,
}
#[instrument(skip_all)]
async fn cli_login(
ctx: CliContext,
password: Option<PasswordType>,
metadata: Value,
CliLoginParams { password }: CliLoginParams,
) -> Result<(), RpcError> {
let password = if let Some(password) = password {
password.decrypt(&ctx)?
@@ -102,14 +135,16 @@ async fn cli_login(
rpassword::prompt_password("Password: ")?
};
rpc_toolkit::command_helpers::call_remote(
ctx,
ctx.call_remote(
"auth.login",
serde_json::json!({ "password": password, "metadata": metadata }),
PhantomData::<()>,
json!({
"password": password,
"metadata": {
"platforms": ["cli"],
},
}),
)
.await?
.result?;
.await?;
Ok(())
}
@@ -140,30 +175,27 @@ where
Ok(())
}
#[command(
custom_cli(cli_login(async, context(CliContext))),
display(display_none),
metadata(authenticated = false)
)]
#[instrument(skip_all)]
pub async fn login(
#[context] ctx: RpcContext,
#[request] req: &RequestParts,
#[response] res: &mut ResponseParts,
#[arg] password: Option<PasswordType>,
#[arg(
parse(parse_metadata),
default = "cli_metadata",
help = "RPC Only: This value cannot be overidden from the cli"
)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LoginParams {
password: Option<PasswordType>,
#[arg(skip = cli_metadata())]
#[serde(default)]
metadata: Value,
) -> Result<(), Error> {
}
#[instrument(skip_all)]
pub async fn login_impl(
ctx: RpcContext,
LoginParams { password, metadata }: LoginParams,
) -> Result<LoginRes, Error> {
let password = password.unwrap_or_default().decrypt(&ctx)?;
let mut handle = ctx.secret_store.acquire().await?;
check_password_against_db(handle.as_mut(), &password).await?;
let hash_token = HashSessionToken::new();
let user_agent = req.headers.get("user-agent").and_then(|h| h.to_str().ok());
let user_agent = "".to_string(); // todo!() as String;
let metadata = serde_json::to_string(&metadata).with_kind(crate::ErrorKind::Database)?;
let hash_token_hashed = hash_token.hashed();
sqlx::query!(
@@ -174,25 +206,24 @@ pub async fn login(
)
.execute(handle.as_mut())
.await?;
res.headers.insert(
"set-cookie",
hash_token.header_value()?, // Should be impossible, but don't want to panic
);
Ok(())
Ok(hash_token.to_login_res())
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LogoutParams {
session: InternedString,
}
#[command(display(display_none), metadata(authenticated = false))]
#[instrument(skip_all)]
pub async fn logout(
#[context] ctx: RpcContext,
#[request] req: &RequestParts,
ctx: RpcContext,
LogoutParams { session }: LogoutParams,
) -> Result<Option<HasLoggedOutSessions>, Error> {
let auth = match HashSessionToken::from_request_parts(req) {
Err(_) => return Ok(None),
Ok(a) => a,
};
Ok(Some(HasLoggedOutSessions::new(vec![auth], &ctx).await?))
Ok(Some(
HasLoggedOutSessions::new(vec![HashSessionToken::from_token(session)], &ctx).await?,
))
}
#[derive(Deserialize, Serialize)]
@@ -211,16 +242,31 @@ pub struct SessionList {
sessions: BTreeMap<String, Session>,
}
#[command(subcommands(list, kill))]
pub async fn session() -> Result<(), Error> {
Ok(())
pub fn session() -> ParentHandler {
ParentHandler::new()
.subcommand(
"list",
from_fn_async(list)
.with_metadata("get-session", Value::Bool(true))
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(display_sessions(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"kill",
from_fn_async(kill)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
fn display_sessions(arg: SessionList, matches: &ArgMatches) {
fn display_sessions(params: WithIoFormat<ListParams>, arg: SessionList) {
use prettytable::*;
if matches.is_present("format") {
return display_serializable(arg, matches);
if let Some(format) = params.format {
return display_serializable(format, arg);
}
let mut table = Table::new();
@@ -249,17 +295,22 @@ fn display_sessions(arg: SessionList, matches: &ArgMatches) {
table.print_tty(false).unwrap();
}
#[command(display(display_sessions))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ListParams {
#[arg(skip)]
session: InternedString,
}
// #[command(display(display_sessions))]
#[instrument(skip_all)]
pub async fn list(
#[context] ctx: RpcContext,
#[request] req: &RequestParts,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
ctx: RpcContext,
ListParams { session, .. }: ListParams,
) -> Result<SessionList, Error> {
Ok(SessionList {
current: HashSessionToken::from_request_parts(req)?.as_hash(),
current: HashSessionToken::from_token(session).hashed().to_owned(),
sessions: sqlx::query!(
"SELECT * FROM session WHERE logged_out IS NULL OR logged_out > CURRENT_TIMESTAMP"
)
@@ -287,29 +338,50 @@ fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<String>, RpcEr
}
#[derive(Debug, Clone, Serialize, Deserialize)]
struct KillSessionId(String);
struct KillSessionId(InternedString);
impl KillSessionId {
fn new(id: String) -> Self {
Self(InternedString::from(id))
}
}
impl AsLogoutSessionId for KillSessionId {
fn as_logout_session_id(self) -> String {
fn as_logout_session_id(self) -> InternedString {
self.0
}
}
#[command(display(display_none))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct KillParams {
ids: Vec<String>,
}
#[instrument(skip_all)]
pub async fn kill(
#[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<String>,
) -> Result<(), Error> {
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId), &ctx).await?;
pub async fn kill(ctx: RpcContext, KillParams { ids }: KillParams) -> Result<(), Error> {
HasLoggedOutSessions::new(ids.into_iter().map(KillSessionId::new), &ctx).await?;
Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ResetPasswordParams {
#[arg(name = "old-password")]
old_password: Option<PasswordType>,
#[arg(name = "new-password")]
new_password: Option<PasswordType>,
}
#[instrument(skip_all)]
async fn cli_reset_password(
ctx: CliContext,
old_password: Option<PasswordType>,
new_password: Option<PasswordType>,
ResetPasswordParams {
old_password,
new_password,
}: ResetPasswordParams,
) -> Result<(), RpcError> {
let old_password = if let Some(old_password) = old_password {
old_password.decrypt(&ctx)?
@@ -331,28 +403,22 @@ async fn cli_reset_password(
new_password
};
rpc_toolkit::command_helpers::call_remote(
ctx,
ctx.call_remote(
"auth.reset-password",
serde_json::json!({ "old-password": old_password, "new-password": new_password }),
PhantomData::<()>,
imbl_value::json!({ "old-password": old_password, "new-password": new_password }),
)
.await?
.result?;
.await?;
Ok(())
}
#[command(
rename = "reset-password",
custom_cli(cli_reset_password(async, context(CliContext))),
display(display_none)
)]
#[instrument(skip_all)]
pub async fn reset_password(
#[context] ctx: RpcContext,
#[arg(rename = "old-password")] old_password: Option<PasswordType>,
#[arg(rename = "new-password")] new_password: Option<PasswordType>,
pub async fn reset_password_impl(
ctx: RpcContext,
ResetPasswordParams {
old_password,
new_password,
}: ResetPasswordParams,
) -> Result<(), Error> {
let old_password = old_password.unwrap_or_default().decrypt(&ctx)?;
let new_password = new_password.unwrap_or_default().decrypt(&ctx)?;
@@ -378,13 +444,8 @@ pub async fn reset_password(
.await
}
#[command(
rename = "get-pubkey",
display(display_none),
metadata(authenticated = false)
)]
#[instrument(skip_all)]
pub async fn get_pubkey(#[context] ctx: RpcContext) -> Result<Jwk, RpcError> {
pub async fn get_pubkey(ctx: RpcContext) -> Result<Jwk, RpcError> {
let secret = ctx.as_ref().clone();
let pub_key = secret.to_public_key()?;
Ok(pub_key)

View File

@@ -4,14 +4,13 @@ use std::path::{Path, PathBuf};
use std::sync::Arc;
use chrono::Utc;
use clap::ArgMatches;
use clap::Parser;
use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use imbl::OrdSet;
use models::Version;
use rpc_toolkit::command;
use models::PackageId;
use serde::{Deserialize, Serialize};
use tokio::io::AsyncWriteExt;
use tokio::sync::Mutex;
use tracing::instrument;
use super::target::BackupTargetId;
@@ -21,42 +20,37 @@ use crate::backup::os::OsBackup;
use crate::backup::{BackupReport, ServerBackupReport};
use crate::context::RpcContext;
use crate::db::model::BackupProgress;
use crate::db::package::get_packages;
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::TmpMountGuard;
use crate::manager::BackupReturn;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::notifications::NotificationLevel;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::display_none;
use crate::util::io::dir_copy;
use crate::util::serde::IoFormat;
use crate::version::VersionT;
fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<OrdSet<PackageId>, Error> {
arg.split(',')
.map(|s| s.trim().parse::<PackageId>().map_err(Error::from))
.collect()
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct BackupParams {
target_id: BackupTargetId,
#[arg(long = "old-password")]
old_password: Option<crate::auth::PasswordType>,
#[arg(long = "package-ids")]
package_ids: Option<Vec<PackageId>>,
password: crate::auth::PasswordType,
}
#[command(rename = "create", display(display_none))]
#[instrument(skip(ctx, old_password, password))]
pub async fn backup_all(
#[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg(rename = "old-password", long = "old-password")] old_password: Option<
crate::auth::PasswordType,
>,
#[arg(
rename = "package-ids",
long = "package-ids",
parse(parse_comma_separated)
)]
package_ids: Option<OrdSet<PackageId>>,
#[arg] password: crate::auth::PasswordType,
ctx: RpcContext,
BackupParams {
target_id,
old_password,
package_ids,
password,
}: BackupParams,
) -> Result<(), Error> {
let db = ctx.db.peek().await;
let old_password_decrypted = old_password
.as_ref()
.unwrap_or(&password)
@@ -73,20 +67,9 @@ pub async fn backup_all(
)
.await?;
let package_ids = if let Some(ids) = package_ids {
ids.into_iter()
.flat_map(|package_id| {
let version = db
.as_package_data()
.as_idx(&package_id)?
.as_manifest()
.as_version()
.de()
.ok()?;
Some((package_id, version))
})
.collect()
ids.into_iter().collect()
} else {
get_packages(db.clone())?.into_iter().collect()
todo!("all installed packages");
};
if old_password.is_some() {
backup_guard.change_password(&password)?;
@@ -108,10 +91,7 @@ pub async fn backup_all(
attempted: true,
error: None,
},
packages: report
.into_iter()
.map(|((package_id, _), value)| (package_id, value))
.collect(),
packages: report,
},
None,
)
@@ -130,10 +110,7 @@ pub async fn backup_all(
attempted: true,
error: None,
},
packages: report
.into_iter()
.map(|((package_id, _), value)| (package_id, value))
.collect(),
packages: report,
},
None,
)
@@ -178,7 +155,7 @@ pub async fn backup_all(
#[instrument(skip(db, packages))]
async fn assure_backing_up(
db: &PatchDb,
packages: impl IntoIterator<Item = &(PackageId, Version)> + UnwindSafe + Send,
packages: impl IntoIterator<Item = &PackageId> + UnwindSafe + Send,
) -> Result<(), Error> {
db.mutate(|v| {
let backing_up = v
@@ -205,7 +182,7 @@ async fn assure_backing_up(
backing_up.ser(&Some(
packages
.into_iter()
.map(|(x, _)| (x.clone(), BackupProgress { complete: false }))
.map(|x| (x.clone(), BackupProgress { complete: false }))
.collect(),
))?;
Ok(())
@@ -217,62 +194,39 @@ async fn assure_backing_up(
async fn perform_backup(
ctx: &RpcContext,
backup_guard: BackupMountGuard<TmpMountGuard>,
package_ids: &OrdSet<(PackageId, Version)>,
) -> Result<BTreeMap<(PackageId, Version), PackageBackupReport>, Error> {
package_ids: &OrdSet<PackageId>,
) -> Result<BTreeMap<PackageId, PackageBackupReport>, Error> {
let mut backup_report = BTreeMap::new();
let backup_guard = Arc::new(Mutex::new(backup_guard));
let backup_guard = Arc::new(backup_guard);
for package_id in package_ids {
let (response, _report) = match ctx
.managers
.get(package_id)
.await
.ok_or_else(|| Error::new(eyre!("Manager not found"), ErrorKind::InvalidRequest))?
.backup(backup_guard.clone())
.await
{
BackupReturn::Ran { report, res } => (res, report),
BackupReturn::AlreadyRunning(report) => {
backup_report.insert(package_id.clone(), report);
continue;
}
BackupReturn::Error(error) => {
tracing::warn!("Backup thread error");
tracing::debug!("{error:?}");
backup_report.insert(
package_id.clone(),
PackageBackupReport {
error: Some("Backup thread error".to_owned()),
},
);
continue;
}
};
backup_report.insert(
package_id.clone(),
PackageBackupReport {
error: response.as_ref().err().map(|e| e.to_string()),
},
);
if let Ok(pkg_meta) = response {
backup_guard
.lock()
.await
.metadata
.package_backups
.insert(package_id.0.clone(), pkg_meta);
for id in package_ids {
if let Some(service) = &*ctx.services.get(id).await {
backup_report.insert(
id.clone(),
PackageBackupReport {
error: service
.backup(backup_guard.package_backup(id))
.await
.err()
.map(|e| e.to_string()),
},
);
}
}
let mut backup_guard = Arc::try_unwrap(backup_guard).map_err(|_| {
Error::new(
eyre!("leaked reference to BackupMountGuard"),
ErrorKind::Incoherent,
)
})?;
let ui = ctx.db.peek().await.into_ui().de()?;
let mut os_backup_file = AtomicFile::new(
backup_guard.lock().await.as_ref().join("os-backup.cbor"),
None::<PathBuf>,
)
.await
.with_kind(ErrorKind::Filesystem)?;
let mut os_backup_file =
AtomicFile::new(backup_guard.path().join("os-backup.cbor"), None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
os_backup_file
.write_all(&IoFormat::Cbor.to_vec(&OsBackup {
account: ctx.account.read().await.clone(),
@@ -284,11 +238,11 @@ async fn perform_backup(
.await
.with_kind(ErrorKind::Filesystem)?;
let luks_folder_old = backup_guard.lock().await.as_ref().join("luks.old");
let luks_folder_old = backup_guard.path().join("luks.old");
if tokio::fs::metadata(&luks_folder_old).await.is_ok() {
tokio::fs::remove_dir_all(&luks_folder_old).await?;
}
let luks_folder_bak = backup_guard.lock().await.as_ref().join("luks");
let luks_folder_bak = backup_guard.path().join("luks");
if tokio::fs::metadata(&luks_folder_bak).await.is_ok() {
tokio::fs::rename(&luks_folder_bak, &luks_folder_old).await?;
}
@@ -298,14 +252,6 @@ async fn perform_backup(
}
let timestamp = Some(Utc::now());
let mut backup_guard = Arc::try_unwrap(backup_guard)
.map_err(|_err| {
Error::new(
eyre!("Backup guard could not ensure that the others where dropped"),
ErrorKind::Unknown,
)
})?
.into_inner();
backup_guard.unencrypted_metadata.version = crate::version::Current::new().semver().into();
backup_guard.unencrypted_metadata.full = true;
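
The rewrite of perform_backup above drops the Arc<Mutex<...>> around the mount guard: the guard is shared immutably during the per-package loop, then exclusive ownership is reclaimed with Arc::try_unwrap. A std-only sketch of that ownership pattern (Guard is a stand-in for BackupMountGuard):

use std::sync::Arc;

#[derive(Debug)]
struct Guard(String);

fn main() {
    let guard = Arc::new(Guard("backup".into()));
    for i in 0..3 {
        let shared = Arc::clone(&guard);
        // read-only use; each clone is dropped before the loop ends
        println!("task {i} sees {:?}", shared.0);
    }
    // Errs if any clone is still alive, mirroring the
    // "leaked reference to BackupMountGuard" error above.
    let mut guard = Arc::try_unwrap(guard).expect("no outstanding references");
    guard.0.push_str(" (finalized)");
    println!("{guard:?}");
}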


@@ -1,33 +1,16 @@
use std::collections::{BTreeMap, BTreeSet};
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::collections::BTreeMap;
use chrono::{DateTime, Utc};
use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use models::{ImageId, OptionExt};
use models::PackageId;
use reqwest::Url;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use self::target::PackageBackupInfo;
use crate::context::RpcContext;
use crate::install::PKG_ARCHIVE_DIR;
use crate::manager::manager_seed::ManagerSeed;
use crate::context::CliContext;
use crate::net::interface::InterfaceId;
use crate::net::keys::Key;
#[allow(unused_imports)]
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{Base32, Base64, IoFormat};
use crate::util::Version;
use crate::version::{Current, VersionT};
use crate::volume::{backup_dir, Volume, VolumeId, Volumes, BACKUP_DIR};
use crate::{Error, ErrorKind, ResultExt};
use crate::util::serde::{Base32, Base64};
pub mod backup_bulk;
pub mod os;
@@ -51,14 +34,16 @@ pub struct PackageBackupReport {
pub error: Option<String>,
}
#[command(subcommands(backup_bulk::backup_all, target::target))]
pub fn backup() -> Result<(), Error> {
Ok(())
}
#[command(rename = "backup", subcommands(restore::restore_packages_rpc))]
pub fn package_backup() -> Result<(), Error> {
Ok(())
// #[command(subcommands(backup_bulk::backup_all, target::target))]
pub fn backup() -> ParentHandler {
ParentHandler::new()
.subcommand(
"create",
from_fn_async(backup_bulk::backup_all)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand("target", target::target())
}
#[derive(Deserialize, Serialize)]
@@ -70,157 +55,3 @@ struct BackupMetadata {
pub tor_keys: BTreeMap<InterfaceId, Base32<[u8; 64]>>, // DEPRECATED
pub marketplace_url: Option<Url>,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct BackupActions {
pub create: PackageProcedure,
pub restore: PackageProcedure,
}
impl BackupActions {
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.create
.validate(eos_version, volumes, image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Create"))?;
self.restore
.validate(eos_version, volumes, image_ids, false)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Backup Restore"))?;
Ok(())
}
#[instrument(skip_all)]
pub async fn create(&self, seed: Arc<ManagerSeed>) -> Result<PackageBackupInfo, Error> {
let manifest = &seed.manifest;
let mut volumes = seed.manifest.volumes.to_readonly();
let ctx = &seed.ctx;
let pkg_id = &manifest.id;
let pkg_version = &manifest.version;
volumes.insert(VolumeId::Backup, Volume::Backup { readonly: false });
let backup_dir = backup_dir(&manifest.id);
if tokio::fs::metadata(&backup_dir).await.is_err() {
tokio::fs::create_dir_all(&backup_dir).await?
}
self.create
.execute::<(), NoOutput>(
ctx,
pkg_id,
pkg_version,
ProcedureName::CreateBackup,
&volumes,
None,
None,
)
.await?
.map_err(|e| eyre!("{}", e.1))
.with_kind(crate::ErrorKind::Backup)?;
let (network_keys, tor_keys): (Vec<_>, Vec<_>) =
Key::for_package(&ctx.secret_store, pkg_id)
.await?
.into_iter()
.filter_map(|k| {
let interface = k.interface().map(|(_, i)| i)?;
Some((
(interface.clone(), Base64(k.as_bytes())),
(interface, Base32(k.tor_key().as_bytes())),
))
})
.unzip();
let marketplace_url = ctx
.db
.peek()
.await
.as_package_data()
.as_idx(&pkg_id)
.or_not_found(pkg_id)?
.expect_as_installed()?
.as_installed()
.as_marketplace_url()
.de()?;
let tmp_path = Path::new(BACKUP_DIR)
.join(pkg_id)
.join(format!("{}.s9pk", pkg_id));
let s9pk_path = ctx
.datadir
.join(PKG_ARCHIVE_DIR)
.join(pkg_id)
.join(pkg_version.as_str())
.join(format!("{}.s9pk", pkg_id));
let mut infile = File::open(&s9pk_path).await?;
let mut outfile = AtomicFile::new(&tmp_path, None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
tokio::io::copy(&mut infile, &mut *outfile)
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
format!("cp {} -> {}", s9pk_path.display(), tmp_path.display()),
)
})?;
outfile.save().await.with_kind(ErrorKind::Filesystem)?;
let timestamp = Utc::now();
let metadata_path = Path::new(BACKUP_DIR).join(pkg_id).join("metadata.cbor");
let mut outfile = AtomicFile::new(&metadata_path, None::<PathBuf>)
.await
.with_kind(ErrorKind::Filesystem)?;
let network_keys = network_keys.into_iter().collect();
let tor_keys = tor_keys.into_iter().collect();
outfile
.write_all(&IoFormat::Cbor.to_vec(&BackupMetadata {
timestamp,
network_keys,
tor_keys,
marketplace_url,
})?)
.await?;
outfile.save().await.with_kind(ErrorKind::Filesystem)?;
Ok(PackageBackupInfo {
os_version: Current::new().semver().into(),
title: manifest.title.clone(),
version: pkg_version.clone(),
timestamp,
})
}
#[instrument(skip_all)]
pub async fn restore(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
) -> Result<Option<Url>, Error> {
let mut volumes = volumes.clone();
volumes.insert(VolumeId::Backup, Volume::Backup { readonly: true });
self.restore
.execute::<(), NoOutput>(
ctx,
pkg_id,
pkg_version,
ProcedureName::RestoreBackup,
&volumes,
None,
None,
)
.await?
.map_err(|e| eyre!("{}", e.1))
.with_kind(crate::ErrorKind::Restore)?;
let metadata_path = Path::new(BACKUP_DIR).join(pkg_id).join("metadata.cbor");
let metadata: BackupMetadata = IoFormat::Cbor.from_slice(
&tokio::fs::read(&metadata_path).await.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
metadata_path.display().to_string(),
)
})?,
)?;
Ok(metadata.marketplace_url)
}
}
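
The module above swaps the #[command(subcommands(...))] attribute for a ParentHandler built at runtime. A toy stand-in for that idea (this is not the real rpc_toolkit API, just a map of names to handlers to show how runtime registration composes):

use std::collections::BTreeMap;

type Handler = fn(&str) -> Result<String, String>;

#[derive(Default)]
struct ParentRouter {
    subcommands: BTreeMap<&'static str, Handler>,
}

impl ParentRouter {
    // mirrors ParentHandler::subcommand(name, handler) in the diff
    fn subcommand(mut self, name: &'static str, handler: Handler) -> Self {
        self.subcommands.insert(name, handler);
        self
    }

    fn run(&self, name: &str, arg: &str) -> Result<String, String> {
        self.subcommands
            .get(name)
            .ok_or_else(|| format!("unknown subcommand: {name}"))?(arg)
    }
}

fn main() {
    let backup = ParentRouter::default()
        .subcommand("create", |arg| Ok(format!("creating backup of {arg}")))
        .subcommand("target", |arg| Ok(format!("listing targets for {arg}")));
    println!("{:?}", backup.run("create", "bitcoind"));
}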


@@ -1,55 +1,46 @@
use std::collections::BTreeMap;
use std::path::Path;
use std::sync::atomic::Ordering;
use std::sync::Arc;
use std::time::Duration;
use clap::ArgMatches;
use futures::future::BoxFuture;
use futures::{stream, FutureExt, StreamExt};
use clap::Parser;
use futures::{stream, StreamExt};
use models::PackageId;
use openssl::x509::X509;
use rpc_toolkit::command;
use sqlx::Connection;
use tokio::fs::File;
use serde::{Deserialize, Serialize};
use torut::onion::OnionAddressV3;
use tracing::instrument;
use super::target::BackupTargetId;
use crate::backup::os::OsBackup;
use crate::backup::BackupMetadata;
use crate::context::rpc::RpcContextConfig;
use crate::context::{RpcContext, SetupContext};
use crate::db::model::{PackageDataEntry, PackageDataEntryRestoring, StaticFiles};
use crate::disk::mount::backup::{BackupMountGuard, PackageBackupMountGuard};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::hostname::Hostname;
use crate::init::init;
use crate::install::progress::InstallProgress;
use crate::install::{download_install_s9pk, PKG_PUBLIC_DIR};
use crate::notifications::NotificationLevel;
use crate::prelude::*;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::s9pk::reader::S9pkReader;
use crate::setup::SetupStatus;
use crate::util::display_none;
use crate::util::io::dir_size;
use crate::s9pk::S9pk;
use crate::service::service_map::DownloadInstallFuture;
use crate::util::serde::IoFormat;
use crate::volume::{backup_dir, BACKUP_DIR, PKG_VOLUME_DIR};
fn parse_comma_separated(arg: &str, _: &ArgMatches) -> Result<Vec<PackageId>, Error> {
arg.split(',')
.map(|s| s.trim().parse().map_err(Error::from))
.collect()
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct RestorePackageParams {
pub ids: Vec<PackageId>,
pub target_id: BackupTargetId,
pub password: String,
}
#[command(rename = "restore", display(display_none))]
// TODO dr Why doesn't anything use this
// #[command(rename = "restore", display(display_none))]
#[instrument(skip(ctx, password))]
pub async fn restore_packages_rpc(
#[context] ctx: RpcContext,
#[arg(parse(parse_comma_separated))] ids: Vec<PackageId>,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg] password: String,
ctx: RpcContext,
RestorePackageParams {
ids,
target_id,
password,
}: RestorePackageParams,
) -> Result<(), Error> {
let fs = target_id
.load(ctx.secret_store.acquire().await?.as_mut())
@@ -57,114 +48,25 @@ pub async fn restore_packages_rpc(
let backup_guard =
BackupMountGuard::mount(TmpMountGuard::mount(&fs, ReadWrite).await?, &password).await?;
let (backup_guard, tasks, _) = restore_packages(&ctx, backup_guard, ids).await?;
let tasks = restore_packages(&ctx, backup_guard, ids).await?;
tokio::spawn(async move {
stream::iter(tasks.into_iter().map(|x| (x, ctx.clone())))
.for_each_concurrent(5, |(res, ctx)| async move {
match res.await {
(Ok(_), _) => (),
(Err(err), package_id) => {
if let Err(err) = ctx
.notification_manager
.notify(
ctx.db.clone(),
Some(package_id.clone()),
NotificationLevel::Error,
"Restoration Failure".to_string(),
format!("Error restoring package {}: {}", package_id, err),
(),
None,
)
.await
{
tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err);
};
tracing::error!("Error restoring package {}: {}", package_id, err);
stream::iter(tasks)
.for_each_concurrent(5, |(id, res)| async move {
match async { res.await?.await }.await {
Ok(_) => (),
Err(err) => {
tracing::error!("Error restoring package {}: {}", id, err);
tracing::debug!("{:?}", err);
}
}
})
.await;
if let Err(e) = backup_guard.unmount().await {
tracing::error!("Error unmounting backup drive: {}", e);
tracing::debug!("{:?}", e);
}
});
Ok(())
}
async fn approximate_progress(
rpc_ctx: &RpcContext,
progress: &mut ProgressInfo,
) -> Result<(), Error> {
for (id, size) in &mut progress.target_volume_size {
let dir = rpc_ctx.datadir.join(PKG_VOLUME_DIR).join(id).join("data");
if tokio::fs::metadata(&dir).await.is_err() {
*size = 0;
} else {
*size = dir_size(&dir, None).await?;
}
}
Ok(())
}
async fn approximate_progress_loop(
ctx: &SetupContext,
rpc_ctx: &RpcContext,
mut starting_info: ProgressInfo,
) {
loop {
if let Err(e) = approximate_progress(rpc_ctx, &mut starting_info).await {
tracing::error!("Failed to approximate restore progress: {}", e);
tracing::debug!("{:?}", e);
} else {
*ctx.setup_status.write().await = Some(Ok(starting_info.flatten()));
}
tokio::time::sleep(Duration::from_secs(1)).await;
}
}
#[derive(Debug, Default)]
struct ProgressInfo {
package_installs: BTreeMap<PackageId, Arc<InstallProgress>>,
src_volume_size: BTreeMap<PackageId, u64>,
target_volume_size: BTreeMap<PackageId, u64>,
}
impl ProgressInfo {
fn flatten(&self) -> SetupStatus {
let mut total_bytes = 0;
let mut bytes_transferred = 0;
for progress in self.package_installs.values() {
total_bytes += ((progress.size.unwrap_or(0) as f64) * 2.2) as u64;
bytes_transferred += progress.downloaded.load(Ordering::SeqCst);
bytes_transferred += ((progress.validated.load(Ordering::SeqCst) as f64) * 0.2) as u64;
bytes_transferred += progress.unpacked.load(Ordering::SeqCst);
}
for size in self.src_volume_size.values() {
total_bytes += *size;
}
for size in self.target_volume_size.values() {
bytes_transferred += *size;
}
if bytes_transferred > total_bytes {
bytes_transferred = total_bytes;
}
SetupStatus {
total_bytes: Some(total_bytes),
bytes_transferred,
complete: false,
}
}
}
#[instrument(skip(ctx))]
pub async fn recover_full_embassy(
ctx: SetupContext,
@@ -179,7 +81,7 @@ pub async fn recover_full_embassy(
)
.await?;
let os_backup_path = backup_guard.as_ref().join("os-backup.cbor");
let os_backup_path = backup_guard.path().join("os-backup.cbor");
let mut os_backup: OsBackup = IoFormat::Cbor.from_slice(
&tokio::fs::read(&os_backup_path)
.await
@@ -199,11 +101,9 @@ pub async fn recover_full_embassy(
secret_store.close().await;
let cfg = RpcContextConfig::load(ctx.config_path.clone()).await?;
init(&ctx.config).await?;
init(&cfg).await?;
let rpc_ctx = RpcContext::init(ctx.config_path.clone(), disk_guid.clone()).await?;
let rpc_ctx = RpcContext::init(&ctx.config, disk_guid.clone()).await?;
let ids: Vec<_> = backup_guard
.metadata
@@ -211,37 +111,19 @@ pub async fn recover_full_embassy(
.keys()
.cloned()
.collect();
let (backup_guard, tasks, progress_info) =
restore_packages(&rpc_ctx, backup_guard, ids).await?;
let task_consumer_rpc_ctx = rpc_ctx.clone();
tokio::select! {
_ = async move {
stream::iter(tasks.into_iter().map(|x| (x, task_consumer_rpc_ctx.clone())))
.for_each_concurrent(5, |(res, ctx)| async move {
match res.await {
(Ok(_), _) => (),
(Err(err), package_id) => {
if let Err(err) = ctx.notification_manager.notify(
ctx.db.clone(),
Some(package_id.clone()),
NotificationLevel::Error,
"Restoration Failure".to_string(), format!("Error restoring package {}: {}", package_id,err), (), None).await{
tracing::error!("Failed to notify: {}", err);
tracing::debug!("{:?}", err);
};
tracing::error!("Error restoring package {}: {}", package_id, err);
tracing::debug!("{:?}", err);
},
}
}).await;
let tasks = restore_packages(&rpc_ctx, backup_guard, ids).await?;
stream::iter(tasks)
.for_each_concurrent(5, |(id, res)| async move {
match async { res.await?.await }.await {
Ok(_) => (),
Err(err) => {
tracing::error!("Error restoring package {}: {}", id, err);
tracing::debug!("{:?}", err);
}
}
})
.await;
} => {
},
_ = approximate_progress_loop(&ctx, &rpc_ctx, progress_info) => unreachable!(concat!(module_path!(), "::approximate_progress_loop should not terminate")),
}
backup_guard.unmount().await?;
rpc_ctx.shutdown().await?;
Ok((
@@ -257,205 +139,25 @@ async fn restore_packages(
ctx: &RpcContext,
backup_guard: BackupMountGuard<TmpMountGuard>,
ids: Vec<PackageId>,
) -> Result<
(
BackupMountGuard<TmpMountGuard>,
Vec<BoxFuture<'static, (Result<(), Error>, PackageId)>>,
ProgressInfo,
),
Error,
> {
let guards = assure_restoring(ctx, ids, &backup_guard).await?;
let mut progress_info = ProgressInfo::default();
let mut tasks = Vec::with_capacity(guards.len());
for (manifest, guard) in guards {
let id = manifest.id.clone();
let (progress, task) = restore_package(ctx.clone(), manifest, guard).await?;
progress_info
.package_installs
.insert(id.clone(), progress.clone());
progress_info
.src_volume_size
.insert(id.clone(), dir_size(backup_dir(&id), None).await?);
progress_info.target_volume_size.insert(id.clone(), 0);
let package_id = id.clone();
tasks.push(
async move {
if let Err(e) = task.await {
tracing::error!("Error restoring package {}: {}", id, e);
tracing::debug!("{:?}", e);
Err(e)
} else {
Ok(())
}
}
.map(|x| (x, package_id))
.boxed(),
);
}
Ok((backup_guard, tasks, progress_info))
}
#[instrument(skip(ctx, backup_guard))]
async fn assure_restoring(
ctx: &RpcContext,
ids: Vec<PackageId>,
backup_guard: &BackupMountGuard<TmpMountGuard>,
) -> Result<Vec<(Manifest, PackageBackupMountGuard)>, Error> {
let mut guards = Vec::with_capacity(ids.len());
let mut insert_packages = BTreeMap::new();
) -> Result<BTreeMap<PackageId, DownloadInstallFuture>, Error> {
let backup_guard = Arc::new(backup_guard);
let mut tasks = BTreeMap::new();
for id in ids {
let peek = ctx.db.peek().await;
let model = peek.as_package_data().as_idx(&id);
if !model.is_none() {
return Err(Error::new(
eyre!("Can't restore over existing package: {}", id),
crate::ErrorKind::InvalidRequest,
));
}
let guard = backup_guard.mount_package_backup(&id).await?;
let s9pk_path = Path::new(BACKUP_DIR).join(&id).join(format!("{}.s9pk", id));
let mut rdr = S9pkReader::open(&s9pk_path, false).await?;
let manifest = rdr.manifest().await?;
let version = manifest.version.clone();
let progress = Arc::new(InstallProgress::new(Some(
tokio::fs::metadata(&s9pk_path).await?.len(),
)));
let public_dir_path = ctx
.datadir
.join(PKG_PUBLIC_DIR)
.join(&id)
.join(version.as_str());
tokio::fs::create_dir_all(&public_dir_path).await?;
let license_path = public_dir_path.join("LICENSE.md");
let mut dst = File::create(&license_path).await?;
tokio::io::copy(&mut rdr.license().await?, &mut dst).await?;
dst.sync_all().await?;
let instructions_path = public_dir_path.join("INSTRUCTIONS.md");
let mut dst = File::create(&instructions_path).await?;
tokio::io::copy(&mut rdr.instructions().await?, &mut dst).await?;
dst.sync_all().await?;
let icon_path = Path::new("icon").with_extension(&manifest.assets.icon_type());
let icon_path = public_dir_path.join(&icon_path);
let mut dst = File::create(&icon_path).await?;
tokio::io::copy(&mut rdr.icon().await?, &mut dst).await?;
dst.sync_all().await?;
insert_packages.insert(
id.clone(),
PackageDataEntry::Restoring(PackageDataEntryRestoring {
install_progress: progress.clone(),
static_files: StaticFiles::local(&id, &version, manifest.assets.icon_type()),
manifest: manifest.clone(),
}),
);
guards.push((manifest, guard));
}
ctx.db
.mutate(|db| {
for (id, package) in insert_packages {
db.as_package_data_mut().insert(&id, &package)?;
}
Ok(())
})
.await?;
Ok(guards)
}
#[instrument(skip(ctx, guard))]
async fn restore_package<'a>(
ctx: RpcContext,
manifest: Manifest,
guard: PackageBackupMountGuard,
) -> Result<(Arc<InstallProgress>, BoxFuture<'static, Result<(), Error>>), Error> {
let id = manifest.id.clone();
let s9pk_path = Path::new(BACKUP_DIR)
.join(&manifest.id)
.join(format!("{}.s9pk", id));
let metadata_path = Path::new(BACKUP_DIR).join(&id).join("metadata.cbor");
let metadata: BackupMetadata = IoFormat::Cbor.from_slice(
&tokio::fs::read(&metadata_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, metadata_path.display().to_string()))?,
)?;
let mut secrets = ctx.secret_store.acquire().await?;
let mut secrets_tx = secrets.begin().await?;
for (iface, key) in metadata.network_keys {
let k = key.0.as_slice();
sqlx::query!(
"INSERT INTO network_keys (package, interface, key) VALUES ($1, $2, $3) ON CONFLICT (package, interface) DO NOTHING",
id.to_string(),
iface.to_string(),
k,
)
.execute(secrets_tx.as_mut()).await?;
}
// DEPRECATED
for (iface, key) in metadata.tor_keys {
let k = key.0.as_slice();
sqlx::query!(
"INSERT INTO tor (package, interface, key) VALUES ($1, $2, $3) ON CONFLICT (package, interface) DO NOTHING",
id.to_string(),
iface.to_string(),
k,
)
.execute(secrets_tx.as_mut()).await?;
}
secrets_tx.commit().await?;
drop(secrets);
let len = tokio::fs::metadata(&s9pk_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, s9pk_path.display().to_string()))?
.len();
let file = File::open(&s9pk_path)
.await
.with_ctx(|_| (ErrorKind::Filesystem, s9pk_path.display().to_string()))?;
let progress = InstallProgress::new(Some(len));
let marketplace_url = metadata.marketplace_url;
let progress = Arc::new(progress);
ctx.db
.mutate(|db| {
db.as_package_data_mut().insert(
&id,
&PackageDataEntry::Restoring(PackageDataEntryRestoring {
install_progress: progress.clone(),
static_files: StaticFiles::local(
&id,
&manifest.version,
manifest.assets.icon_type(),
),
manifest: manifest.clone(),
}),
let backup_dir = backup_guard.clone().package_backup(&id);
let task = ctx
.services
.install(
ctx.clone(),
S9pk::open(
backup_dir.path().join(&id).with_extension("s9pk"),
Some(&id),
)
.await?,
Some(backup_dir),
)
})
.await?;
Ok((
progress.clone(),
async move {
download_install_s9pk(ctx, manifest, marketplace_url, progress, file, None).await?;
.await?;
tasks.insert(id, task);
}
guard.unmount().await?;
Ok(())
}
.boxed(),
))
Ok(tasks)
}
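
restore_packages now returns a map of DownloadInstallFutures that the callers drain with bounded concurrency. A self-contained sketch of that consumption loop (tokio and futures assumed; package names and the sleep "work" are made up):

use futures::{stream, StreamExt};

#[tokio::main]
async fn main() {
    let tasks: Vec<(&str, u64)> = vec![("bitcoind", 300), ("lnd", 100), ("mempool", 0)];
    stream::iter(tasks)
        .for_each_concurrent(5, |(id, millis)| async move {
            // stand-in for `res.await?.await` in restore_packages_rpc above
            tokio::time::sleep(std::time::Duration::from_millis(millis)).await;
            if millis == 0 {
                eprintln!("Error restoring package {id}: empty archive");
            } else {
                println!("restored {id}");
            }
        })
        .await;
}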


@@ -1,19 +1,19 @@
use std::path::{Path, PathBuf};
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::TryStreamExt;
use rpc_toolkit::command;
use rpc_toolkit::{command, from_fn_async, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use sqlx::{Executor, Postgres};
use super::{BackupTarget, BackupTargetId};
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::filesystem::ReadOnly;
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::{recovery_info, EmbassyOsRecoveryInfo};
use crate::prelude::*;
use crate::util::display_none;
use crate::util::serde::KeyVal;
#[derive(Debug, Deserialize, Serialize)]
@@ -26,18 +26,46 @@ pub struct CifsBackupTarget {
embassy_os: Option<EmbassyOsRecoveryInfo>,
}
#[command(subcommands(add, update, remove))]
pub fn cifs() -> Result<(), Error> {
Ok(())
pub fn cifs() -> ParentHandler {
ParentHandler::new()
.subcommand(
"add",
from_fn_async(add)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"update",
from_fn_async(update)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"remove",
from_fn_async(remove)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct AddParams {
pub hostname: String,
pub path: PathBuf,
pub username: String,
pub password: Option<String>,
}
#[command(display(display_none))]
pub async fn add(
#[context] ctx: RpcContext,
#[arg] hostname: String,
#[arg] path: PathBuf,
#[arg] username: String,
#[arg] password: Option<String>,
ctx: RpcContext,
AddParams {
hostname,
path,
username,
password,
}: AddParams,
) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
let cifs = Cifs {
hostname,
@@ -46,7 +74,7 @@ pub async fn add(
password,
};
let guard = TmpMountGuard::mount(&cifs, ReadOnly).await?;
let embassy_os = recovery_info(&guard).await?;
let embassy_os = recovery_info(guard.path()).await?;
guard.unmount().await?;
let path_string = Path::new("/").join(&cifs.path).display().to_string();
let id: i32 = sqlx::query!(
@@ -70,14 +98,26 @@ pub async fn add(
})
}
#[command(display(display_none))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UpdateParams {
pub id: BackupTargetId,
pub hostname: String,
pub path: PathBuf,
pub username: String,
pub password: Option<String>,
}
pub async fn update(
#[context] ctx: RpcContext,
#[arg] id: BackupTargetId,
#[arg] hostname: String,
#[arg] path: PathBuf,
#[arg] username: String,
#[arg] password: Option<String>,
ctx: RpcContext,
UpdateParams {
id,
hostname,
path,
username,
password,
}: UpdateParams,
) -> Result<KeyVal<BackupTargetId, BackupTarget>, Error> {
let id = if let BackupTargetId::Cifs { id } = id {
id
@@ -94,7 +134,7 @@ pub async fn update(
password,
};
let guard = TmpMountGuard::mount(&cifs, ReadOnly).await?;
let embassy_os = recovery_info(&guard).await?;
let embassy_os = recovery_info(guard.path()).await?;
guard.unmount().await?;
let path_string = Path::new("/").join(&cifs.path).display().to_string();
if sqlx::query!(
@@ -127,8 +167,14 @@ pub async fn update(
})
}
#[command(display(display_none))]
pub async fn remove(#[context] ctx: RpcContext, #[arg] id: BackupTargetId) -> Result<(), Error> {
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct RemoveParams {
pub id: BackupTargetId,
}
pub async fn remove(ctx: RpcContext, RemoveParams { id }: RemoveParams) -> Result<(), Error> {
let id = if let BackupTargetId::Cifs { id } = id {
id
} else {
@@ -189,7 +235,7 @@ where
};
let embassy_os = async {
let guard = TmpMountGuard::mount(&mount_info, ReadOnly).await?;
let embassy_os = recovery_info(&guard).await?;
let embassy_os = recovery_info(guard.path()).await?;
guard.unmount().await?;
Ok::<_, Error>(embassy_os)
}
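
Call sites above now pass guard.path() to recovery_info instead of &guard, because the guard types expose an explicit accessor through GenericMountGuard. A toy illustration of that accessor style (names are stand-ins, not the real trait):

use std::path::{Path, PathBuf};

trait MountGuardLike {
    fn path(&self) -> &Path;
}

struct TmpGuard {
    mountpoint: PathBuf,
}

impl MountGuardLike for TmpGuard {
    fn path(&self) -> &Path {
        &self.mountpoint
    }
}

// The real recovery_info inspects the mounted filesystem; this one just echoes
// the path so the sketch stays self-contained.
fn recovery_info(path: &Path) -> String {
    format!("scanning {}", path.display())
}

fn main() {
    let guard = TmpGuard {
        mountpoint: PathBuf::from("/media/startos/backup"),
    };
    println!("{}", recovery_info(guard.path()));
}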


@@ -1,13 +1,14 @@
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use async_trait::async_trait;
use chrono::{DateTime, Utc};
use clap::ArgMatches;
use clap::builder::ValueParserFactory;
use clap::Parser;
use color_eyre::eyre::eyre;
use digest::generic_array::GenericArray;
use digest::OutputSizeUser;
use rpc_toolkit::command;
use models::PackageId;
use rpc_toolkit::{command, from_fn_async, AnyContext, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use sqlx::{Executor, Postgres};
@@ -15,17 +16,19 @@ use tokio::sync::Mutex;
use tracing::instrument;
use self::cifs::CifsBackupTarget;
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::cifs::Cifs;
use crate::disk::mount::filesystem::{FileSystem, MountType, ReadWrite};
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::util::PartitionInfo;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{deserialize_from_str, display_serializable, serialize_display};
use crate::util::{display_none, Version};
use crate::util::clap::FromStrParser;
use crate::util::serde::{
deserialize_from_str, display_serializable, serialize_display, HandlerExtSerde, WithIoFormat,
};
use crate::util::Version;
pub mod cifs;
@@ -84,6 +87,12 @@ impl std::str::FromStr for BackupTargetId {
}
}
}
impl ValueParserFactory for BackupTargetId {
type Parser = FromStrParser<Self>;
fn value_parser() -> Self::Parser {
FromStrParser::new()
}
}
impl<'de> Deserialize<'de> for BackupTargetId {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
@@ -108,9 +117,8 @@ pub enum BackupTargetFS {
Disk(BlockDev<PathBuf>),
Cifs(Cifs),
}
#[async_trait]
impl FileSystem for BackupTargetFS {
async fn mount<P: AsRef<Path> + Send + Sync>(
async fn mount<P: AsRef<Path> + Send>(
&self,
mountpoint: P,
mount_type: MountType,
@@ -130,15 +138,29 @@ impl FileSystem for BackupTargetFS {
}
}
#[command(subcommands(cifs::cifs, list, info, mount, umount))]
pub fn target() -> Result<(), Error> {
Ok(())
// #[command(subcommands(cifs::cifs, list, info, mount, umount))]
pub fn target() -> ParentHandler {
ParentHandler::new()
.subcommand("cifs", cifs::cifs())
.subcommand(
"list",
from_fn_async(list)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"info",
from_fn_async(info)
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|params, info| {
Ok(display_backup_info(params.params, info))
})
.with_remote_cli::<CliContext>(),
)
}
#[command(display(display_serializable))]
pub async fn list(
#[context] ctx: RpcContext,
) -> Result<BTreeMap<BackupTargetId, BackupTarget>, Error> {
// #[command(display(display_serializable))]
pub async fn list(ctx: RpcContext) -> Result<BTreeMap<BackupTargetId, BackupTarget>, Error> {
let mut sql_handle = ctx.secret_store.acquire().await?;
let (disks_res, cifs) = tokio::try_join!(
crate::disk::util::list(&ctx.os_partitions),
@@ -187,11 +209,11 @@ pub struct PackageBackupInfo {
pub timestamp: DateTime<Utc>,
}
fn display_backup_info(info: BackupInfo, matches: &ArgMatches) {
fn display_backup_info(params: WithIoFormat<InfoParams>, info: BackupInfo) {
use prettytable::*;
if matches.is_present("format") {
return display_serializable(info, matches);
if let Some(format) = params.format {
return display_serializable(format, info);
}
let mut table = Table::new();
@@ -223,12 +245,21 @@ fn display_backup_info(info: BackupInfo, matches: &ArgMatches) {
table.print_tty(false).unwrap();
}
#[command(display(display_backup_info))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct InfoParams {
target_id: BackupTargetId,
password: String,
}
#[instrument(skip(ctx, password))]
pub async fn info(
#[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg] password: String,
ctx: RpcContext,
InfoParams {
target_id,
password,
}: InfoParams,
) -> Result<BackupInfo, Error> {
let guard = BackupMountGuard::mount(
TmpMountGuard::mount(
@@ -254,17 +285,26 @@ lazy_static::lazy_static! {
Mutex::new(BTreeMap::new());
}
#[command]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct MountParams {
target_id: BackupTargetId,
password: String,
}
#[instrument(skip_all)]
pub async fn mount(
#[context] ctx: RpcContext,
#[arg(rename = "target-id")] target_id: BackupTargetId,
#[arg] password: String,
ctx: RpcContext,
MountParams {
target_id,
password,
}: MountParams,
) -> Result<String, Error> {
let mut mounts = USER_MOUNTS.lock().await;
if let Some(existing) = mounts.get(&target_id) {
return Ok(existing.as_ref().display().to_string());
return Ok(existing.path().display().to_string());
}
let guard = BackupMountGuard::mount(
@@ -280,19 +320,23 @@ pub async fn mount(
)
.await?;
let res = guard.as_ref().display().to_string();
let res = guard.path().display().to_string();
mounts.insert(target_id, guard);
Ok(res)
}
#[command(display(display_none))]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UmountParams {
target_id: Option<BackupTargetId>,
}
#[instrument(skip_all)]
pub async fn umount(
#[context] _ctx: RpcContext,
#[arg(rename = "target-id")] target_id: Option<BackupTargetId>,
) -> Result<(), Error> {
let mut mounts = USER_MOUNTS.lock().await;
pub async fn umount(_: RpcContext, UmountParams { target_id }: UmountParams) -> Result<(), Error> {
let mut mounts = USER_MOUNTS.lock().await; // TODO: move to context
if let Some(target_id) = target_id {
if let Some(existing) = mounts.remove(&target_id) {
existing.unmount().await?;
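
BackupTargetId above gains a ValueParserFactory impl (backed by the crate's FromStrParser helper) so it can be used directly as a clap field. A simpler, standard-clap sketch of the same effect, using a plain parser function instead of the factory (hypothetical TargetId shape):

use clap::Parser;
use std::str::FromStr;

#[derive(Clone, Debug)]
enum TargetId {
    Disk { logicalname: String },
    Cifs { id: i32 },
}

impl FromStr for TargetId {
    type Err = String;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.split_once('-') {
            Some(("disk", name)) => Ok(TargetId::Disk {
                logicalname: name.into(),
            }),
            Some(("cifs", id)) => Ok(TargetId::Cifs {
                id: id.parse().map_err(|e| format!("{e}"))?,
            }),
            _ => Err(format!("invalid target id: {s}")),
        }
    }
}

#[derive(Parser, Debug)]
struct InfoArgs {
    // a parser function is enough here; the diff implements ValueParserFactory
    // so the type works everywhere without per-field annotations
    #[arg(value_parser = TargetId::from_str)]
    target_id: TargetId,
}

fn main() {
    println!("{:?}", InfoArgs::parse_from(["info", "cifs-3"]));
}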


@@ -1,163 +0,0 @@
use avahi_sys::{
self, avahi_client_errno, avahi_entry_group_add_service, avahi_entry_group_commit,
avahi_strerror, AvahiClient,
};
fn log_str_error(action: &str, e: i32) {
unsafe {
let e_str = avahi_strerror(e);
eprintln!(
"Could not {}: {:?}",
action,
std::ffi::CStr::from_ptr(e_str)
);
}
}
pub fn main() {
let aliases: Vec<_> = std::env::args().skip(1).collect();
unsafe {
let simple_poll = avahi_sys::avahi_simple_poll_new();
let poll = avahi_sys::avahi_simple_poll_get(simple_poll);
let mut box_err = Box::pin(0 as i32);
let err_c: *mut i32 = box_err.as_mut().get_mut();
let avahi_client = avahi_sys::avahi_client_new(
poll,
avahi_sys::AvahiClientFlags::AVAHI_CLIENT_NO_FAIL,
Some(client_callback),
std::ptr::null_mut(),
err_c,
);
if avahi_client == std::ptr::null_mut::<AvahiClient>() {
log_str_error("create Avahi client", *box_err);
panic!("Failed to create Avahi Client");
}
let group = avahi_sys::avahi_entry_group_new(
avahi_client,
Some(entry_group_callback),
std::ptr::null_mut(),
);
if group == std::ptr::null_mut() {
log_str_error("create Avahi entry group", avahi_client_errno(avahi_client));
panic!("Failed to create Avahi Entry Group");
}
let mut hostname_buf = vec![0];
let hostname_raw = avahi_sys::avahi_client_get_host_name_fqdn(avahi_client);
hostname_buf.extend_from_slice(std::ffi::CStr::from_ptr(hostname_raw).to_bytes_with_nul());
let buflen = hostname_buf.len();
debug_assert!(hostname_buf.ends_with(b".local\0"));
debug_assert!(!hostname_buf[..(buflen - 7)].contains(&b'.'));
// assume fixed length prefix on hostname due to local address
hostname_buf[0] = (buflen - 8) as u8; // set the prefix length to len - 8 (leading byte, .local, nul) for the main address
hostname_buf[buflen - 7] = 5; // set the prefix length to 5 for "local"
let mut res;
let http_tcp_cstr =
std::ffi::CString::new("_http._tcp").expect("Could not cast _http._tcp to c string");
res = avahi_entry_group_add_service(
group,
avahi_sys::AVAHI_IF_UNSPEC,
avahi_sys::AVAHI_PROTO_UNSPEC,
avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_USE_MULTICAST,
hostname_raw,
http_tcp_cstr.as_ptr(),
std::ptr::null(),
std::ptr::null(),
443,
// below is a secret final argument that the type signature of this function does not tell you that it
// needs. This is because the C lib function takes a variable number of final arguments indicating the
// desired TXT records to add to this service entry. The way it decides when to stop taking arguments
// from the stack and dereferencing them is when it finds a null pointer...because fuck you, that's why.
// The consequence of this is that forgetting this last argument will cause segfaults or other undefined
// behavior. Welcome back to the stone age motherfucker.
std::ptr::null::<libc::c_char>(),
);
if res < avahi_sys::AVAHI_OK {
log_str_error("add service to Avahi entry group", res);
panic!("Failed to load Avahi services");
}
eprintln!("Published {:?}", std::ffi::CStr::from_ptr(hostname_raw));
for alias in aliases {
let lan_address = alias + ".local";
let lan_address_ptr = std::ffi::CString::new(lan_address)
.expect("Could not cast lan address to c string");
res = avahi_sys::avahi_entry_group_add_record(
group,
avahi_sys::AVAHI_IF_UNSPEC,
avahi_sys::AVAHI_PROTO_UNSPEC,
avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_USE_MULTICAST
| avahi_sys::AvahiPublishFlags_AVAHI_PUBLISH_ALLOW_MULTIPLE,
lan_address_ptr.as_ptr(),
avahi_sys::AVAHI_DNS_CLASS_IN as u16,
avahi_sys::AVAHI_DNS_TYPE_CNAME as u16,
avahi_sys::AVAHI_DEFAULT_TTL,
hostname_buf.as_ptr().cast(),
hostname_buf.len(),
);
if res < avahi_sys::AVAHI_OK {
log_str_error("add CNAME record to Avahi entry group", res);
panic!("Failed to load Avahi services");
}
eprintln!("Published {:?}", lan_address_ptr);
}
let commit_err = avahi_entry_group_commit(group);
if commit_err < avahi_sys::AVAHI_OK {
log_str_error("reset Avahi entry group", commit_err);
panic!("Failed to load Avahi services: reset");
}
}
std::thread::park()
}
unsafe extern "C" fn entry_group_callback(
_group: *mut avahi_sys::AvahiEntryGroup,
state: avahi_sys::AvahiEntryGroupState,
_userdata: *mut core::ffi::c_void,
) {
match state {
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_FAILURE => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_FAILURE");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_COLLISION => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_COLLISION");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_UNCOMMITED => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_UNCOMMITED");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_ESTABLISHED => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_ESTABLISHED");
}
avahi_sys::AvahiEntryGroupState_AVAHI_ENTRY_GROUP_REGISTERING => {
eprintln!("AvahiCallback: EntryGroupState = AVAHI_ENTRY_GROUP_REGISTERING");
}
other => {
eprintln!("AvahiCallback: EntryGroupState = {}", other);
}
}
}
unsafe extern "C" fn client_callback(
_group: *mut avahi_sys::AvahiClient,
state: avahi_sys::AvahiClientState,
_userdata: *mut core::ffi::c_void,
) {
match state {
avahi_sys::AvahiClientState_AVAHI_CLIENT_FAILURE => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_FAILURE");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_RUNNING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_RUNNING");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_CONNECTING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_CONNECTING");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_COLLISION => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_COLLISION");
}
avahi_sys::AvahiClientState_AVAHI_CLIENT_S_REGISTERING => {
eprintln!("AvahiCallback: ClientState = AVAHI_CLIENT_S_REGISTERING");
}
other => {
eprintln!("AvahiCallback: ClientState = {}", other);
}
}
}


@@ -0,0 +1,38 @@
use std::ffi::OsString;
use rpc_toolkit::CliApp;
use serde_json::Value;
use crate::service::cli::{ContainerCliContext, ContainerClientConfig};
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
pub fn main(args: impl IntoIterator<Item = OsString>) {
EmbassyLogger::init();
if let Err(e) = CliApp::new(
|cfg: ContainerClientConfig| Ok(ContainerCliContext::init(cfg)),
crate::service::service_effect_handler::service_effect_handler(),
)
.run(args)
{
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => {
if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
}
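
This new container_cli entrypoint (and the reworked start_cli further down) prints RPC errors by inspecting the optional JSON data attached to them. A standalone sketch of that reporting logic with a stand-in error type (the real one is rpc_toolkit's RpcError):

use serde_json::{json, Value};

struct FakeRpcError {
    code: i32,
    message: String,
    data: Option<Value>,
}

fn report(e: FakeRpcError) -> i32 {
    match e.data {
        Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
        Some(Value::Object(o)) => {
            if let Some(Value::String(s)) = o.get("details") {
                eprintln!("{}: {}", e.message, s);
            }
        }
        Some(a) => eprintln!("{}: {}", e.message, a),
        None => eprintln!("{}", e.message),
    }
    e.code
}

fn main() {
    let code = report(FakeRpcError {
        code: 1,
        message: "auth.reset-password failed".into(),
        data: Some(json!({ "details": "old password does not match" })),
    });
    std::process::exit(code);
}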


@@ -1,45 +1,54 @@
use std::collections::VecDeque;
use std::ffi::OsString;
use std::path::Path;
#[cfg(feature = "avahi-alias")]
pub mod avahi_alias;
#[cfg(feature = "container-runtime")]
pub mod container_cli;
pub mod deprecated;
#[cfg(feature = "cli")]
pub mod start_cli;
#[cfg(feature = "daemon")]
pub mod start_init;
#[cfg(feature = "sdk")]
pub mod start_sdk;
#[cfg(feature = "daemon")]
pub mod startd;
fn select_executable(name: &str) -> Option<fn()> {
fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
match name {
#[cfg(feature = "avahi-alias")]
"avahi-alias" => Some(avahi_alias::main),
#[cfg(feature = "cli")]
"start-cli" => Some(start_cli::main),
#[cfg(feature = "sdk")]
"start-sdk" => Some(start_sdk::main),
#[cfg(feature = "container-runtime")]
"start-cli" => Some(container_cli::main),
#[cfg(feature = "daemon")]
"startd" => Some(startd::main),
"embassy-cli" => Some(|| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|| deprecated::renamed("embassy-sdk", "start-sdk")),
"embassyd" => Some(|| deprecated::renamed("embassyd", "startd")),
"embassy-init" => Some(|| deprecated::removed("embassy-init")),
"embassy-cli" => Some(|_| deprecated::renamed("embassy-cli", "start-cli")),
"embassy-sdk" => Some(|_| deprecated::renamed("embassy-sdk", "start-sdk")),
"embassyd" => Some(|_| deprecated::renamed("embassyd", "startd")),
"embassy-init" => Some(|_| deprecated::removed("embassy-init")),
_ => None,
}
}
pub fn startbox() {
let args = std::env::args().take(2).collect::<Vec<_>>();
let executable = args
.get(0)
.and_then(|s| Path::new(&*s).file_name())
.and_then(|s| s.to_str());
if let Some(x) = executable.and_then(|s| select_executable(&s)) {
x()
} else {
eprintln!("unknown executable: {}", executable.unwrap_or("N/A"));
std::process::exit(1);
let mut args = std::env::args_os().collect::<VecDeque<_>>();
for _ in 0..2 {
if let Some(s) = args.pop_front() {
if let Some(x) = Path::new(&*s)
.file_name()
.and_then(|s| s.to_str())
.and_then(|s| select_executable(&s))
{
args.push_front(s);
return x(args);
}
}
}
let args = std::env::args().collect::<VecDeque<_>>();
eprintln!(
"unknown executable: {}",
args.get(1)
.or_else(|| args.get(0))
.map(|s| s.as_str())
.unwrap_or("N/A")
);
std::process::exit(1);
}
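
startbox now dispatches on the first two argv entries instead of only argv[0], so the container can run "startbox start-cli ..." as well as a symlinked binary name. A self-contained sketch of that busybox-style dispatch (handlers are stand-ins):

use std::collections::VecDeque;
use std::ffi::OsString;
use std::path::Path;

fn select_executable(name: &str) -> Option<fn(VecDeque<OsString>)> {
    match name {
        "start-cli" => Some(|args| println!("start-cli invoked with {} args", args.len())),
        "startd" => Some(|args| println!("startd invoked with {} args", args.len())),
        _ => None,
    }
}

fn main() {
    let mut args: VecDeque<OsString> = ["/usr/bin/start-cli", "server", "logs"]
        .map(OsString::from)
        .into();
    // check argv[0], then argv[1], the same window the diff scans
    for _ in 0..2 {
        if let Some(s) = args.pop_front() {
            if let Some(run) = Path::new(&s)
                .file_name()
                .and_then(|s| s.to_str())
                .and_then(select_executable)
            {
                args.push_front(s);
                return run(args);
            }
        }
    }
    eprintln!("unknown executable");
    std::process::exit(1);
}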


@@ -1,62 +1,39 @@
use clap::Arg;
use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError;
use std::ffi::OsString;
use rpc_toolkit::CliApp;
use serde_json::Value;
use crate::context::config::ClientConfig;
use crate::context::CliContext;
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> {
run_cli!({
command: crate::main_api,
app: app => app
.name("StartOS CLI")
.version(&**VERSION_STRING)
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
)
.arg(Arg::with_name("host").long("host").short('h').takes_value(true))
.arg(Arg::with_name("proxy").long("proxy").short('p').takes_value(true)),
context: matches => {
EmbassyLogger::init();
CliContext::init(matches)?
},
exit: |e: RpcError| {
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => if let Some(Value::String(s)) = o.get("details") {
pub fn main(args: impl IntoIterator<Item = OsString>) {
EmbassyLogger::init();
if let Err(e) = CliApp::new(
|cfg: ClientConfig| Ok(CliContext::init(cfg.load()?)?),
crate::main_api(),
)
.run(args)
{
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => {
if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
});
Ok(())
}
pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
std::process::exit(e.code);
}
}


@@ -1,142 +0,0 @@
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, run_cli, Context};
use serde_json::Value;
use crate::procedure::js_scripts::ExecuteArgs;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
struct DenoContext;
impl Context for DenoContext {}
#[command(subcommands(execute, sandbox))]
fn deno_api() -> Result<(), Error> {
Ok(())
}
#[command(cli_only, display(display_serializable))]
async fn execute(
#[arg(stdin, parse(parse_stdin_deserializable))] arg: ExecuteArgs,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<Result<Value, (i32, String)>, Error> {
let ExecuteArgs {
procedure,
directory,
pkg_id,
pkg_version,
name,
volumes,
input,
} = arg;
PackageLogger::init(&pkg_id);
// procedure
// .execute_impl(&directory, &pkg_id, &pkg_version, name, &volumes, input)
// .await
todo!("@DRB Remove")
}
#[command(cli_only, display(display_serializable))]
async fn sandbox(
#[arg(stdin, parse(parse_stdin_deserializable))] arg: ExecuteArgs,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<Result<Value, (i32, String)>, Error> {
let ExecuteArgs {
procedure,
directory,
pkg_id,
pkg_version,
name,
volumes,
input,
} = arg;
PackageLogger::init(&pkg_id);
// procedure
// .sandboxed_impl(&directory, &pkg_id, &pkg_version, &volumes, input, name)
// .await
todo!("@DRB Remove")
}
use tracing::Subscriber;
use tracing_subscriber::util::SubscriberInitExt;
#[derive(Clone)]
struct PackageLogger {}
impl PackageLogger {
fn base_subscriber(id: &PackageId) -> impl Subscriber {
use tracing_error::ErrorLayer;
use tracing_subscriber::prelude::*;
use tracing_subscriber::{fmt, EnvFilter};
let filter_layer = EnvFilter::default().add_directive(
format!("{}=warn", std::module_path!().split("::").next().unwrap())
.parse()
.unwrap(),
);
let fmt_layer = fmt::layer().with_writer(std::io::stderr).with_target(true);
let journald_layer = tracing_journald::layer()
.unwrap()
.with_syslog_identifier(format!("{id}.embassy"));
let sub = tracing_subscriber::registry()
.with(filter_layer)
.with(fmt_layer)
.with(journald_layer)
.with(ErrorLayer::default());
sub
}
pub fn init(id: &PackageId) -> Self {
Self::base_subscriber(id).init();
color_eyre::install().unwrap_or_else(|_| tracing::warn!("tracing too many times"));
Self {}
}
}
fn inner_main() -> Result<(), Error> {
run_cli!({
command: deno_api,
app: app => app
.name("StartOS Deno Executor")
.version(&**VERSION_STRING),
context: _m => DenoContext,
exit: |e: RpcError| {
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
});
Ok(())
}
pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
}
}


@@ -1,5 +1,5 @@
use std::net::{Ipv6Addr, SocketAddr};
use std::path::{Path, PathBuf};
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
@@ -7,7 +7,7 @@ use helpers::NonDetachingJoinHandle;
use tokio::process::Command;
use tracing::instrument;
use crate::context::rpc::RpcContextConfig;
use crate::context::config::ServerConfig;
use crate::context::{DiagnosticContext, InstallContext, SetupContext};
use crate::disk::fsck::{RepairStrategy, RequiresReboot};
use crate::disk::main::DEFAULT_PASSWORD;
@@ -21,7 +21,7 @@ use crate::util::Invoke;
use crate::{Error, ErrorKind, ResultExt, PLATFORM};
#[instrument(skip_all)]
async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
async fn setup_or_init(config: &ServerConfig) -> Result<Option<Shutdown>, Error> {
let song = NonDetachingJoinHandle::from(tokio::spawn(async {
loop {
BEP.play().await.unwrap();
@@ -82,13 +82,12 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
.invoke(crate::ErrorKind::OpenSsh)
.await?;
let ctx = InstallContext::init(cfg_path).await?;
let ctx = InstallContext::init().await?;
let server = WebServer::install(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
drop(song);
tokio::time::sleep(Duration::from_secs(1)).await; // let the record state that I hate this
@@ -109,26 +108,24 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
.await
.is_err()
{
let ctx = SetupContext::init(cfg_path).await?;
let ctx = SetupContext::init(config)?;
let server = WebServer::setup(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
drop(song);
tokio::time::sleep(Duration::from_secs(1)).await; // let the record state that I hate this
CHIME.play().await?;
ctx.shutdown
.subscribe()
.recv()
.await
.expect("context dropped");
let mut shutdown = ctx.shutdown.subscribe();
shutdown.recv().await.expect("context dropped");
server.shutdown().await;
drop(shutdown);
tokio::task::yield_now().await;
if let Err(e) = Command::new("killall")
.arg("firefox-esr")
@@ -139,13 +136,12 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
tracing::debug!("{:?}", e);
}
} else {
let cfg = RpcContextConfig::load(cfg_path).await?;
let guid_string = tokio::fs::read_to_string("/media/embassy/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
.await?;
let guid = guid_string.trim();
let requires_reboot = crate::disk::main::import(
guid,
cfg.datadir(),
config.datadir(),
if tokio::fs::metadata(REPAIR_DISK_PATH).await.is_ok() {
RepairStrategy::Aggressive
} else {
@@ -164,13 +160,13 @@ async fn setup_or_init(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Er
.with_ctx(|_| (crate::ErrorKind::Filesystem, REPAIR_DISK_PATH))?;
}
if requires_reboot.0 {
crate::disk::main::export(guid, cfg.datadir()).await?;
crate::disk::main::export(guid, config.datadir()).await?;
Command::new("reboot")
.invoke(crate::ErrorKind::Unknown)
.await?;
}
tracing::info!("Loaded Disk");
crate::init::init(&cfg).await?;
crate::init::init(config).await?;
drop(song);
}
@@ -196,7 +192,7 @@ async fn run_script_if_exists<P: AsRef<Path>>(path: P) {
}
#[instrument(skip_all)]
async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
async fn inner_main(config: &ServerConfig) -> Result<Option<Shutdown>, Error> {
if &*PLATFORM == "raspberrypi" && tokio::fs::metadata(STANDBY_MODE_PATH).await.is_ok() {
tokio::fs::remove_file(STANDBY_MODE_PATH).await?;
Command::new("sync").invoke(ErrorKind::Filesystem).await?;
@@ -208,7 +204,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
run_script_if_exists("/media/embassy/config/preinit.sh").await;
let res = match setup_or_init(cfg_path.clone()).await {
let res = match setup_or_init(config).await {
Err(e) => {
async move {
tracing::error!("{}", e.source);
@@ -216,7 +212,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
crate::sound::BEETHOVEN.play().await?;
let ctx = DiagnosticContext::init(
cfg_path,
config,
if tokio::fs::metadata("/media/embassy/config/disk.guid")
.await
.is_ok()
@@ -231,14 +227,12 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
None
},
e,
)
.await?;
)?;
let server = WebServer::diagnostic(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
let shutdown = ctx.shutdown.subscribe().recv().await.unwrap();
@@ -256,23 +250,13 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
res
}
pub fn main() {
let matches = clap::App::new("start-init")
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
)
.get_matches();
let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned());
pub fn main(config: &ServerConfig) {
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("failed to initialize runtime");
rt.block_on(inner_main(cfg_path))
rt.block_on(inner_main(config))
};
match res {


@@ -1,61 +0,0 @@
use rpc_toolkit::run_cli;
use rpc_toolkit::yajrc::RpcError;
use serde_json::Value;
use crate::context::SdkContext;
use crate::util::logger::EmbassyLogger;
use crate::version::{Current, VersionT};
use crate::Error;
lazy_static::lazy_static! {
static ref VERSION_STRING: String = Current::new().semver().to_string();
}
fn inner_main() -> Result<(), Error> {
run_cli!({
command: crate::portable_api,
app: app => app
.name("StartOS SDK")
.version(&**VERSION_STRING)
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
),
context: matches => {
if let Err(_) = std::env::var("RUST_LOG") {
std::env::set_var("RUST_LOG", "embassy=warn,js_engine=warn");
}
EmbassyLogger::init();
SdkContext::init(matches)?
},
exit: |e: RpcError| {
match e.data {
Some(Value::String(s)) => eprintln!("{}: {}", e.message, s),
Some(Value::Object(o)) => if let Some(Value::String(s)) = o.get("details") {
eprintln!("{}: {}", e.message, s);
if let Some(Value::String(s)) = o.get("debug") {
tracing::debug!("{}", s)
}
}
Some(a) => eprintln!("{}: {}", e.message, a),
None => eprintln!("{}", e.message),
}
std::process::exit(e.code);
}
});
Ok(())
}
pub fn main() {
match inner_main() {
Ok(_) => (),
Err(e) => {
eprintln!("{}", e.source);
tracing::debug!("{:?}", e.source);
drop(e.source);
std::process::exit(e.kind as i32)
}
}
}

View File

@@ -1,12 +1,15 @@
use std::ffi::OsString;
use std::net::{Ipv6Addr, SocketAddr};
use std::path::{Path, PathBuf};
use std::path::Path;
use std::sync::Arc;
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::{FutureExt, TryFutureExt};
use tokio::signal::unix::signal;
use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::context::{DiagnosticContext, RpcContext};
use crate::net::web_server::WebServer;
use crate::shutdown::Shutdown;
@@ -15,10 +18,10 @@ use crate::util::logger::EmbassyLogger;
use crate::{Error, ErrorKind, ResultExt};
#[instrument(skip_all)]
async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error> {
async fn inner_main(config: &ServerConfig) -> Result<Option<Shutdown>, Error> {
let (rpc_ctx, server, shutdown) = async {
let rpc_ctx = RpcContext::init(
cfg_path,
config,
Arc::new(
tokio::fs::read_to_string("/media/embassy/config/disk.guid") // unique identifier for volume group - keeps track of the disk that goes with your embassy
.await?
@@ -31,8 +34,7 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
let server = WebServer::main(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
rpc_ctx.clone(),
)
.await?;
)?;
let mut shutdown_recv = rpc_ctx.shutdown.subscribe();
@@ -102,32 +104,23 @@ async fn inner_main(cfg_path: Option<PathBuf>) -> Result<Option<Shutdown>, Error
Ok(shutdown)
}
pub fn main() {
pub fn main(args: impl IntoIterator<Item = OsString>) {
EmbassyLogger::init();
let config = ServerConfig::parse_from(args).load().unwrap();
if !Path::new("/run/embassy/initialized").exists() {
super::start_init::main();
super::start_init::main(&config);
std::fs::write("/run/embassy/initialized", "").unwrap();
}
let matches = clap::App::new("startd")
.arg(
clap::Arg::with_name("config")
.short('c')
.long("config")
.takes_value(true),
)
.get_matches();
let cfg_path = matches.value_of("config").map(|p| Path::new(p).to_owned());
let res = {
let rt = tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("failed to initialize runtime");
rt.block_on(async {
match inner_main(cfg_path.clone()).await {
match inner_main(&config).await {
Ok(a) => Ok(a),
Err(e) => {
async {
@@ -135,7 +128,7 @@ pub fn main() {
tracing::debug!("{:?}", e.source);
crate::sound::BEETHOVEN.play().await?;
let ctx = DiagnosticContext::init(
cfg_path,
&config,
if tokio::fs::metadata("/media/embassy/config/disk.guid")
.await
.is_ok()
@@ -150,14 +143,12 @@ pub fn main() {
None
},
e,
)
.await?;
)?;
let server = WebServer::diagnostic(
SocketAddr::new(Ipv6Addr::UNSPECIFIED.into(), 80),
ctx.clone(),
)
.await?;
)?;
let mut shutdown = ctx.shutdown.subscribe();

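With this change the daemon entrypoint no longer parses a `--config` path itself: it takes the raw OS arguments, lets clap build a `ServerConfig`, and then layers in the on-disk config files via `load()`. A minimal sketch of a wrapper binary, assuming the entrypoint above is re-exported at the crate root (the `startos::main` path is an assumption for illustration):

fn main() {
    // Forward the raw arguments; ServerConfig::parse_from(args) handles the flags,
    // and load() fills any still-unset fields from the config files on disk.
    startos::main(std::env::args_os());
}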
View File

@@ -1,22 +1,12 @@
use std::collections::{BTreeMap, BTreeSet};
use color_eyre::eyre::eyre;
use models::ImageId;
use patch_db::HasModel;
use models::PackageId;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use super::{Config, ConfigSpec};
use crate::context::RpcContext;
use crate::dependencies::Dependencies;
#[allow(unused_imports)]
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::status::health_check::HealthCheckId;
use crate::util::Version;
use crate::volume::Volumes;
use crate::{Error, ResultExt};
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@@ -25,90 +15,6 @@ pub struct ConfigRes {
pub spec: ConfigSpec,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct ConfigActions {
pub get: PackageProcedure,
pub set: PackageProcedure,
}
impl ConfigActions {
#[instrument(skip_all)]
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
self.get
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Get"))?;
self.set
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Config Set"))?;
Ok(())
}
#[instrument(skip_all)]
pub async fn get(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
) -> Result<ConfigRes, Error> {
self.get
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::GetConfig,
volumes,
None::<()>,
None,
)
.await
.and_then(|res| {
res.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::ConfigGen))
})
}
#[instrument(skip_all)]
pub async fn set(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
dependencies: &Dependencies,
volumes: &Volumes,
input: &Config,
) -> Result<SetResult, Error> {
let res: SetResult = self
.set
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::SetConfig,
volumes,
Some(input),
None,
)
.await
.and_then(|res| {
res.map_err(|e| {
Error::new(eyre!("{}", e.1), crate::ErrorKind::ConfigRulesViolation)
})
})?;
Ok(SetResult {
depends_on: res
.depends_on
.into_iter()
.filter(|(pkg, _)| dependencies.0.contains_key(pkg))
.collect(),
})
}
}
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct SetResult {

View File

@@ -1,24 +1,22 @@
use std::collections::BTreeMap;
use std::path::PathBuf;
use std::sync::Arc;
use std::time::Duration;
use clap::Parser;
use color_eyre::eyre::eyre;
use indexmap::IndexSet;
use itertools::Itertools;
use models::{ErrorKind, OptionExt};
use models::{ErrorKind, OptionExt, PackageId};
use patch_db::value::InternedString;
use patch_db::Value;
use regex::Regex;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::display_none;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
use crate::Error;
use crate::util::serde::{HandlerExtSerde, StdinDeserializable};
pub mod action;
pub mod spec;
@@ -132,96 +130,107 @@ pub enum MatchError {
ListUniquenessViolation,
}
#[command(rename = "config-spec", cli_only, blocking, display(display_none))]
pub fn verify_spec(#[arg] path: PathBuf) -> Result<(), Error> {
let mut file = std::fs::File::open(&path)?;
let format = match path.extension().and_then(|s| s.to_str()) {
Some("yaml") | Some("yml") => IoFormat::Yaml,
Some("json") => IoFormat::Json,
Some("toml") => IoFormat::Toml,
Some("cbor") => IoFormat::Cbor,
_ => {
return Err(Error::new(
eyre!("Unknown file format. Expected one of yaml, json, toml, cbor."),
crate::ErrorKind::Deserialization,
));
}
};
let _: ConfigSpec = format.from_reader(&mut file)?;
Ok(())
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ConfigParams {
pub id: PackageId,
}
#[command(subcommands(get, set))]
pub fn config(#[arg] id: PackageId) -> Result<PackageId, Error> {
Ok(id)
// #[command(subcommands(get, set))]
pub fn config() -> ParentHandler<ConfigParams> {
ParentHandler::new()
.subcommand(
"get",
from_fn_async(get)
.with_inherited(|ConfigParams { id }, _| id)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
.subcommand("set", set().with_inherited(|ConfigParams { id }, _| id))
}
#[command(display(display_serializable))]
#[instrument(skip_all)]
pub async fn get(
#[context] ctx: RpcContext,
#[parent_data] id: PackageId,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<ConfigRes, Error> {
let db = ctx.db.peek().await;
let manifest = db
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?
.as_manifest();
let action = manifest
.as_config()
.de()?
.ok_or_else(|| Error::new(eyre!("{} has no config", id), crate::ErrorKind::NotFound))?;
let volumes = manifest.as_volumes().de()?;
let version = manifest.as_version().de()?;
action.get(&ctx, &id, &version, &volumes).await
pub async fn get(ctx: RpcContext, _: Empty, id: PackageId) -> Result<ConfigRes, Error> {
ctx.services
.get(&id)
.await
.as_ref()
.or_not_found(lazy_format!("Manager for {id}"))?
.get_config()
.await
}
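In the new handler tree, the package id parsed by `config` is threaded to its subcommands through `with_inherited`, so a leaf handler receives it as a third argument instead of re-parsing it. A rough sketch of another leaf under the same pattern (the handler name and its registration are hypothetical):

async fn example(ctx: RpcContext, _: Empty, id: PackageId) -> Result<(), Error> {
    // `id` arrives via .with_inherited(|ConfigParams { id }, _| id) on the parent handler.
    tracing::info!("would operate on {}", id);
    Ok(())
}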
#[command(
subcommands(self(set_impl(async, context(RpcContext))), set_dry),
display(display_none),
metadata(sync_db = true)
)]
#[instrument(skip_all)]
pub fn set(
#[parent_data] id: PackageId,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
#[arg(long = "timeout")] timeout: Option<crate::util::serde::Duration>,
#[arg(stdin, parse(parse_stdin_deserializable))] config: Option<Config>,
) -> Result<(PackageId, Option<Config>, Option<Duration>), Error> {
Ok((id, config, timeout.map(|d| *d)))
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
pub struct SetParams {
#[arg(long = "timeout")]
pub timeout: Option<crate::util::serde::Duration>,
#[command(flatten)]
pub config: StdinDeserializable<Option<Config>>,
}
#[command(rename = "dry", display(display_serializable))]
// TODO Dr Why isn't this used?
// #[command(
// subcommands(self(set_impl(async, context(RpcContext))), set_dry),
// display(display_none),
// metadata(sync_db = true)
// )]
#[instrument(skip_all)]
pub fn set() -> ParentHandler<SetParams, PackageId> {
ParentHandler::new()
.root_handler(
from_fn_async(set_impl)
.with_metadata("sync_db", Value::Bool(true))
.with_inherited(|set_params, id| (id, set_params))
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"dry",
from_fn_async(set_dry)
.with_inherited(|set_params, id| (id, set_params))
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
}
pub async fn set_dry(
#[context] ctx: RpcContext,
#[parent_data] (id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
ctx: RpcContext,
_: Empty,
(
id,
SetParams {
timeout,
config: StdinDeserializable(config),
},
): (PackageId, SetParams),
) -> Result<BTreeMap<PackageId, String>, Error> {
let breakages = BTreeMap::new();
let overrides = Default::default();
let configure_context = ConfigureContext {
breakages,
timeout,
timeout: timeout.map(|t| *t),
config,
dry_run: true,
overrides,
};
let breakages = configure(&ctx, &id, configure_context).await?;
Ok(breakages)
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {id}"),
ErrorKind::Unknown,
)
})?
.configure(configure_context)
.await
}
#[derive(Default)]
pub struct ConfigureContext {
pub breakages: BTreeMap<PackageId, String>,
pub timeout: Option<Duration>,
@@ -233,55 +242,36 @@ pub struct ConfigureContext {
#[instrument(skip_all)]
pub async fn set_impl(
ctx: RpcContext,
(id, config, timeout): (PackageId, Option<Config>, Option<Duration>),
_: Empty,
(
id,
SetParams {
timeout,
config: StdinDeserializable(config),
},
): (PackageId, SetParams),
) -> Result<(), Error> {
let breakages = BTreeMap::new();
let overrides = Default::default();
let configure_context = ConfigureContext {
breakages,
timeout,
timeout: timeout.map(|t| *t),
config,
dry_run: false,
overrides,
};
configure(&ctx, &id, configure_context).await?;
Ok(())
}
#[instrument(skip_all)]
pub async fn configure(
ctx: &RpcContext,
id: &PackageId,
configure_context: ConfigureContext,
) -> Result<BTreeMap<PackageId, String>, Error> {
let db = ctx.db.peek().await;
let package = db
.as_package_data()
.as_idx(id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?;
let version = package.as_manifest().as_version().de()?;
ctx.managers
.get(&(id.clone(), version.clone()))
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {id:?} and {version:?}"),
eyre!("There is no manager running for {id}"),
ErrorKind::Unknown,
)
})?
.configure(configure_context)
.await
.await?;
Ok(())
}
macro_rules! not_found {
($x:expr) => {
crate::Error::new(
color_eyre::eyre::eyre!("Could not find {} at {}:{}", $x, module_path!(), line!()),
crate::ErrorKind::Incoherent,
)
};
}
pub(crate) use not_found;
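The `not_found!` macro added here wraps a value into an `ErrorKind::Incoherent` error stamped with `module_path!()` and `line!()`, so the message points at the call site. A small illustrative use (the function itself is hypothetical):

fn require<'a>(entry: Option<&'a str>, what: &str) -> Result<&'a str, crate::Error> {
    // The resulting error reads: "Could not find <what> at <module>:<line>".
    entry.ok_or_else(|| not_found!(what))
}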

View File

@@ -14,6 +14,7 @@ use imbl_value::InternedString;
use indexmap::{IndexMap, IndexSet};
use itertools::Itertools;
use jsonpath_lib::Compiled as CompiledJsonPath;
use models::ProcedureName;
use patch_db::value::{Number, Value};
use rand::{CryptoRng, Rng};
use regex::Regex;
@@ -23,6 +24,7 @@ use sqlx::PgPool;
use super::util::{self, CharSet, NumRange, UniqueBy, STATIC_NULL};
use super::{Config, MatchError, NoMatchWithPath, TimeoutError, TypeOf};
use crate::config::action::ConfigRes;
use crate::config::ConfigurationError;
use crate::context::RpcContext;
use crate::net::interface::InterfaceId;
@@ -1773,27 +1775,27 @@ impl ConfigPointer {
Ok(self.select(&Value::Object(cfg.clone())))
} else {
let id = &self.package_id;
let db = ctx.db.peek().await;
let manifest = db.as_package_data().as_idx(id).map(|pde| pde.as_manifest());
let cfg_actions = manifest.and_then(|m| m.as_config().transpose_ref());
if let (Some(manifest), Some(cfg_actions)) = (manifest, cfg_actions) {
let cfg_res = cfg_actions
.de()
.map_err(|e| ConfigurationError::SystemError(e))?
.get(
ctx,
&self.package_id,
&manifest
.as_version()
.de()
.map_err(|e| ConfigurationError::SystemError(e))?,
&manifest
.as_volumes()
.de()
.map_err(|e| ConfigurationError::SystemError(e))?,
)
let version = ctx
.db
.peek()
.await
.as_package_data()
.as_idx(id)
.and_then(|pde| pde.as_installed())
.map(|i| i.as_manifest().as_version().de())
.transpose()
.map_err(ConfigurationError::SystemError)?;
if let Some(version) = version {
let cfg_res = ctx
.services
.get(&id)
.await
.map_err(|e| ConfigurationError::SystemError(e))?;
.as_ref()
.or_not_found(lazy_format!("Manager for {id}@{version}"))
.map_err(|e| ConfigurationError::SystemError(e))?
.get_config()
.await
.map_err(ConfigurationError::SystemError)?;
if let Some(cfg) = cfg_res.config {
Ok(self.select(&Value::Object(cfg)))
} else {

View File

@@ -1,43 +1,37 @@
use std::fs::File;
use std::io::BufReader;
use std::net::Ipv4Addr;
use std::path::{Path, PathBuf};
use std::sync::Arc;
use clap::ArgMatches;
use color_eyre::eyre::eyre;
use cookie_store::{CookieStore, RawCookie};
use josekit::jwk::Jwk;
use once_cell::sync::OnceCell;
use reqwest::Proxy;
use reqwest_cookie_store::CookieStoreMutex;
use rpc_toolkit::reqwest::{Client, Url};
use rpc_toolkit::url::Host;
use rpc_toolkit::Context;
use serde::Deserialize;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{call_remote_http, CallRemote, Context};
use tokio::net::TcpStream;
use tokio::runtime::Runtime;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::context::config::{local_config_path, ClientConfig};
use crate::core::rpc_continuations::RequestGuid;
use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
use crate::util::config::{load_config_from_paths, local_config_path};
use crate::ResultExt;
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct CliContextConfig {
pub host: Option<Url>,
#[serde(deserialize_with = "crate::util::serde::deserialize_from_str_opt")]
#[serde(default)]
pub proxy: Option<Url>,
pub cookie_path: Option<PathBuf>,
}
use crate::prelude::*;
#[derive(Debug)]
pub struct CliContextSeed {
pub runtime: OnceCell<Runtime>,
pub base_url: Url,
pub rpc_url: Url,
pub client: Client,
pub cookie_store: Arc<CookieStoreMutex>,
pub cookie_path: PathBuf,
pub developer_key_path: PathBuf,
pub developer_key: OnceCell<ed25519_dalek::SigningKey>,
}
impl Drop for CliContextSeed {
fn drop(&mut self) {
@@ -60,42 +54,22 @@ impl Drop for CliContextSeed {
}
}
const DEFAULT_HOST: Host<&'static str> = Host::Ipv4(Ipv4Addr::new(127, 0, 0, 1));
const DEFAULT_PORT: u16 = 5959;
#[derive(Debug, Clone)]
pub struct CliContext(Arc<CliContextSeed>);
impl CliContext {
/// BLOCKING
#[instrument(skip_all)]
pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
let local_config_path = local_config_path();
let base: CliContextConfig = load_config_from_paths(
matches
.values_of("config")
.into_iter()
.flatten()
.map(|p| Path::new(p))
.chain(local_config_path.as_deref().into_iter())
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)?;
let mut url = if let Some(host) = matches.value_of("host") {
host.parse()?
} else if let Some(host) = base.host {
pub fn init(config: ClientConfig) -> Result<Self, Error> {
let mut url = if let Some(host) = config.host {
host
} else {
"http://localhost".parse()?
};
let proxy = if let Some(proxy) = matches.value_of("proxy") {
Some(proxy.parse()?)
} else {
base.proxy
};
let cookie_path = base.cookie_path.unwrap_or_else(|| {
local_config_path
let cookie_path = config.cookie_path.unwrap_or_else(|| {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(crate::util::config::CONFIG_PATH))
.unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join(".cookies.json")
@@ -120,6 +94,7 @@ impl CliContext {
}));
Ok(CliContext(Arc::new(CliContextSeed {
runtime: OnceCell::new(),
base_url: url.clone(),
rpc_url: {
url.path_segments_mut()
@@ -131,7 +106,7 @@ impl CliContext {
},
client: {
let mut builder = Client::builder().cookie_provider(cookie_store.clone());
if let Some(proxy) = proxy {
if let Some(proxy) = config.proxy {
builder =
builder.proxy(Proxy::all(proxy).with_kind(crate::ErrorKind::ParseUrl)?)
}
@@ -139,8 +114,90 @@ impl CliContext {
},
cookie_store,
cookie_path,
developer_key_path: config.developer_key_path.unwrap_or_else(|| {
local_config_path()
.as_deref()
.unwrap_or_else(|| Path::new(super::config::CONFIG_PATH))
.parent()
.unwrap_or(Path::new("/"))
.join("developer.key.pem")
}),
developer_key: OnceCell::new(),
})))
}
/// BLOCKING
#[instrument(skip_all)]
pub fn developer_key(&self) -> Result<&ed25519_dalek::SigningKey, Error> {
self.developer_key.get_or_try_init(|| {
if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-cli init` before running this command."), crate::ErrorKind::Uninitialized));
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(&self.developer_key_path)?,
)
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
)
})?;
Ok(secret.into())
})
}
pub async fn ws_continuation(
&self,
guid: RequestGuid,
) -> Result<WebSocketStream<MaybeTlsStream<TcpStream>>, Error> {
let mut url = self.base_url.clone();
let ws_scheme = match url.scheme() {
"https" => "wss",
"http" => "ws",
_ => {
return Err(Error::new(
eyre!("Cannot parse scheme from base URL"),
crate::ErrorKind::ParseUrl,
)
.into())
}
};
url.set_scheme(ws_scheme)
.map_err(|_| Error::new(eyre!("Cannot set URL scheme"), crate::ErrorKind::ParseUrl))?;
url.path_segments_mut()
.map_err(|_| eyre!("Url cannot be base"))
.with_kind(crate::ErrorKind::ParseUrl)?
.push("ws")
.push("rpc")
.push(guid.as_ref());
let (stream, _) =
// base_url is "http://127.0.0.1/", with a trailing slash, so we don't put a leading slash in this path:
tokio_tungstenite::connect_async(url).await.with_kind(ErrorKind::Network)?;
Ok(stream)
}
pub async fn rest_continuation(
&self,
guid: RequestGuid,
body: reqwest::Body,
headers: reqwest::header::HeaderMap,
) -> Result<reqwest::Response, Error> {
let mut url = self.base_url.clone();
url.path_segments_mut()
.map_err(|_| eyre!("Url cannot be base"))
.with_kind(crate::ErrorKind::ParseUrl)?
.push("rest")
.push("rpc")
.push(guid.as_ref());
self.client
.post(url)
.headers(headers)
.body(body)
.send()
.await
.with_kind(ErrorKind::Network)
}
}
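`ws_continuation` and `rest_continuation` let the CLI follow up on an RPC response that handed back a continuation GUID, upgrading to a WebSocket or streaming a request body against the same base URL. A rough sketch of the WebSocket side, assuming the server returned the GUID as a string in `guid_str`:

async fn follow_ws(ctx: &CliContext, guid_str: &str) -> Result<(), Error> {
    let guid = RequestGuid::from(guid_str)
        .ok_or_else(|| Error::new(eyre!("invalid request guid"), ErrorKind::ParseUrl))?;
    let _stream = ctx.ws_continuation(guid).await?;
    // ... read/write frames on the tungstenite stream here ...
    Ok(())
}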
impl AsRef<Jwk> for CliContext {
fn as_ref(&self) -> &Jwk {
@@ -154,32 +211,33 @@ impl std::ops::Deref for CliContext {
}
}
impl Context for CliContext {
fn protocol(&self) -> &str {
self.0.base_url.scheme()
}
fn host(&self) -> Host<&str> {
self.0.base_url.host().unwrap_or(DEFAULT_HOST)
}
fn port(&self) -> u16 {
self.0.base_url.port().unwrap_or(DEFAULT_PORT)
}
fn path(&self) -> &str {
self.0.rpc_url.path()
}
fn url(&self) -> Url {
self.0.rpc_url.clone()
}
fn client(&self) -> &Client {
&self.0.client
fn runtime(&self) -> tokio::runtime::Handle {
self.runtime
.get_or_init(|| {
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
})
.handle()
.clone()
}
}
/// Note: the system previously broke when the proxy setting was empty, so an empty proxy value must remain accepted.
#[async_trait::async_trait]
impl CallRemote for CliContext {
async fn call_remote(&self, method: &str, params: Value) -> Result<Value, RpcError> {
call_remote_http(&self.client, self.rpc_url.clone(), method, params).await
}
}
#[test]
fn test_cli_proxy_empty() {
serde_yaml::from_str::<CliContextConfig>(
"
bind_rpc:
",
)
.unwrap();
fn test() {
let ctx = CliContext::init(ClientConfig::default()).unwrap();
ctx.runtime().block_on(async {
reqwest::Client::new()
.get("http://example.com")
.send()
.await
.unwrap();
});
}

View File

@@ -0,0 +1,175 @@
use std::fs::File;
use std::net::SocketAddr;
use std::path::{Path, PathBuf};
use clap::Parser;
use patch_db::json_ptr::JsonPointer;
use reqwest::Url;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use sqlx::postgres::PgConnectOptions;
use sqlx::PgPool;
use crate::account::AccountInfo;
use crate::db::model::Database;
use crate::disk::OsPartitionInfo;
use crate::init::init_postgres;
use crate::prelude::*;
use crate::util::serde::IoFormat;
pub const DEVICE_CONFIG_PATH: &str = "/media/embassy/config/config.yaml"; // "/media/startos/config/config.yaml";
pub const CONFIG_PATH: &str = "/etc/startos/config.yaml";
pub const CONFIG_PATH_LOCAL: &str = ".startos/config.yaml";
pub fn local_config_path() -> Option<PathBuf> {
if let Ok(home) = std::env::var("HOME") {
Some(Path::new(&home).join(CONFIG_PATH_LOCAL))
} else {
None
}
}
pub trait ContextConfig: DeserializeOwned + Default {
fn next(&mut self) -> Option<PathBuf>;
fn merge_with(&mut self, other: Self);
fn from_path(path: impl AsRef<Path>) -> Result<Self, Error> {
let format: IoFormat = path
.as_ref()
.extension()
.and_then(|s| s.to_str())
.map(|f| f.parse())
.transpose()?
.unwrap_or_default();
format.from_reader(File::open(path)?)
}
fn load_path_rec(&mut self, path: Option<impl AsRef<Path>>) -> Result<(), Error> {
if let Some(path) = path.filter(|p| p.as_ref().exists()) {
let mut other = Self::from_path(path)?;
let path = other.next();
self.merge_with(other);
self.load_path_rec(path)?;
}
Ok(())
}
}
#[derive(Debug, Default, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ClientConfig {
#[arg(short = 'c', long = "config")]
pub config: Option<PathBuf>,
#[arg(short = 'h', long = "host")]
pub host: Option<Url>,
#[arg(short = 'p', long = "proxy")]
pub proxy: Option<Url>,
#[arg(long = "cookie-path")]
pub cookie_path: Option<PathBuf>,
#[arg(long = "developer-key-path")]
pub developer_key_path: Option<PathBuf>,
}
impl ContextConfig for ClientConfig {
fn next(&mut self) -> Option<PathBuf> {
self.config.take()
}
fn merge_with(&mut self, other: Self) {
self.host = self.host.take().or(other.host);
self.proxy = self.proxy.take().or(other.proxy);
self.cookie_path = self.cookie_path.take().or(other.cookie_path);
}
}
impl ClientConfig {
pub fn load(mut self) -> Result<Self, Error> {
let path = self.next();
self.load_path_rec(path)?;
self.load_path_rec(local_config_path())?;
self.load_path_rec(Some(CONFIG_PATH))?;
Ok(self)
}
}
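`ClientConfig::load` consumes the struct produced by clap and fills in any still-unset fields from, in order, the file passed via `--config`, the per-user `~/.startos/config.yaml`, and the system-wide `/etc/startos/config.yaml`; because `merge_with` keeps values already present, earlier sources always win. A minimal sketch of resolving the effective CLI configuration (assumes `ClientConfig` and `Error` are in scope):

use clap::Parser;

fn effective_client_config() -> Result<ClientConfig, Error> {
    // Flags win over --config, which wins over the local and system config files.
    ClientConfig::parse_from(std::env::args_os()).load()
}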
#[derive(Debug, Clone, Default, Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ServerConfig {
#[arg(short = 'c', long = "config")]
pub config: Option<PathBuf>,
#[arg(long = "wifi-interface")]
pub wifi_interface: Option<String>,
#[arg(long = "ethernet-interface")]
pub ethernet_interface: Option<String>,
#[arg(skip)]
pub os_partitions: Option<OsPartitionInfo>,
#[arg(long = "bind-rpc")]
pub bind_rpc: Option<SocketAddr>,
#[arg(long = "tor-control")]
pub tor_control: Option<SocketAddr>,
#[arg(long = "tor-socks")]
pub tor_socks: Option<SocketAddr>,
#[arg(long = "dns-bind")]
pub dns_bind: Option<Vec<SocketAddr>>,
#[arg(long = "revision-cache-size")]
pub revision_cache_size: Option<usize>,
#[arg(short = 'd', long = "datadir")]
pub datadir: Option<PathBuf>,
#[arg(long = "disable-encryption")]
pub disable_encryption: Option<bool>,
}
impl ContextConfig for ServerConfig {
fn next(&mut self) -> Option<PathBuf> {
self.config.take()
}
fn merge_with(&mut self, other: Self) {
self.wifi_interface = self.wifi_interface.take().or(other.wifi_interface);
self.ethernet_interface = self.ethernet_interface.take().or(other.ethernet_interface);
self.os_partitions = self.os_partitions.take().or(other.os_partitions);
self.bind_rpc = self.bind_rpc.take().or(other.bind_rpc);
self.tor_control = self.tor_control.take().or(other.tor_control);
self.tor_socks = self.tor_socks.take().or(other.tor_socks);
self.dns_bind = self.dns_bind.take().or(other.dns_bind);
self.revision_cache_size = self
.revision_cache_size
.take()
.or(other.revision_cache_size);
self.datadir = self.datadir.take().or(other.datadir);
self.disable_encryption = self.disable_encryption.take().or(other.disable_encryption);
}
}
impl ServerConfig {
pub fn load(mut self) -> Result<Self, Error> {
let path = self.next();
self.load_path_rec(path)?;
self.load_path_rec(Some(DEVICE_CONFIG_PATH))?;
self.load_path_rec(Some(CONFIG_PATH))?;
Ok(self)
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
pub async fn db(&self, account: &AccountInfo) -> Result<PatchDb, Error> {
let db_path = self.datadir().join("main").join("embassy.db");
let db = PatchDb::open(&db_path)
.await
.with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?;
if !db.exists(&<JsonPointer>::default()).await {
db.put(&<JsonPointer>::default(), &Database::init(account))
.await?;
}
Ok(db)
}
#[instrument(skip_all)]
pub async fn secret_store(&self) -> Result<PgPool, Error> {
init_postgres(self.datadir()).await?;
let secret_store =
PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.await?;
sqlx::migrate!()
.run(&secret_store)
.await
.with_kind(crate::ErrorKind::Database)?;
Ok(secret_store)
}
}

View File

@@ -1,47 +1,16 @@
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::Arc;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::Context;
use serde::Deserialize;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::context::config::ServerConfig;
use crate::shutdown::Shutdown;
use crate::util::config::load_config_from_paths;
use crate::Error;
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct DiagnosticContextConfig {
pub datadir: Option<PathBuf>,
}
impl DiagnosticContextConfig {
#[instrument(skip_all)]
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(
crate::util::config::DEVICE_CONFIG_PATH,
)))
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
}
pub struct DiagnosticContextSeed {
pub datadir: PathBuf,
pub shutdown: Sender<Option<Shutdown>>,
@@ -53,20 +22,18 @@ pub struct DiagnosticContextSeed {
pub struct DiagnosticContext(Arc<DiagnosticContextSeed>);
impl DiagnosticContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + 'static>(
path: Option<P>,
pub fn init(
config: &ServerConfig,
disk_guid: Option<Arc<String>>,
error: Error,
) -> Result<Self, Error> {
tracing::error!("Error: {}: Starting diagnostic UI", error);
tracing::debug!("{:?}", error);
let cfg = DiagnosticContextConfig::load(path).await?;
let (shutdown, _) = tokio::sync::broadcast::channel(1);
Ok(Self(Arc::new(DiagnosticContextSeed {
datadir: cfg.datadir().to_owned(),
datadir: config.datadir().to_owned(),
shutdown,
disk_guid,
error: Arc::new(error.into()),

View File

@@ -1,35 +1,13 @@
use std::ops::Deref;
use std::path::Path;
use std::sync::Arc;
use rpc_toolkit::Context;
use serde::Deserialize;
use tokio::sync::broadcast::Sender;
use tracing::instrument;
use crate::net::utils::find_eth_iface;
use crate::util::config::load_config_from_paths;
use crate::Error;
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct InstallContextConfig {}
impl InstallContextConfig {
#[instrument(skip_all)]
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
}
pub struct InstallContextSeed {
pub ethernet_interface: String,
pub shutdown: Sender<()>,
@@ -39,8 +17,7 @@ pub struct InstallContextSeed {
pub struct InstallContext(Arc<InstallContextSeed>);
impl InstallContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
let _cfg = InstallContextConfig::load(path.as_ref().map(|p| p.as_ref().to_owned())).await?;
pub async fn init() -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
Ok(Self(Arc::new(InstallContextSeed {
ethernet_interface: find_eth_iface().await?,

View File

@@ -1,44 +1,12 @@
pub mod cli;
pub mod config;
pub mod diagnostic;
pub mod install;
pub mod rpc;
pub mod sdk;
pub mod setup;
pub use cli::CliContext;
pub use diagnostic::DiagnosticContext;
pub use install::InstallContext;
pub use rpc::RpcContext;
pub use sdk::SdkContext;
pub use setup::SetupContext;
impl From<CliContext> for () {
fn from(_: CliContext) -> Self {
()
}
}
impl From<DiagnosticContext> for () {
fn from(_: DiagnosticContext) -> Self {
()
}
}
impl From<RpcContext> for () {
fn from(_: RpcContext) -> Self {
()
}
}
impl From<SdkContext> for () {
fn from(_: SdkContext) -> Self {
()
}
}
impl From<SetupContext> for () {
fn from(_: SetupContext) -> Self {
()
}
}
impl From<InstallContext> for () {
fn from(_: InstallContext) -> Self {
()
}
}

View File

@@ -1,19 +1,16 @@
use std::collections::BTreeMap;
use std::net::{Ipv4Addr, SocketAddr, SocketAddrV4};
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use helpers::to_tmp_path;
use imbl_value::InternedString;
use josekit::jwk::Jwk;
use patch_db::json_ptr::JsonPointer;
use patch_db::PatchDb;
use reqwest::{Client, Proxy, Url};
use reqwest::{Client, Proxy};
use rpc_toolkit::Context;
use serde::Deserialize;
use sqlx::postgres::PgConnectOptions;
use sqlx::PgPool;
use tokio::sync::{broadcast, oneshot, Mutex, RwLock};
use tokio::time::Instant;
@@ -21,87 +18,26 @@ use tracing::instrument;
use super::setup::CURRENT_SECRET;
use crate::account::AccountInfo;
use crate::core::rpc_continuations::{RequestGuid, RestHandler, RpcContinuation};
use crate::db::model::{CurrentDependents, Database, PackageDataEntryMatchModelRef};
use crate::context::config::ServerConfig;
use crate::core::rpc_continuations::{RequestGuid, RestHandler, RpcContinuation, WebSocketHandler};
use crate::db::model::CurrentDependents;
use crate::db::prelude::PatchDbExt;
use crate::dependencies::compute_dependency_config_errs;
use crate::disk::OsPartitionInfo;
use crate::init::{check_time_is_synchronized, init_postgres};
use crate::install::cleanup::{cleanup_failed, uninstall};
use crate::manager::ManagerMap;
use crate::init::check_time_is_synchronized;
use crate::lxc::{LxcContainer, LxcManager};
use crate::middleware::auth::HashSessionToken;
use crate::net::net_controller::NetController;
use crate::net::ssl::{root_ca_start_time, SslManager};
use crate::net::utils::find_eth_iface;
use crate::net::wifi::WpaCli;
use crate::notifications::NotificationManager;
use crate::prelude::*;
use crate::service::ServiceMap;
use crate::shutdown::Shutdown;
use crate::status::MainStatus;
use crate::system::get_mem_info;
use crate::util::config::load_config_from_paths;
use crate::util::lshw::{lshw, LshwDevice};
use crate::{Error, ErrorKind, ResultExt};
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct RpcContextConfig {
pub wifi_interface: Option<String>,
pub ethernet_interface: String,
pub os_partitions: OsPartitionInfo,
pub migration_batch_rows: Option<usize>,
pub migration_prefetch_rows: Option<usize>,
pub bind_rpc: Option<SocketAddr>,
pub tor_control: Option<SocketAddr>,
pub tor_socks: Option<SocketAddr>,
pub dns_bind: Option<Vec<SocketAddr>>,
pub revision_cache_size: Option<usize>,
pub datadir: Option<PathBuf>,
pub log_server: Option<Url>,
}
impl RpcContextConfig {
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(
crate::util::config::DEVICE_CONFIG_PATH,
)))
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
pub async fn db(&self, account: &AccountInfo) -> Result<PatchDb, Error> {
let db_path = self.datadir().join("main").join("embassy.db");
let db = PatchDb::open(&db_path)
.await
.with_ctx(|_| (crate::ErrorKind::Filesystem, db_path.display().to_string()))?;
if !db.exists(&<JsonPointer>::default()).await {
db.put(&<JsonPointer>::default(), &Database::init(account))
.await?;
}
Ok(db)
}
#[instrument(skip_all)]
pub async fn secret_store(&self) -> Result<PgPool, Error> {
init_postgres(self.datadir()).await?;
let secret_store =
PgPool::connect_with(PgConnectOptions::new().database("secrets").username("root"))
.await?;
sqlx::migrate!()
.run(&secret_store)
.await
.with_kind(crate::ErrorKind::Database)?;
Ok(secret_store)
}
}
pub struct RpcContextSeed {
is_closed: AtomicBool,
@@ -114,11 +50,12 @@ pub struct RpcContextSeed {
pub secret_store: PgPool,
pub account: RwLock<AccountInfo>,
pub net_controller: Arc<NetController>,
pub managers: ManagerMap,
pub services: ServiceMap,
pub metrics_cache: RwLock<Option<crate::system::Metrics>>,
pub shutdown: broadcast::Sender<Option<Shutdown>>,
pub tor_socks: SocketAddr,
pub notification_manager: NotificationManager,
pub lxc_manager: Arc<LxcManager>,
pub open_authed_websockets: Mutex<BTreeMap<HashSessionToken, Vec<oneshot::Sender<()>>>>,
pub rpc_stream_continuations: Mutex<BTreeMap<RequestGuid, RpcContinuation>>,
pub wifi_manager: Option<Arc<RwLock<WpaCli>>>,
@@ -126,6 +63,11 @@ pub struct RpcContextSeed {
pub client: Client,
pub hardware: Hardware,
pub start_time: Instant,
pub dev: Dev,
}
pub struct Dev {
pub lxc: Mutex<BTreeMap<InternedString, LxcContainer>>,
}
pub struct Hardware {
@@ -137,28 +79,26 @@ pub struct Hardware {
pub struct RpcContext(Arc<RpcContextSeed>);
impl RpcContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + Sync + 'static>(
cfg_path: Option<P>,
disk_guid: Arc<String>,
) -> Result<Self, Error> {
let base = RpcContextConfig::load(cfg_path).await?;
pub async fn init(config: &ServerConfig, disk_guid: Arc<String>) -> Result<Self, Error> {
tracing::info!("Loaded Config");
let tor_proxy = base.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
let tor_proxy = config.tor_socks.unwrap_or(SocketAddr::V4(SocketAddrV4::new(
Ipv4Addr::new(127, 0, 0, 1),
9050,
)));
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let secret_store = base.secret_store().await?;
let secret_store = config.secret_store().await?;
tracing::info!("Opened Pg DB");
let account = AccountInfo::load(&secret_store).await?;
let db = base.db(&account).await?;
let db = config.db(&account).await?;
tracing::info!("Opened PatchDB");
let net_controller = Arc::new(
NetController::init(
base.tor_control
config
.tor_control
.unwrap_or(SocketAddr::from(([127, 0, 0, 1], 9051))),
tor_proxy,
base.dns_bind
config
.dns_bind
.as_deref()
.unwrap_or(&[SocketAddr::from(([127, 0, 0, 1], 53))]),
SslManager::new(&account, root_ca_start_time().await?)?,
@@ -168,7 +108,7 @@ impl RpcContext {
.await?,
);
tracing::info!("Initialized Net Controller");
let managers = ManagerMap::default();
let services = ServiceMap::default();
let metrics_cache = RwLock::<Option<crate::system::Metrics>>::new(None);
let notification_manager = NotificationManager::new(secret_store.clone());
tracing::info!("Initialized Notification Manager");
@@ -190,24 +130,35 @@ impl RpcContext {
let seed = Arc::new(RpcContextSeed {
is_closed: AtomicBool::new(false),
datadir: base.datadir().to_path_buf(),
os_partitions: base.os_partitions,
wifi_interface: base.wifi_interface.clone(),
ethernet_interface: base.ethernet_interface,
datadir: config.datadir().to_path_buf(),
os_partitions: config.os_partitions.clone().ok_or_else(|| {
Error::new(
eyre!("OS Partition Information Missing"),
ErrorKind::Filesystem,
)
})?,
wifi_interface: config.wifi_interface.clone(),
ethernet_interface: if let Some(eth) = config.ethernet_interface.clone() {
eth
} else {
find_eth_iface().await?
},
disk_guid,
db,
secret_store,
account: RwLock::new(account),
net_controller,
managers,
services,
metrics_cache,
shutdown,
tor_socks: tor_proxy,
notification_manager,
lxc_manager: Arc::new(LxcManager::new()),
open_authed_websockets: Mutex::new(BTreeMap::new()),
rpc_stream_continuations: Mutex::new(BTreeMap::new()),
wifi_manager: base
wifi_manager: config
.wifi_interface
.clone()
.map(|i| Arc::new(RwLock::new(WpaCli::init(i)))),
current_secret: Arc::new(
Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).map_err(|e| {
@@ -231,6 +182,9 @@ impl RpcContext {
.with_kind(crate::ErrorKind::ParseUrl)?,
hardware: Hardware { devices, ram },
start_time: Instant::now(),
dev: Dev {
lxc: Mutex::new(BTreeMap::new()),
},
});
let res = Self(seed.clone());
@@ -241,7 +195,7 @@ impl RpcContext {
#[instrument(skip_all)]
pub async fn shutdown(self) -> Result<(), Error> {
self.managers.empty().await?;
self.services.shutdown_all().await?;
self.secret_store.close().await;
self.is_closed.store(true, Ordering::SeqCst);
tracing::info!("RPC Context is shutdown");
@@ -293,70 +247,11 @@ impl RpcContext {
})
.await?;
let peek = self.db.peek().await;
for (package_id, package) in peek.as_package_data().as_entries()?.into_iter() {
let action = match package.as_match() {
PackageDataEntryMatchModelRef::Installing(_)
| PackageDataEntryMatchModelRef::Restoring(_)
| PackageDataEntryMatchModelRef::Updating(_) => {
cleanup_failed(self, &package_id).await
}
PackageDataEntryMatchModelRef::Removing(_) => {
uninstall(
self,
self.secret_store.acquire().await?.as_mut(),
&package_id,
)
.await
}
PackageDataEntryMatchModelRef::Installed(m) => {
let version = m.as_manifest().as_version().clone().de()?;
let volumes = m.as_manifest().as_volumes().de()?;
for (volume_id, volume_info) in &*volumes {
let tmp_path = to_tmp_path(volume_info.path_for(
&self.datadir,
&package_id,
&version,
volume_id,
))
.with_kind(ErrorKind::Filesystem)?;
if tokio::fs::metadata(&tmp_path).await.is_ok() {
tokio::fs::remove_dir_all(&tmp_path).await?;
}
}
Ok(())
}
_ => continue,
};
if let Err(e) = action {
tracing::error!("Failed to clean up package {}: {}", package_id, e);
tracing::debug!("{:?}", e);
}
}
let peek = self
.db
.mutate(|v| {
for (_, pde) in v.as_package_data_mut().as_entries_mut()? {
let status = pde
.expect_as_installed_mut()?
.as_installed_mut()
.as_status_mut()
.as_main_mut();
let running = status.clone().de()?.running();
status.ser(&if running {
MainStatus::Starting
} else {
MainStatus::Stopped
})?;
}
Ok(v.clone())
})
.await?;
self.managers.init(self.clone(), peek.clone()).await?;
self.services.init(&self).await?;
tracing::info!("Initialized Package Managers");
let mut all_dependency_config_errs = BTreeMap::new();
let peek = self.db.peek().await;
for (package_id, package) in peek.as_package_data().as_entries()?.into_iter() {
let package = package.clone();
if let Some(current_dependencies) = package
@@ -419,33 +314,30 @@ impl RpcContext {
.insert(guid, handler);
}
pub async fn get_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
pub async fn get_ws_continuation_handler(
&self,
guid: &RequestGuid,
) -> Option<WebSocketHandler> {
let mut continuations = self.rpc_stream_continuations.lock().await;
if let Some(cont) = continuations.remove(guid) {
cont.into_handler().await
} else {
None
}
}
pub async fn get_ws_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let continuations = self.rpc_stream_continuations.lock().await;
if matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) {
drop(continuations);
self.get_continuation_handler(guid).await
} else {
None
if !matches!(continuations.get(guid), Some(RpcContinuation::WebSocket(_))) {
return None;
}
let Some(RpcContinuation::WebSocket(x)) = continuations.remove(guid) else {
return None;
};
x.get().await
}
pub async fn get_rest_continuation_handler(&self, guid: &RequestGuid) -> Option<RestHandler> {
let continuations = self.rpc_stream_continuations.lock().await;
if matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) {
drop(continuations);
self.get_continuation_handler(guid).await
} else {
None
let mut continuations: tokio::sync::MutexGuard<'_, BTreeMap<RequestGuid, RpcContinuation>> =
self.rpc_stream_continuations.lock().await;
if !matches!(continuations.get(guid), Some(RpcContinuation::Rest(_))) {
return None;
}
let Some(RpcContinuation::Rest(x)) = continuations.remove(guid) else {
return None;
};
x.get().await
}
}
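On the server side both lookups above are one-shot: a matching GUID removes the continuation from the map and hands back its handler, or `None` if it has already timed out. A sketch of consuming a REST continuation with the new axum types (the surrounding routing is assumed):

async fn serve_rest_continuation(
    ctx: &RpcContext,
    guid: RequestGuid,
    req: axum::extract::Request,
) -> Option<Result<axum::response::Response, Error>> {
    let handler = ctx.get_rest_continuation_handler(&guid).await?;
    Some(handler(req).await)
}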
impl AsRef<Jwk> for RpcContext {

View File

@@ -8,13 +8,6 @@ use serde::Deserialize;
use tracing::instrument;
use crate::prelude::*;
use crate::util::config::{load_config_from_paths, local_config_path};
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct SdkContextConfig {
pub developer_key_path: Option<PathBuf>,
}
#[derive(Debug)]
pub struct SdkContextSeed {
@@ -26,7 +19,7 @@ pub struct SdkContext(Arc<SdkContextSeed>);
impl SdkContext {
/// BLOCKING
#[instrument(skip_all)]
pub fn init(matches: &ArgMatches) -> Result<Self, crate::Error> {
pub fn init(config: ) -> Result<Self, crate::Error> {
let local_config_path = local_config_path();
let base: SdkContextConfig = load_config_from_paths(
matches
@@ -48,24 +41,7 @@ impl SdkContext {
}),
})))
}
/// BLOCKING
#[instrument(skip_all)]
pub fn developer_key(&self) -> Result<ed25519_dalek::SigningKey, Error> {
if !self.developer_key_path.exists() {
return Err(Error::new(eyre!("Developer Key does not exist! Please run `start-sdk init` before running this command."), crate::ErrorKind::Uninitialized));
}
let pair = <ed25519::KeypairBytes as ed25519::pkcs8::DecodePrivateKey>::from_pkcs8_pem(
&std::fs::read_to_string(&self.developer_key_path)?,
)
.with_kind(crate::ErrorKind::Pem)?;
let secret = ed25519_dalek::SecretKey::try_from(&pair.secret_key[..]).map_err(|_| {
Error::new(
eyre!("pkcs8 key is of incorrect length"),
ErrorKind::OpenSsl,
)
})?;
Ok(secret.into())
}
}
impl std::ops::Deref for SdkContext {
type Target = SdkContextSeed;

View File

@@ -1,5 +1,5 @@
use std::ops::Deref;
use std::path::{Path, PathBuf};
use std::path::PathBuf;
use std::sync::Arc;
use josekit::jwk::Jwk;
@@ -15,12 +15,12 @@ use tokio::sync::RwLock;
use tracing::instrument;
use crate::account::AccountInfo;
use crate::context::config::ServerConfig;
use crate::db::model::Database;
use crate::disk::OsPartitionInfo;
use crate::init::init_postgres;
use crate::prelude::*;
use crate::setup::SetupStatus;
use crate::util::config::load_config_from_paths;
use crate::{Error, ResultExt};
lazy_static::lazy_static! {
pub static ref CURRENT_SECRET: Jwk = Jwk::generate_ec_key(josekit::jwk::alg::ec::EcCurve::P256).unwrap_or_else(|e| {
@@ -38,45 +38,9 @@ pub struct SetupResult {
pub root_ca: String,
}
#[derive(Debug, Default, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct SetupContextConfig {
pub os_partitions: OsPartitionInfo,
pub migration_batch_rows: Option<usize>,
pub migration_prefetch_rows: Option<usize>,
pub datadir: Option<PathBuf>,
#[serde(default)]
pub disable_encryption: bool,
}
impl SetupContextConfig {
#[instrument(skip_all)]
pub async fn load<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
tokio::task::spawn_blocking(move || {
load_config_from_paths(
path.as_ref()
.into_iter()
.map(|p| p.as_ref())
.chain(std::iter::once(Path::new(
crate::util::config::DEVICE_CONFIG_PATH,
)))
.chain(std::iter::once(Path::new(crate::util::config::CONFIG_PATH))),
)
})
.await
.unwrap()
}
pub fn datadir(&self) -> &Path {
self.datadir
.as_deref()
.unwrap_or_else(|| Path::new("/embassy-data"))
}
}
pub struct SetupContextSeed {
pub config: ServerConfig,
pub os_partitions: OsPartitionInfo,
pub config_path: Option<PathBuf>,
pub migration_batch_rows: usize,
pub migration_prefetch_rows: usize,
pub disable_encryption: bool,
pub shutdown: Sender<()>,
pub datadir: PathBuf,
@@ -96,16 +60,18 @@ impl AsRef<Jwk> for SetupContextSeed {
pub struct SetupContext(Arc<SetupContextSeed>);
impl SetupContext {
#[instrument(skip_all)]
pub async fn init<P: AsRef<Path> + Send + 'static>(path: Option<P>) -> Result<Self, Error> {
let cfg = SetupContextConfig::load(path.as_ref().map(|p| p.as_ref().to_owned())).await?;
pub fn init(config: &ServerConfig) -> Result<Self, Error> {
let (shutdown, _) = tokio::sync::broadcast::channel(1);
let datadir = cfg.datadir().to_owned();
let datadir = config.datadir().to_owned();
Ok(Self(Arc::new(SetupContextSeed {
os_partitions: cfg.os_partitions,
config_path: path.as_ref().map(|p| p.as_ref().to_owned()),
migration_batch_rows: cfg.migration_batch_rows.unwrap_or(25000),
migration_prefetch_rows: cfg.migration_prefetch_rows.unwrap_or(100_000),
disable_encryption: cfg.disable_encryption,
config: config.clone(),
os_partitions: config.os_partitions.clone().ok_or_else(|| {
Error::new(
eyre!("missing required configuration: `os-partitions`"),
ErrorKind::NotFound,
)
})?,
disable_encryption: config.disable_encryption.unwrap_or(false),
shutdown,
datadir,
selected_v2_drive: RwLock::new(None),

View File

@@ -1,89 +1,52 @@
use clap::Parser;
use color_eyre::eyre::eyre;
use models::PackageId;
use rpc_toolkit::command;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::status::MainStatus;
use crate::util::display_none;
use crate::Error;
#[command(display(display_none), metadata(sync_db = true))]
#[instrument(skip_all)]
pub async fn start(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
let peek = ctx.db.peek().await;
let version = peek
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?
.as_manifest()
.as_version()
.de()?;
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ControlParams {
pub id: PackageId,
}
ctx.managers
.get(&(id, version))
#[instrument(skip_all)]
pub async fn start(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
.await
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.as_ref()
.or_not_found(lazy_format!("Manager for {id}"))?
.start()
.await;
Ok(())
}
#[command(display(display_none), metadata(sync_db = true))]
pub async fn stop(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<MainStatus, Error> {
let peek = ctx.db.peek().await;
let version = peek
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.as_installed()
.or_not_found(&id)?
.as_manifest()
.as_version()
.de()?;
let last_statuts = ctx
.db
.mutate(|v| {
v.as_package_data_mut()
.as_idx_mut(&id)
.and_then(|x| x.as_installed_mut())
.ok_or_else(|| Error::new(eyre!("{} is not installed", id), ErrorKind::NotFound))?
.as_status_mut()
.as_main_mut()
.replace(&MainStatus::Stopping)
})
.await?;
ctx.managers
.get(&(id, version))
pub async fn stop(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
// TODO: why did this return last_status before?
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.stop()
.await;
Ok(last_statuts)
Ok(())
}
#[command(display(display_none), metadata(sync_db = true))]
pub async fn restart(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Result<(), Error> {
let peek = ctx.db.peek().await;
let version = peek
.as_package_data()
.as_idx(&id)
.or_not_found(&id)?
.expect_as_installed()?
.as_manifest()
.as_version()
.de()?;
ctx.managers
.get(&(id, version))
pub async fn restart(ctx: RpcContext, ControlParams { id }: ControlParams) -> Result<(), Error> {
ctx.services
.get(&id)
.await
.as_ref()
.ok_or_else(|| Error::new(eyre!("Manager not found"), crate::ErrorKind::InvalidRequest))?
.restart()
.await;

View File

@@ -1,27 +1,21 @@
use std::sync::Arc;
use std::time::Duration;
use axum::extract::ws::WebSocket;
use axum::extract::Request;
use axum::response::Response;
use futures::future::BoxFuture;
use futures::FutureExt;
use helpers::TimedResource;
use hyper::upgrade::Upgraded;
use hyper::{Body, Error as HyperError, Request, Response};
use rand::RngCore;
use tokio::task::JoinError;
use tokio_tungstenite::WebSocketStream;
use imbl_value::InternedString;
use crate::{Error, ResultExt};
#[allow(unused_imports)]
use crate::prelude::*;
use crate::util::new_guid;
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, serde::Serialize, serde::Deserialize)]
pub struct RequestGuid<T: AsRef<str> = String>(Arc<T>);
pub struct RequestGuid(InternedString);
impl RequestGuid {
pub fn new() -> Self {
let mut buf = [0; 40];
rand::thread_rng().fill_bytes(&mut buf);
RequestGuid(Arc::new(base32::encode(
base32::Alphabet::RFC4648 { padding: false },
&buf,
)))
Self(new_guid())
}
pub fn from(r: &str) -> Option<RequestGuid> {
@@ -33,9 +27,15 @@ impl RequestGuid {
return None;
}
}
Some(RequestGuid(Arc::new(r.to_owned())))
Some(RequestGuid(InternedString::intern(r)))
}
}
impl AsRef<str> for RequestGuid {
fn as_ref(&self) -> &str {
self.0.as_ref()
}
}
#[test]
fn parse_guid() {
println!(
@@ -44,22 +44,16 @@ fn parse_guid() {
)
}
impl<T: AsRef<str>> std::fmt::Display for RequestGuid<T> {
impl std::fmt::Display for RequestGuid {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
(&*self.0).as_ref().fmt(f)
self.0.fmt(f)
}
}
pub type RestHandler = Box<
dyn FnOnce(Request<Body>) -> BoxFuture<'static, Result<Response<Body>, crate::Error>> + Send,
>;
pub type RestHandler =
Box<dyn FnOnce(Request) -> BoxFuture<'static, Result<Response, crate::Error>> + Send>;
pub type WebSocketHandler = Box<
dyn FnOnce(
BoxFuture<'static, Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
) -> BoxFuture<'static, Result<(), Error>>
+ Send,
>;
pub type WebSocketHandler = Box<dyn FnOnce(WebSocket) -> BoxFuture<'static, ()> + Send>;
pub enum RpcContinuation {
Rest(TimedResource<RestHandler>),
@@ -78,39 +72,4 @@ impl RpcContinuation {
RpcContinuation::WebSocket(a) => a.is_timed_out(),
}
}
pub async fn into_handler(self) -> Option<RestHandler> {
match self {
RpcContinuation::Rest(handler) => handler.get().await,
RpcContinuation::WebSocket(handler) => {
if let Some(handler) = handler.get().await {
Some(Box::new(
|req: Request<Body>| -> BoxFuture<'static, Result<Response<Body>, Error>> {
async move {
let (parts, body) = req.into_parts();
let req = Request::from_parts(parts, body);
let (res, ws_fut) = hyper_ws_listener::create_ws(req)
.with_kind(crate::ErrorKind::Network)?;
if let Some(ws_fut) = ws_fut {
tokio::task::spawn(async move {
match handler(ws_fut.boxed()).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
}
});
}
Ok(res)
}
.boxed()
},
))
} else {
None
}
}
}
}
}
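With the hyper/tungstenite plumbing gone, a `WebSocketHandler` is now just a boxed closure over an already-upgraded axum `WebSocket`, and `into_handler` is no longer needed because the WebSocket variant is unwrapped directly where it is consumed. A minimal, purely illustrative sketch of building one:

use futures::FutureExt;

fn example_handler() -> WebSocketHandler {
    Box::new(|mut ws: axum::extract::ws::WebSocket| {
        async move {
            // Close the socket immediately; a real handler would read and write frames.
            let _ = ws.send(axum::extract::ws::Message::Close(None)).await;
        }
        .boxed()
    })
}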

View File

@@ -1,61 +1,52 @@
pub mod model;
pub mod package;
pub mod prelude;
use std::future::Future;
use std::path::PathBuf;
use std::sync::Arc;
use futures::{FutureExt, SinkExt, StreamExt};
use axum::extract::ws::{self, WebSocket};
use axum::extract::WebSocketUpgrade;
use axum::response::Response;
use clap::Parser;
use futures::{FutureExt, StreamExt};
use http::header::COOKIE;
use http::HeaderMap;
use patch_db::json_ptr::JsonPointer;
use patch_db::{Dump, Revision};
use rpc_toolkit::command;
use rpc_toolkit::hyper::upgrade::Upgraded;
use rpc_toolkit::hyper::{Body, Error as HyperError, Request, Response};
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn_async, CallRemote, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio::sync::oneshot;
use tokio::task::JoinError;
use tokio_tungstenite::tungstenite::protocol::frame::coding::CloseCode;
use tokio_tungstenite::tungstenite::protocol::CloseFrame;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::WebSocketStream;
use tracing::instrument;
use crate::context::{CliContext, RpcContext};
use crate::middleware::auth::{HasValidSession, HashSessionToken};
use crate::prelude::*;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
use crate::util::serde::{apply_expr, HandlerExtSerde};
#[instrument(skip_all)]
async fn ws_handler<
WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
>(
async fn ws_handler(
ctx: RpcContext,
session: Option<(HasValidSession, HashSessionToken)>,
ws_fut: WSFut,
mut stream: WebSocket,
) -> Result<(), Error> {
let (dump, sub) = ctx.db.dump_and_sub().await;
let mut stream = ws_fut
.await
.with_kind(ErrorKind::Network)?
.with_kind(ErrorKind::Unknown)?;
if let Some((session, token)) = session {
let kill = subscribe_to_session_kill(&ctx, token).await;
send_dump(session, &mut stream, dump).await?;
send_dump(session.clone(), &mut stream, dump).await?;
deal_with_messages(session, kill, sub, stream).await?;
} else {
stream
.close(Some(CloseFrame {
code: CloseCode::Error,
.send(ws::Message::Close(Some(ws::CloseFrame {
code: ws::close_code::ERROR,
reason: "UNAUTHORIZED".into(),
}))
})))
.await
.with_kind(ErrorKind::Network)?;
drop(stream);
}
Ok(())
@@ -80,7 +71,7 @@ async fn deal_with_messages(
_has_valid_authentication: HasValidSession,
mut kill: oneshot::Receiver<()>,
mut sub: patch_db::Subscriber,
mut stream: WebSocketStream<Upgraded>,
mut stream: WebSocket,
) -> Result<(), Error> {
let mut timer = tokio::time::interval(tokio::time::Duration::from_secs(5));
@@ -89,18 +80,18 @@ async fn deal_with_messages(
_ = (&mut kill).fuse() => {
tracing::info!("Closing WebSocket: Reason: Session Terminated");
stream
.close(Some(CloseFrame {
code: CloseCode::Error,
reason: "UNAUTHORIZED".into(),
}))
.await
.with_kind(ErrorKind::Network)?;
.send(ws::Message::Close(Some(ws::CloseFrame {
code: ws::close_code::ERROR,
reason: "UNAUTHORIZED".into(),
}))).await
.with_kind(ErrorKind::Network)?;
drop(stream);
return Ok(())
}
new_rev = sub.recv().fuse() => {
let rev = new_rev.expect("UNREACHABLE: patch-db is dropped");
stream
.send(Message::Text(serde_json::to_string(&rev).with_kind(ErrorKind::Serialization)?))
.send(ws::Message::Text(serde_json::to_string(&rev).with_kind(ErrorKind::Serialization)?))
.await
.with_kind(ErrorKind::Network)?;
}
@@ -117,7 +108,7 @@ async fn deal_with_messages(
// Send a periodic ping as a keep-alive so the UI's WebSocket connection stays open.
_ = timer.tick().fuse() => {
stream
.send(Message::Ping(vec![]))
.send(ws::Message::Ping(vec![]))
.await
.with_kind(crate::ErrorKind::Network)?;
}
@@ -127,11 +118,11 @@ async fn deal_with_messages(
async fn send_dump(
_has_valid_authentication: HasValidSession,
stream: &mut WebSocketStream<Upgraded>,
stream: &mut WebSocket,
dump: Dump,
) -> Result<(), Error> {
stream
.send(Message::Text(
.send(ws::Message::Text(
serde_json::to_string(&dump).with_kind(ErrorKind::Serialization)?,
))
.await
@@ -139,11 +130,14 @@ async fn send_dump(
Ok(())
}
pub async fn subscribe(ctx: RpcContext, req: Request<Body>) -> Result<Response<Body>, Error> {
let (parts, body) = req.into_parts();
pub async fn subscribe(
ctx: RpcContext,
headers: HeaderMap,
ws: WebSocketUpgrade,
) -> Result<Response, Error> {
let session = match async {
let token = HashSessionToken::from_request_parts(&parts)?;
let session = HasValidSession::from_request_parts(&parts, &ctx).await?;
let token = HashSessionToken::from_header(headers.get(COOKIE))?;
let session = HasValidSession::from_header(headers.get(COOKIE), &ctx).await?;
Ok::<_, Error>((session, token))
}
.await
@@ -157,26 +151,24 @@ pub async fn subscribe(ctx: RpcContext, req: Request<Body>) -> Result<Response<B
None
}
};
let req = Request::from_parts(parts, body);
let (res, ws_fut) = hyper_ws_listener::create_ws(req).with_kind(ErrorKind::Network)?;
if let Some(ws_fut) = ws_fut {
tokio::task::spawn(async move {
match ws_handler(ctx, session, ws_fut).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
Ok(ws.on_upgrade(|ws| async move {
match ws_handler(ctx, session, ws).await {
Ok(()) => (),
Err(e) => {
tracing::error!("WebSocket Closed: {}", e);
tracing::debug!("{:?}", e);
}
});
}
Ok(res)
}
}))
}
#[command(subcommands(dump, put, apply))]
pub fn db() -> Result<(), RpcError> {
Ok(())
pub fn db() -> ParentHandler {
ParentHandler::new()
.subcommand("dump", from_fn_async(cli_dump).with_display_serializable())
.subcommand("dump", from_fn_async(dump).no_cli())
.subcommand("put", put())
.subcommand("apply", from_fn_async(cli_apply).no_display())
.subcommand("apply", from_fn_async(apply).no_cli())
}
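The conversion above — from the `#[command(subcommands(...))]` macro to a `ParentHandler` builder — is the pattern this PR applies to every router. A minimal sketch of the builder style, with hypothetical `status`/`version` handlers and only the rpc_toolkit calls that appear in this diff:

```rust
use rpc_toolkit::{from_fn, from_fn_async, AnyContext, Empty, HandlerExt, ParentHandler};

use crate::context::{CliContext, RpcContext};
use crate::prelude::*;
use crate::util::serde::HandlerExtSerde;

// Hypothetical handlers, for illustration only.
async fn status(_ctx: RpcContext, _: Empty) -> Result<String, Error> {
    Ok("ok".into())
}
fn version(_ctx: AnyContext) -> Result<String, Error> {
    Ok(env!("CARGO_PKG_VERSION").into())
}

pub fn example() -> ParentHandler {
    ParentHandler::new()
        // async handler: output is serialized for display and exposed to the remote CLI
        .subcommand(
            "status",
            from_fn_async(status)
                .with_display_serializable()
                .with_remote_cli::<CliContext>(),
        )
        // sync handler: RPC only, hidden from the CLI
        .subcommand("version", from_fn(version).no_cli())
}
```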
#[derive(Deserialize, Serialize)]
@@ -187,96 +179,36 @@ pub enum RevisionsRes {
}
#[instrument(skip_all)]
async fn cli_dump(
ctx: CliContext,
_format: Option<IoFormat>,
path: Option<PathBuf>,
) -> Result<Dump, RpcError> {
async fn cli_dump(ctx: CliContext, DumpParams { path }: DumpParams) -> Result<Dump, RpcError> {
let dump = if let Some(path) = path {
PatchDb::open(path).await?.dump().await
} else {
rpc_toolkit::command_helpers::call_remote(
ctx,
"db.dump",
serde_json::json!({}),
std::marker::PhantomData::<Dump>,
)
.await?
.result?
from_value::<Dump>(ctx.call_remote("db.dump", imbl_value::json!({})).await?)?
};
Ok(dump)
}
#[command(
custom_cli(cli_dump(async, context(CliContext))),
display(display_serializable)
)]
pub async fn dump(
#[context] ctx: RpcContext,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
#[allow(unused_variables)]
#[arg]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct DumpParams {
path: Option<PathBuf>,
) -> Result<Dump, Error> {
}
// #[command(
// custom_cli(cli_dump(async, context(CliContext))),
// display(display_serializable)
// )]
pub async fn dump(ctx: RpcContext, _: DumpParams) -> Result<Dump, Error> {
Ok(ctx.db.dump().await)
}
fn apply_expr(input: jaq_core::Val, expr: &str) -> Result<jaq_core::Val, Error> {
let (expr, errs) = jaq_core::parse::parse(expr, jaq_core::parse::main());
let Some(expr) = expr else {
return Err(Error::new(
eyre!("Failed to parse expression: {:?}", errs),
crate::ErrorKind::InvalidRequest,
));
};
let mut errs = Vec::new();
let mut defs = jaq_core::Definitions::core();
for def in jaq_std::std() {
defs.insert(def, &mut errs);
}
let filter = defs.finish(expr, Vec::new(), &mut errs);
if !errs.is_empty() {
return Err(Error::new(
eyre!("Failed to compile expression: {:?}", errs),
crate::ErrorKind::InvalidRequest,
));
};
let inputs = jaq_core::RcIter::new(std::iter::empty());
let mut res_iter = filter.run(jaq_core::Ctx::new([], &inputs), input);
let Some(res) = res_iter
.next()
.transpose()
.map_err(|e| eyre!("{e}"))
.with_kind(crate::ErrorKind::Deserialization)?
else {
return Err(Error::new(
eyre!("expr returned no results"),
crate::ErrorKind::InvalidRequest,
));
};
if res_iter.next().is_some() {
return Err(Error::new(
eyre!("expr returned too many results"),
crate::ErrorKind::InvalidRequest,
));
}
Ok(res)
}
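A tiny usage sketch for `apply_expr` — purely illustrative, relying only on `jaq_core::Val::Null` and ordinary jq semantics — to show the single-result contract enforced above:

```rust
#[test]
fn apply_expr_enforces_a_single_result() {
    // A constant jq program run against a null input yields exactly one value.
    assert!(apply_expr(jaq_core::Val::Null, "1 + 2").is_ok());
    // Zero results (`empty`) or several results (`1, 2`) are rejected as InvalidRequest.
    assert!(apply_expr(jaq_core::Val::Null, "empty").is_err());
    assert!(apply_expr(jaq_core::Val::Null, "1, 2").is_err());
}
```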
#[instrument(skip_all)]
async fn cli_apply(ctx: CliContext, expr: String, path: Option<PathBuf>) -> Result<(), RpcError> {
async fn cli_apply(
ctx: CliContext,
ApplyParams { expr, path }: ApplyParams,
) -> Result<(), RpcError> {
if let Some(path) = path {
PatchDb::open(path)
.await?
@@ -301,30 +233,22 @@ async fn cli_apply(ctx: CliContext, expr: String, path: Option<PathBuf>) -> Resu
})
.await?;
} else {
rpc_toolkit::command_helpers::call_remote(
ctx,
"db.apply",
serde_json::json!({ "expr": expr }),
std::marker::PhantomData::<()>,
)
.await?
.result?;
ctx.call_remote("db.apply", imbl_value::json!({ "expr": expr }))
.await?;
}
Ok(())
}
#[command(
custom_cli(cli_apply(async, context(CliContext))),
display(display_none)
)]
pub async fn apply(
#[context] ctx: RpcContext,
#[arg] expr: String,
#[allow(unused_variables)]
#[arg]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ApplyParams {
expr: String,
path: Option<PathBuf>,
) -> Result<(), Error> {
}
pub async fn apply(ctx: RpcContext, ApplyParams { expr, .. }: ApplyParams) -> Result<(), Error> {
ctx.db
.mutate(|db| {
let res = apply_expr(
@@ -346,21 +270,25 @@ pub async fn apply(
.await
}
#[command(subcommands(ui))]
pub fn put() -> Result<(), RpcError> {
Ok(())
pub fn put() -> ParentHandler {
ParentHandler::new().subcommand(
"ui",
from_fn_async(ui)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UiParams {
pointer: JsonPointer,
value: Value,
}
#[command(display(display_serializable))]
// #[command(display(display_serializable))]
#[instrument(skip_all)]
pub async fn ui(
#[context] ctx: RpcContext,
#[arg] pointer: JsonPointer,
#[arg] value: Value,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<(), Error> {
pub async fn ui(ctx: RpcContext, UiParams { pointer, value, .. }: UiParams) -> Result<(), Error> {
let ptr = "/ui"
.parse::<JsonPointer>()
.with_kind(ErrorKind::Database)?


@@ -1,6 +1,5 @@
use std::collections::{BTreeMap, BTreeSet};
use std::net::{Ipv4Addr, Ipv6Addr};
use std::sync::Arc;
use chrono::{DateTime, Utc};
use emver::VersionRange;
@@ -8,8 +7,9 @@ use imbl_value::InternedString;
use ipnet::{Ipv4Net, Ipv6Net};
use isocountry::CountryCode;
use itertools::Itertools;
use models::{DataUrl, HealthCheckId, InterfaceId};
use models::{DataUrl, HealthCheckId, InterfaceId, PackageId};
use openssl::hash::MessageDigest;
use patch_db::json_ptr::JsonPointer;
use patch_db::{HasModel, Value};
use reqwest::Url;
use serde::{Deserialize, Serialize};
@@ -17,12 +17,12 @@ use ssh_key::public::Ed25519PublicKey;
use crate::account::AccountInfo;
use crate::config::spec::PackagePointerSpec;
use crate::install::progress::InstallProgress;
use crate::net::utils::{get_iface_ipv4_addr, get_iface_ipv6_addr};
use crate::prelude::*;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::progress::FullProgress;
use crate::s9pk::manifest::Manifest;
use crate::status::Status;
use crate::util::cpupower::{Governor};
use crate::util::cpupower::Governor;
use crate::util::Version;
use crate::version::{Current, VersionT};
use crate::{ARCH, PLATFORM};
@@ -225,14 +225,14 @@ impl Map for AllPackageData {
pub struct StaticFiles {
license: String,
instructions: String,
icon: String,
icon: DataUrl<'static>,
}
impl StaticFiles {
pub fn local(id: &PackageId, version: &Version, icon_type: &str) -> Self {
pub fn local(id: &PackageId, version: &Version, icon: DataUrl<'static>) -> Self {
StaticFiles {
license: format!("/public/package-data/{}/{}/LICENSE.md", id, version),
instructions: format!("/public/package-data/{}/{}/INSTRUCTIONS.md", id, version),
icon: format!("/public/package-data/{}/{}/icon.{}", id, version, icon_type),
icon,
}
}
}
@@ -243,7 +243,7 @@ impl StaticFiles {
pub struct PackageDataEntryInstalling {
pub static_files: StaticFiles,
pub manifest: Manifest,
pub install_progress: Arc<InstallProgress>,
pub install_progress: FullProgress,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
@@ -253,7 +253,7 @@ pub struct PackageDataEntryUpdating {
pub static_files: StaticFiles,
pub manifest: Manifest,
pub installed: InstalledPackageInfo,
pub install_progress: Arc<InstallProgress>,
pub install_progress: FullProgress,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
@@ -262,7 +262,7 @@ pub struct PackageDataEntryUpdating {
pub struct PackageDataEntryRestoring {
pub static_files: StaticFiles,
pub manifest: Manifest,
pub install_progress: Arc<InstallProgress>,
pub install_progress: FullProgress,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
@@ -422,7 +422,7 @@ impl Model<PackageDataEntry> {
PackageDataEntryMatchModelMut::Error(_) => None,
}
}
pub fn as_install_progress(&self) -> Option<&Model<Arc<InstallProgress>>> {
pub fn as_install_progress(&self) -> Option<&Model<FullProgress>> {
match self.as_match() {
PackageDataEntryMatchModelRef::Installing(a) => Some(a.as_install_progress()),
PackageDataEntryMatchModelRef::Updating(a) => Some(a.as_install_progress()),
@@ -432,7 +432,7 @@ impl Model<PackageDataEntry> {
PackageDataEntryMatchModelRef::Error(_) => None,
}
}
pub fn as_install_progress_mut(&mut self) -> Option<&mut Model<Arc<InstallProgress>>> {
pub fn as_install_progress_mut(&mut self) -> Option<&mut Model<FullProgress>> {
match self.as_match_mut() {
PackageDataEntryMatchModelMut::Installing(a) => Some(a.as_install_progress_mut()),
PackageDataEntryMatchModelMut::Updating(a) => Some(a.as_install_progress_mut()),
@@ -459,6 +459,29 @@ pub struct InstalledPackageInfo {
pub current_dependents: CurrentDependents,
pub current_dependencies: CurrentDependencies,
pub interface_addresses: InterfaceAddressMap,
pub store: Value,
pub store_exposed_ui: Vec<ExposedUI>,
pub store_exposed_dependents: Vec<JsonPointer>,
}
#[derive(Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct ExposedDependent {
path: String,
title: String,
description: Option<String>,
masked: Option<bool>,
copyable: Option<bool>,
qr: Option<bool>,
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[model = "Model<Self>"]
pub struct ExposedUI {
path: Vec<JsonPointer>,
title: String,
description: Option<String>,
masked: Option<bool>,
copyable: Option<bool>,
qr: Option<bool>,
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
@@ -478,7 +501,6 @@ impl Map for CurrentDependents {
type Key = PackageId;
type Value = CurrentDependencyInfo;
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
pub struct CurrentDependencies(pub BTreeMap<PackageId, CurrentDependencyInfo>);
impl CurrentDependencies {
@@ -514,7 +536,7 @@ pub struct CurrentDependencyInfo {
pub health_checks: BTreeSet<HealthCheckId>,
}
#[derive(Debug, Deserialize, Serialize)]
#[derive(Debug, Default, Deserialize, Serialize)]
pub struct InterfaceAddressMap(pub BTreeMap<InterfaceId, InterfaceAddresses>);
impl Map for InterfaceAddressMap {
type Key = InterfaceId;


@@ -1,22 +0,0 @@
use models::Version;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
pub fn get_packages(db: Peeked) -> Result<Vec<(PackageId, Version)>, Error> {
Ok(db
.as_package_data()
.keys()?
.into_iter()
.flat_map(|package_id| {
let version = db
.as_package_data()
.as_idx(&package_id)?
.as_manifest()
.as_version()
.de()
.ok()?;
Some((package_id, version))
})
.collect())
}


@@ -2,8 +2,9 @@ use std::collections::BTreeMap;
use std::marker::PhantomData;
use std::panic::UnwindSafe;
pub use imbl_value::Value;
use patch_db::value::InternedString;
pub use patch_db::{HasModel, PatchDb, Value};
pub use patch_db::{HasModel, PatchDb};
use serde::de::DeserializeOwned;
use serde::Serialize;


@@ -1,31 +1,26 @@
use std::collections::BTreeMap;
use std::time::Duration;
use color_eyre::eyre::eyre;
use clap::Parser;
use emver::VersionRange;
use models::OptionExt;
use rand::SeedableRng;
use rpc_toolkit::command;
use models::{OptionExt, PackageId};
use rpc_toolkit::{command, from_fn_async, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::config::action::ConfigRes;
use crate::config::spec::PackagePointerSpec;
use crate::config::{not_found, Config, ConfigSpec, ConfigureContext};
use crate::context::RpcContext;
use crate::config::{Config, ConfigSpec, ConfigureContext};
use crate::context::{CliContext, RpcContext};
use crate::db::model::{CurrentDependencies, Database};
use crate::prelude::*;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::s9pk::manifest::Manifest;
use crate::status::DependencyConfigErrors;
use crate::util::serde::display_serializable;
use crate::util::{display_none, Version};
use crate::volume::Volumes;
use crate::util::serde::HandlerExtSerde;
use crate::util::Version;
use crate::Error;
#[command(subcommands(configure))]
pub fn dependency() -> Result<(), Error> {
Ok(())
pub fn dependency() -> ParentHandler {
ParentHandler::new().subcommand("configure", configure())
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel)]
@@ -58,77 +53,41 @@ pub struct DepInfo {
pub requirement: DependencyRequirement,
pub description: Option<String>,
#[serde(default)]
pub config: Option<DependencyConfig>,
pub config: Option<Value>, // TODO: remove
}
#[derive(Clone, Debug, Deserialize, Serialize, HasModel)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[model = "Model<Self>"]
pub struct DependencyConfig {
check: PackageProcedure,
auto_configure: PackageProcedure,
#[command(rename_all = "kebab-case")]
pub struct ConfigureParams {
#[arg(name = "dependent-id")]
dependent_id: PackageId,
#[arg(name = "dependency-id")]
dependency_id: PackageId,
}
impl DependencyConfig {
pub async fn check(
&self,
ctx: &RpcContext,
dependent_id: &PackageId,
dependent_version: &Version,
dependent_volumes: &Volumes,
dependency_id: &PackageId,
dependency_config: &Config,
) -> Result<Result<NoOutput, String>, Error> {
Ok(self
.check
.sandboxed(
ctx,
dependent_id,
dependent_version,
dependent_volumes,
Some(dependency_config),
None,
ProcedureName::Check(dependency_id.clone()),
)
.await?
.map_err(|(_, e)| e))
}
pub async fn auto_configure(
&self,
ctx: &RpcContext,
dependent_id: &PackageId,
dependent_version: &Version,
dependent_volumes: &Volumes,
old: &Config,
) -> Result<Config, Error> {
self.auto_configure
.sandboxed(
ctx,
dependent_id,
dependent_version,
dependent_volumes,
Some(old),
None,
ProcedureName::AutoConfig(dependent_id.clone()),
)
.await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))
}
}
#[command(
subcommands(self(configure_impl(async)), configure_dry),
display(display_none)
)]
pub async fn configure(
#[arg(rename = "dependent-id")] dependent_id: PackageId,
#[arg(rename = "dependency-id")] dependency_id: PackageId,
) -> Result<(PackageId, PackageId), Error> {
Ok((dependent_id, dependency_id))
pub fn configure() -> ParentHandler<ConfigureParams> {
ParentHandler::new()
.root_handler(
from_fn_async(configure_impl)
.with_inherited(|params, _| params)
.no_cli(),
)
.subcommand(
"dry",
from_fn_async(configure_dry)
.with_inherited(|params, _| params)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
}
pub async fn configure_impl(
ctx: RpcContext,
(pkg_id, dep_id): (PackageId, PackageId),
_: Empty,
ConfigureParams {
dependent_id,
dependency_id,
}: ConfigureParams,
) -> Result<(), Error> {
let breakages = BTreeMap::new();
let overrides = Default::default();
@@ -136,7 +95,7 @@ pub async fn configure_impl(
old_config: _,
new_config,
spec: _,
} = configure_logic(ctx.clone(), (pkg_id, dep_id.clone())).await?;
} = configure_logic(ctx.clone(), (dependent_id, dependency_id.clone())).await?;
let configure_context = ConfigureContext {
breakages,
@@ -145,7 +104,18 @@ pub async fn configure_impl(
dry_run: false,
overrides,
};
crate::config::configure(&ctx, &dep_id, configure_context).await?;
ctx.services
.get(&dependency_id)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {dependency_id}"),
ErrorKind::Unknown,
)
})?
.configure(configure_context)
.await?;
Ok(())
}
@@ -157,90 +127,95 @@ pub struct ConfigDryRes {
pub spec: ConfigSpec,
}
#[command(rename = "dry", display(display_serializable))]
// #[command(rename = "dry", display(display_serializable))]
#[instrument(skip_all)]
pub async fn configure_dry(
#[context] ctx: RpcContext,
#[parent_data] (pkg_id, dependency_id): (PackageId, PackageId),
ctx: RpcContext,
_: Empty,
ConfigureParams {
dependent_id,
dependency_id,
}: ConfigureParams,
) -> Result<ConfigDryRes, Error> {
configure_logic(ctx, (pkg_id, dependency_id)).await
configure_logic(ctx, (dependent_id, dependency_id)).await
}
pub async fn configure_logic(
ctx: RpcContext,
(pkg_id, dependency_id): (PackageId, PackageId),
(dependent_id, dependency_id): (PackageId, PackageId),
) -> Result<ConfigDryRes, Error> {
let db = ctx.db.peek().await;
let pkg = db
.as_package_data()
.as_idx(&pkg_id)
.or_not_found(&pkg_id)?
.as_installed()
.or_not_found(&pkg_id)?;
let pkg_version = pkg.as_manifest().as_version().de()?;
let pkg_volumes = pkg.as_manifest().as_volumes().de()?;
let dependency = db
.as_package_data()
.as_idx(&dependency_id)
.or_not_found(&dependency_id)?
.as_installed()
.or_not_found(&dependency_id)?;
let dependency_config_action = dependency
.as_manifest()
.as_config()
.de()?
.ok_or_else(|| not_found!("Manifest Config"))?;
let dependency_version = dependency.as_manifest().as_version().de()?;
let dependency_volumes = dependency.as_manifest().as_volumes().de()?;
let dependency = pkg
.as_manifest()
.as_dependencies()
.as_idx(&dependency_id)
.or_not_found(&dependency_id)?;
// let db = ctx.db.peek().await;
// let pkg = db
// .as_package_data()
// .as_idx(&pkg_id)
// .or_not_found(&pkg_id)?
// .as_installed()
// .or_not_found(&pkg_id)?;
// let pkg_version = pkg.as_manifest().as_version().de()?;
// let pkg_volumes = pkg.as_manifest().as_volumes().de()?;
// let dependency = db
// .as_package_data()
// .as_idx(&dependency_id)
// .or_not_found(&dependency_id)?
// .as_installed()
// .or_not_found(&dependency_id)?;
// let dependency_config_action = dependency
// .as_manifest()
// .as_config()
// .de()?
// .ok_or_else(|| not_found!("Manifest Config"))?;
// let dependency_version = dependency.as_manifest().as_version().de()?;
// let dependency_volumes = dependency.as_manifest().as_volumes().de()?;
// let dependency = pkg
// .as_manifest()
// .as_dependencies()
// .as_idx(&dependency_id)
// .or_not_found(&dependency_id)?;
let ConfigRes {
config: maybe_config,
spec,
} = dependency_config_action
.get(
&ctx,
&dependency_id,
&dependency_version,
&dependency_volumes,
)
.await?;
// let ConfigRes {
// config: maybe_config,
// spec,
// } = dependency_config_action
// .get(
// &ctx,
// &dependency_id,
// &dependency_version,
// &dependency_volumes,
// )
// .await?;
let old_config = if let Some(config) = maybe_config {
config
} else {
spec.gen(
&mut rand::rngs::StdRng::from_entropy(),
&Some(Duration::new(10, 0)),
)?
};
// let old_config = if let Some(config) = maybe_config {
// config
// } else {
// spec.gen(
// &mut rand::rngs::StdRng::from_entropy(),
// &Some(Duration::new(10, 0)),
// )?
// };
let new_config = dependency
.as_config()
.de()?
.ok_or_else(|| not_found!("Config"))?
.auto_configure
.sandboxed(
&ctx,
&pkg_id,
&pkg_version,
&pkg_volumes,
Some(&old_config),
None,
ProcedureName::AutoConfig(dependency_id.clone()),
)
.await?
.map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))?;
// let new_config = dependency
// .as_config()
// .de()?
// .ok_or_else(|| not_found!("Config"))?
// .auto_configure
// .sandboxed(
// &ctx,
// &pkg_id,
// &pkg_version,
// &pkg_volumes,
// Some(&old_config),
// None,
// ProcedureName::AutoConfig(dependency_id.clone()),
// )
// .await?
// .map_err(|e| Error::new(eyre!("{}", e.1), crate::ErrorKind::AutoConfigure))?;
Ok(ConfigDryRes {
old_config,
new_config,
spec,
})
// Ok(ConfigDryRes {
// old_config,
// new_config,
// spec,
// })
todo!()
}
#[instrument(skip_all)]
@@ -324,36 +299,7 @@ pub async fn compute_dependency_config_errs(
.or_not_found(dependency)?
.config
{
if let Err(error) = cfg
.check(
ctx,
&manifest.id,
&manifest.version,
&manifest.volumes,
dependency,
&if let Some(config) = dependency_config.get(dependency) {
config.clone()
} else if let Some(manifest) = db
.as_package_data()
.as_idx(dependency)
.and_then(|pde| pde.as_installed())
.map(|i| i.as_manifest().de())
.transpose()?
{
if let Some(config) = &manifest.config {
config
.get(ctx, &manifest.id, &manifest.version, &manifest.volumes)
.await?
.config
.unwrap_or_default()
} else {
Config::default()
}
} else {
Config::default()
},
)
.await?
let error = todo!();
{
dependency_config_errs.insert(dependency.clone(), error);
}


@@ -5,16 +5,13 @@ use std::path::Path;
use ed25519::pkcs8::EncodePrivateKey;
use ed25519::PublicKeyBytes;
use ed25519_dalek::{SigningKey, VerifyingKey};
use rpc_toolkit::command;
use tracing::instrument;
use crate::context::SdkContext;
use crate::util::display_none;
use crate::context::CliContext;
use crate::{Error, ResultExt};
#[command(cli_only, blocking, display(display_none))]
#[instrument(skip_all)]
pub fn init(#[context] ctx: SdkContext) -> Result<(), Error> {
pub fn init(ctx: CliContext) -> Result<(), Error> {
if !ctx.developer_key_path.exists() {
let parent = ctx.developer_key_path.parent().unwrap_or(Path::new("/"));
if !parent.exists() {
@@ -48,8 +45,3 @@ pub fn init(#[context] ctx: SdkContext) -> Result<(), Error> {
}
Ok(())
}
#[command(subcommands(crate::s9pk::verify, crate::config::verify_spec))]
pub fn verify() -> Result<(), Error> {
Ok(())
}


@@ -1,44 +1,70 @@
use std::path::Path;
use std::sync::Arc;
use rpc_toolkit::command;
use clap::Parser;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn, from_fn_async, AnyContext, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use crate::context::DiagnosticContext;
use crate::disk::repair;
use crate::context::{CliContext, DiagnosticContext};
use crate::init::SYSTEM_REBUILD_PATH;
use crate::logs::{fetch_logs, LogResponse, LogSource};
use crate::shutdown::Shutdown;
use crate::util::display_none;
use crate::Error;
#[command(subcommands(error, logs, exit, restart, forget_disk, disk, rebuild))]
pub fn diagnostic() -> Result<(), Error> {
Ok(())
pub fn diagnostic() -> ParentHandler {
ParentHandler::new()
.subcommand("error", from_fn(error).with_remote_cli::<CliContext>())
.subcommand("logs", from_fn_async(logs).no_cli())
.subcommand(
"exit",
from_fn(exit).no_display().with_remote_cli::<CliContext>(),
)
.subcommand(
"restart",
from_fn(restart)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand("disk", disk())
.subcommand(
"rebuild",
from_fn_async(rebuild)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[command]
pub fn error(#[context] ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error> {
// #[command]
pub fn error(ctx: DiagnosticContext) -> Result<Arc<RpcError>, Error> {
Ok(ctx.error.clone())
}
#[command(rpc_only)]
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LogsParams {
limit: Option<usize>,
cursor: Option<String>,
before: bool,
}
pub async fn logs(
#[arg] limit: Option<usize>,
#[arg] cursor: Option<String>,
#[arg] before: bool,
_: AnyContext,
LogsParams {
limit,
cursor,
before,
}: LogsParams,
) -> Result<LogResponse, Error> {
Ok(fetch_logs(LogSource::System, limit, cursor, before).await?)
}
#[command(display(display_none))]
pub fn exit(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
pub fn exit(ctx: DiagnosticContext) -> Result<(), Error> {
ctx.shutdown.send(None).expect("receiver dropped");
Ok(())
}
#[command(display(display_none))]
pub fn restart(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
pub fn restart(ctx: DiagnosticContext) -> Result<(), Error> {
ctx.shutdown
.send(Some(Shutdown {
export_args: ctx
@@ -50,20 +76,21 @@ pub fn restart(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
.expect("receiver dropped");
Ok(())
}
#[command(display(display_none))]
pub async fn rebuild(#[context] ctx: DiagnosticContext) -> Result<(), Error> {
pub async fn rebuild(ctx: DiagnosticContext) -> Result<(), Error> {
tokio::fs::write(SYSTEM_REBUILD_PATH, b"").await?;
restart(ctx)
}
#[command(subcommands(forget_disk, repair))]
pub fn disk() -> Result<(), Error> {
Ok(())
pub fn disk() -> ParentHandler {
ParentHandler::new().subcommand(
"forget",
from_fn_async(forget_disk)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[command(rename = "forget", display(display_none))]
pub async fn forget_disk() -> Result<(), Error> {
pub async fn forget_disk(_: AnyContext) -> Result<(), Error> {
let disk_guid = Path::new("/media/embassy/config/disk.guid");
if tokio::fs::metadata(disk_guid).await.is_ok() {
tokio::fs::remove_file(disk_guid).await?;


@@ -7,8 +7,8 @@ use tracing::instrument;
use super::fsck::{RepairStrategy, RequiresReboot};
use super::util::pvscan;
use crate::disk::mount::filesystem::block_dev::mount;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::{FileSystem, ReadWrite};
use crate::disk::mount::util::unmount;
use crate::util::Invoke;
use crate::{Error, ErrorKind, ResultExt};
@@ -142,7 +142,9 @@ pub async fn create_fs<P: AsRef<Path>>(
.arg(&blockdev_path)
.invoke(crate::ErrorKind::DiskManagement)
.await?;
mount(&blockdev_path, datadir.as_ref().join(name), ReadWrite).await?;
BlockDev::new(&blockdev_path)
.mount(datadir.as_ref().join(name), ReadWrite)
.await?;
Ok(())
}
@@ -318,7 +320,9 @@ pub async fn mount_fs<P: AsRef<Path>>(
tokio::fs::rename(&tmp_luks_bak, &luks_bak).await?;
}
mount(&blockdev_path, datadir.as_ref().join(name), ReadWrite).await?;
BlockDev::new(&blockdev_path)
.mount(datadir.as_ref().join(name), ReadWrite)
.await?;
Ok(reboot)
}


@@ -1,13 +1,11 @@
use std::path::{Path, PathBuf};
use clap::ArgMatches;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, AnyContext, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::disk::util::DiskInfo;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
use crate::util::serde::{display_serializable, HandlerExtSerde, WithIoFormat};
use crate::Error;
pub mod fsck;
@@ -42,16 +40,30 @@ impl OsPartitionInfo {
}
}
#[command(subcommands(list, repair))]
pub fn disk() -> Result<(), Error> {
Ok(())
pub fn disk() -> ParentHandler {
ParentHandler::new()
.subcommand(
"list",
from_fn_async(list)
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(display_disk_info(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"repair",
from_fn_async(repair)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
fn display_disk_info(params: WithIoFormat<Empty>, args: Vec<DiskInfo>) {
use prettytable::*;
if matches.is_present("format") {
return display_serializable(info, matches);
if let Some(format) = params.format {
return display_serializable(format, args);
}
let mut table = Table::new();
@@ -60,9 +72,9 @@ fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
"LABEL",
"CAPACITY",
"USED",
"EMBASSY OS VERSION"
"STARTOS VERSION"
]);
for disk in info {
for disk in args {
let row = row![
disk.logicalname.display(),
"N/A",
@@ -101,17 +113,11 @@ fn display_disk_info(info: Vec<DiskInfo>, matches: &ArgMatches) {
table.print_tty(false).unwrap();
}
#[command(display(display_disk_info))]
pub async fn list(
#[context] ctx: RpcContext,
#[allow(unused_variables)]
#[arg]
format: Option<IoFormat>,
) -> Result<Vec<DiskInfo>, Error> {
// #[command(display(display_disk_info))]
pub async fn list(ctx: RpcContext, _: Empty) -> Result<Vec<DiskInfo>, Error> {
crate::disk::util::list(&ctx.os_partitions).await
}
#[command(display(display_none))]
pub async fn repair() -> Result<(), Error> {
tokio::fs::write(REPAIR_DISK_PATH, b"").await?;
Ok(())


@@ -1,24 +1,24 @@
use std::path::{Path, PathBuf};
use std::sync::Arc;
use color_eyre::eyre::eyre;
use helpers::AtomicFile;
use models::PackageId;
use tokio::io::AsyncWriteExt;
use tracing::instrument;
use super::filesystem::ecryptfs::EcryptFS;
use super::guard::{GenericMountGuard, TmpMountGuard};
use super::util::{bind, unmount};
use crate::auth::check_password;
use crate::backup::target::BackupInfo;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::SubPath;
use crate::disk::util::EmbassyOsRecoveryInfo;
use crate::middleware::encrypt::{decrypt_slice, encrypt_slice};
use crate::s9pk::manifest::PackageId;
use crate::util::crypto::{decrypt_slice, encrypt_slice};
use crate::util::serde::IoFormat;
use crate::util::FileLock;
use crate::volume::BACKUP_DIR;
use crate::{Error, ErrorKind, ResultExt};
#[derive(Clone, Debug)]
pub struct BackupMountGuard<G: GenericMountGuard> {
backup_disk_mount_guard: Option<G>,
encrypted_guard: Option<TmpMountGuard>,
@@ -29,7 +29,7 @@ pub struct BackupMountGuard<G: GenericMountGuard> {
impl<G: GenericMountGuard> BackupMountGuard<G> {
fn backup_disk_path(&self) -> &Path {
if let Some(guard) = &self.backup_disk_mount_guard {
guard.as_ref()
guard.path()
} else {
unreachable!()
}
@@ -37,7 +37,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
#[instrument(skip_all)]
pub async fn mount(backup_disk_mount_guard: G, password: &str) -> Result<Self, Error> {
let backup_disk_path = backup_disk_mount_guard.as_ref();
let backup_disk_path = backup_disk_mount_guard.path();
let unencrypted_metadata_path =
backup_disk_path.join("EmbassyBackups/unencrypted-metadata.cbor");
let mut unencrypted_metadata: EmbassyOsRecoveryInfo =
@@ -108,7 +108,7 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
let encrypted_guard =
TmpMountGuard::mount(&EcryptFS::new(&crypt_path, &enc_key), ReadWrite).await?;
let metadata_path = encrypted_guard.as_ref().join("metadata.cbor");
let metadata_path = encrypted_guard.path().join("metadata.cbor");
let metadata: BackupInfo = if tokio::fs::metadata(&metadata_path).await.is_ok() {
IoFormat::Cbor.from_slice(&tokio::fs::read(&metadata_path).await.with_ctx(|_| {
(
@@ -146,22 +146,13 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
}
#[instrument(skip_all)]
pub async fn mount_package_backup(
&self,
id: &PackageId,
) -> Result<PackageBackupMountGuard, Error> {
let lock = FileLock::new(Path::new(BACKUP_DIR).join(format!("{}.lock", id)), false).await?;
let mountpoint = Path::new(BACKUP_DIR).join(id);
bind(self.as_ref().join(id), &mountpoint, false).await?;
Ok(PackageBackupMountGuard {
mountpoint: Some(mountpoint),
lock: Some(lock),
})
pub fn package_backup(self: &Arc<Self>, id: &PackageId) -> SubPath<Arc<Self>> {
SubPath::new(self.clone(), id)
}
#[instrument(skip_all)]
pub async fn save(&self) -> Result<(), Error> {
let metadata_path = self.as_ref().join("metadata.cbor");
let metadata_path = self.path().join("metadata.cbor");
let backup_disk_path = self.backup_disk_path();
let mut file = AtomicFile::new(&metadata_path, None::<PathBuf>)
.await
@@ -181,7 +172,22 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
}
#[instrument(skip_all)]
pub async fn unmount(mut self) -> Result<(), Error> {
pub async fn save_and_unmount(self) -> Result<(), Error> {
self.save().await?;
self.unmount().await?;
Ok(())
}
}
#[async_trait::async_trait]
impl<G: GenericMountGuard> GenericMountGuard for BackupMountGuard<G> {
fn path(&self) -> &Path {
if let Some(guard) = &self.encrypted_guard {
guard.path()
} else {
unreachable!()
}
}
async fn unmount(mut self) -> Result<(), Error> {
if let Some(guard) = self.encrypted_guard.take() {
guard.unmount().await?;
}
@@ -190,22 +196,6 @@ impl<G: GenericMountGuard> BackupMountGuard<G> {
}
Ok(())
}
#[instrument(skip_all)]
pub async fn save_and_unmount(self) -> Result<(), Error> {
self.save().await?;
self.unmount().await?;
Ok(())
}
}
impl<G: GenericMountGuard> AsRef<Path> for BackupMountGuard<G> {
fn as_ref(&self) -> &Path {
if let Some(guard) = &self.encrypted_guard {
guard.as_ref()
} else {
unreachable!()
}
}
}
impl<G: GenericMountGuard> Drop for BackupMountGuard<G> {
fn drop(&mut self) {
@@ -221,42 +211,3 @@ impl<G: GenericMountGuard> Drop for BackupMountGuard<G> {
});
}
}
pub struct PackageBackupMountGuard {
mountpoint: Option<PathBuf>,
lock: Option<FileLock>,
}
impl PackageBackupMountGuard {
pub async fn unmount(mut self) -> Result<(), Error> {
if let Some(mountpoint) = self.mountpoint.take() {
unmount(&mountpoint).await?;
}
if let Some(lock) = self.lock.take() {
lock.unlock().await?;
}
Ok(())
}
}
impl AsRef<Path> for PackageBackupMountGuard {
fn as_ref(&self) -> &Path {
if let Some(mountpoint) = &self.mountpoint {
mountpoint
} else {
unreachable!()
}
}
}
impl Drop for PackageBackupMountGuard {
fn drop(&mut self) {
let mountpoint = self.mountpoint.take();
let lock = self.lock.take();
tokio::spawn(async move {
if let Some(mountpoint) = mountpoint {
unmount(&mountpoint).await.unwrap();
}
if let Some(lock) = lock {
lock.unlock().await.unwrap();
}
});
}
}


@@ -1,14 +1,12 @@
use std::os::unix::ffi::OsStrExt;
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use sha2::Sha256;
use super::{FileSystem, MountType, ReadOnly};
use crate::disk::mount::util::bind;
use crate::{Error, ResultExt};
use super::FileSystem;
use crate::prelude::*;
pub struct Bind<SrcDir: AsRef<Path>> {
src_dir: SrcDir,
@@ -18,19 +16,16 @@ impl<SrcDir: AsRef<Path>> Bind<SrcDir> {
Self { src_dir }
}
}
#[async_trait]
impl<SrcDir: AsRef<Path> + Send + Sync> FileSystem for Bind<SrcDir> {
async fn mount<P: AsRef<Path> + Send + Sync>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error> {
bind(
self.src_dir.as_ref(),
mountpoint,
matches!(mount_type, ReadOnly),
)
.await
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(Some(&self.src_dir))
}
fn extra_args(&self) -> impl IntoIterator<Item = impl AsRef<std::ffi::OsStr>> {
["--bind"]
}
async fn pre_mount(&self) -> Result<(), Error> {
tokio::fs::create_dir_all(self.src_dir.as_ref()).await?;
Ok(())
}
async fn source_hash(
&self,


@@ -1,30 +1,13 @@
use std::os::unix::ffi::OsStrExt;
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use super::{FileSystem, MountType, ReadOnly};
use crate::util::Invoke;
use crate::{Error, ResultExt};
pub async fn mount(
logicalname: impl AsRef<Path>,
mountpoint: impl AsRef<Path>,
mount_type: MountType,
) -> Result<(), Error> {
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
let mut cmd = tokio::process::Command::new("mount");
cmd.arg(logicalname.as_ref()).arg(mountpoint.as_ref());
if mount_type == ReadOnly {
cmd.arg("-o").arg("ro");
}
cmd.invoke(crate::ErrorKind::Filesystem).await?;
Ok(())
}
use super::FileSystem;
use crate::prelude::*;
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@@ -36,14 +19,9 @@ impl<LogicalName: AsRef<Path>> BlockDev<LogicalName> {
BlockDev { logicalname }
}
}
#[async_trait]
impl<LogicalName: AsRef<Path> + Send + Sync> FileSystem for BlockDev<LogicalName> {
async fn mount<P: AsRef<Path> + Send + Sync>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error> {
mount(self.logicalname.as_ref(), mountpoint, mount_type).await
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(Some(&self.logicalname))
}
async fn source_hash(
&self,


@@ -2,7 +2,6 @@ use std::net::IpAddr;
use std::os::unix::ffi::OsStrExt;
use std::path::{Path, PathBuf};
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use serde::{Deserialize, Serialize};
@@ -11,7 +10,7 @@ use tokio::process::Command;
use tracing::instrument;
use super::{FileSystem, MountType, ReadOnly};
use crate::disk::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::util::Invoke;
use crate::Error;
@@ -78,9 +77,8 @@ impl Cifs {
Ok(())
}
}
#[async_trait]
impl FileSystem for Cifs {
async fn mount<P: AsRef<std::path::Path> + Send + Sync>(
async fn mount<P: AsRef<std::path::Path> + Send>(
&self,
mountpoint: P,
mount_type: MountType,


@@ -1,33 +1,17 @@
use std::fmt::Display;
use std::os::unix::ffi::OsStrExt;
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use lazy_format::lazy_format;
use sha2::Sha256;
use tokio::process::Command;
use super::{FileSystem, MountType};
use super::FileSystem;
use crate::disk::mount::filesystem::default_mount_command;
use crate::prelude::*;
use crate::util::Invoke;
use crate::{Error, ResultExt};
pub async fn mount_ecryptfs<P0: AsRef<Path>, P1: AsRef<Path>>(
src: P0,
dst: P1,
key: &str,
) -> Result<(), Error> {
tokio::fs::create_dir_all(dst.as_ref()).await?;
tokio::process::Command::new("mount")
.arg("-t")
.arg("ecryptfs")
.arg(src.as_ref())
.arg(dst.as_ref())
.arg("-o")
// for more information, see `man ecryptfs`
.arg(format!("key=passphrase:passphrase_passwd={},ecryptfs_cipher=aes,ecryptfs_key_bytes=32,ecryptfs_passthrough=n,ecryptfs_enable_filename_crypto=y,no_sig_cache", key))
.input(Some(&mut std::io::Cursor::new(b"\n")))
.invoke(crate::ErrorKind::Filesystem).await?;
Ok(())
}
pub struct EcryptFS<EncryptedDir: AsRef<Path>, Key: AsRef<str>> {
encrypted_dir: EncryptedDir,
@@ -38,16 +22,45 @@ impl<EncryptedDir: AsRef<Path>, Key: AsRef<str>> EcryptFS<EncryptedDir, Key> {
EcryptFS { encrypted_dir, key }
}
}
#[async_trait]
impl<EncryptedDir: AsRef<Path> + Send + Sync, Key: AsRef<str> + Send + Sync> FileSystem
for EcryptFS<EncryptedDir, Key>
{
async fn mount<P: AsRef<Path> + Send + Sync>(
fn mount_type(&self) -> Option<impl AsRef<str>> {
Some("ecryptfs")
}
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(Some(&self.encrypted_dir))
}
fn mount_options(&self) -> impl IntoIterator<Item = impl Display> {
[
Box::new(lazy_format!(
"key=passphrase:passphrase_passwd={}",
self.key.as_ref()
)) as Box<dyn Display>,
Box::new("ecryptfs_cipher=aes"),
Box::new("ecryptfs_key_bytes=32"),
Box::new("ecryptfs_passthrough=n"),
Box::new("ecryptfs_enable_filename_crypto=y"),
Box::new("no_sig_cache"),
]
}
async fn mount<P: AsRef<Path> + Send>(
&self,
mountpoint: P,
_mount_type: MountType, // ignored - inherited from parent fs
mount_type: super::MountType,
) -> Result<(), Error> {
mount_ecryptfs(self.encrypted_dir.as_ref(), mountpoint, self.key.as_ref()).await
self.pre_mount().await?;
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
Command::new("mount")
.args(
default_mount_command(self, mountpoint, mount_type)
.await?
.get_args(),
)
.input(Some(&mut std::io::Cursor::new(b"\n")))
.invoke(crate::ErrorKind::Filesystem)
.await?;
Ok(())
}
async fn source_hash(
&self,


@@ -1,33 +1,19 @@
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use sha2::Sha256;
use super::{FileSystem, MountType, ReadOnly};
use crate::util::Invoke;
use crate::Error;
use super::FileSystem;
use crate::prelude::*;
pub struct EfiVarFs;
#[async_trait]
impl FileSystem for EfiVarFs {
async fn mount<P: AsRef<Path> + Send + Sync>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error> {
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
let mut cmd = tokio::process::Command::new("mount");
cmd.arg("-t")
.arg("efivarfs")
.arg("efivarfs")
.arg(mountpoint.as_ref());
if mount_type == ReadOnly {
cmd.arg("-o").arg("ro");
}
cmd.invoke(crate::ErrorKind::Filesystem).await?;
Ok(())
fn mount_type(&self) -> Option<impl AsRef<str>> {
Some("efivarfs")
}
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(Some("efivarfs"))
}
async fn source_hash(
&self,


@@ -1,6 +1,5 @@
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use reqwest::Url;
@@ -32,9 +31,8 @@ impl HttpDirFS {
HttpDirFS { url }
}
}
#[async_trait]
impl FileSystem for HttpDirFS {
async fn mount<P: AsRef<Path> + Send + Sync>(
async fn mount<P: AsRef<Path> + Send>(
&self,
mountpoint: P,
_mount_type: MountType,


@@ -0,0 +1,88 @@
use std::ffi::OsStr;
use std::fmt::Display;
use std::path::Path;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tokio::process::Command;
use super::{FileSystem, MountType};
use crate::disk::mount::filesystem::default_mount_command;
use crate::prelude::*;
use crate::util::Invoke;
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct IdMapped<Fs: FileSystem> {
filesystem: Fs,
from_id: u32,
to_id: u32,
range: u32,
}
impl<Fs: FileSystem> IdMapped<Fs> {
pub fn new(filesystem: Fs, from_id: u32, to_id: u32, range: u32) -> Self {
Self {
filesystem,
from_id,
to_id,
range,
}
}
}
impl<Fs: FileSystem> FileSystem for IdMapped<Fs> {
fn mount_type(&self) -> Option<impl AsRef<str>> {
self.filesystem.mount_type()
}
fn extra_args(&self) -> impl IntoIterator<Item = impl AsRef<OsStr>> {
self.filesystem.extra_args()
}
fn mount_options(&self) -> impl IntoIterator<Item = impl Display> {
self.filesystem
.mount_options()
.into_iter()
.map(|a| Box::new(a) as Box<dyn Display>)
.chain(std::iter::once(Box::new(lazy_format!(
"X-mount.idmap=b:{}:{}:{}",
self.from_id,
self.to_id,
self.range,
)) as Box<dyn Display>))
}
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
self.filesystem.source().await
}
async fn pre_mount(&self) -> Result<(), Error> {
self.filesystem.pre_mount().await
}
async fn mount<P: AsRef<Path> + Send>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error> {
self.pre_mount().await?;
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
Command::new("mount.next")
.args(
default_mount_command(self, mountpoint, mount_type)
.await?
.get_args(),
)
.invoke(ErrorKind::Filesystem)
.await?;
Ok(())
}
async fn source_hash(
&self,
) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
let mut sha = Sha256::new();
sha.update("IdMapped");
sha.update(self.filesystem.source_hash().await?);
sha.update(u32::to_be_bytes(self.from_id));
sha.update(u32::to_be_bytes(self.to_id));
sha.update(u32::to_be_bytes(self.range));
Ok(sha.finalize())
}
}
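A hedged usage sketch for the id-mapped mount (device and mountpoint paths are placeholders; the types and calls are the ones introduced above):

```rust
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::idmapped::IdMapped;
use crate::disk::mount::filesystem::{FileSystem, ReadWrite};
use crate::prelude::*;

async fn example() -> Result<(), Error> {
    // Shift ownership so uid/gid 0..65536 inside the image show up as
    // 100000..165536 on the host (i.e. `-o X-mount.idmap=b:0:100000:65536`,
    // executed through `mount.next`).
    let fs = IdMapped::new(BlockDev::new("/dev/sda3"), 0, 100_000, 65_536);
    fs.mount("/media/startos/example", ReadWrite).await?;
    Ok(())
}
```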


@@ -1,28 +1,11 @@
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use sha2::Sha256;
use super::{FileSystem, MountType, ReadOnly};
use crate::util::Invoke;
use crate::Error;
pub async fn mount_label(
label: &str,
mountpoint: impl AsRef<Path>,
mount_type: MountType,
) -> Result<(), Error> {
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
let mut cmd = tokio::process::Command::new("mount");
cmd.arg("-L").arg(label).arg(mountpoint.as_ref());
if mount_type == ReadOnly {
cmd.arg("-o").arg("ro");
}
cmd.invoke(crate::ErrorKind::Filesystem).await?;
Ok(())
}
use super::FileSystem;
use crate::prelude::*;
pub struct Label<S: AsRef<str>> {
label: S,
@@ -32,14 +15,12 @@ impl<S: AsRef<str>> Label<S> {
Label { label }
}
}
#[async_trait]
impl<S: AsRef<str> + Send + Sync> FileSystem for Label<S> {
async fn mount<P: AsRef<Path> + Send + Sync>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error> {
mount_label(self.label.as_ref(), mountpoint, mount_type).await
fn extra_args(&self) -> impl IntoIterator<Item = impl AsRef<std::ffi::OsStr>> {
["-L", self.label.as_ref()]
}
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(None::<&Path>)
}
async fn source_hash(
&self,


@@ -1,38 +1,15 @@
use std::fmt::Display;
use std::os::unix::ffi::OsStrExt;
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use lazy_format::lazy_format;
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use super::{FileSystem, MountType, ReadOnly};
use crate::util::Invoke;
use crate::{Error, ResultExt};
pub async fn mount(
logicalname: impl AsRef<Path>,
offset: u64,
size: u64,
mountpoint: impl AsRef<Path>,
mount_type: MountType,
) -> Result<(), Error> {
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
let mut opts = format!("loop,offset={offset},sizelimit={size}");
if mount_type == ReadOnly {
opts += ",ro";
}
tokio::process::Command::new("mount")
.arg(logicalname.as_ref())
.arg(mountpoint.as_ref())
.arg("-o")
.arg(opts)
.invoke(crate::ErrorKind::Filesystem)
.await?;
Ok(())
}
use super::FileSystem;
use crate::prelude::*;
#[derive(Debug, Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
@@ -50,21 +27,18 @@ impl<LogicalName: AsRef<Path>> LoopDev<LogicalName> {
}
}
}
#[async_trait]
impl<LogicalName: AsRef<Path> + Send + Sync> FileSystem for LoopDev<LogicalName> {
async fn mount<P: AsRef<Path> + Send + Sync>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error> {
mount(
self.logicalname.as_ref(),
self.offset,
self.size,
mountpoint,
mount_type,
)
.await
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(Some(
tokio::fs::canonicalize(self.logicalname.as_ref()).await?,
))
}
fn mount_options(&self) -> impl IntoIterator<Item = impl Display> {
[
Box::new("loop") as Box<dyn Display>,
Box::new(lazy_format!("offset={}", self.offset)),
Box::new(lazy_format!("sizelimit={}", self.size)),
]
}
async fn source_hash(
&self,


@@ -1,11 +1,15 @@
use std::ffi::OsStr;
use std::fmt::{Display, Write};
use std::path::Path;
use async_trait::async_trait;
use digest::generic_array::GenericArray;
use digest::OutputSizeUser;
use futures::Future;
use sha2::Sha256;
use tokio::process::Command;
use crate::Error;
use crate::prelude::*;
use crate::util::Invoke;
pub mod bind;
pub mod block_dev;
@@ -13,8 +17,10 @@ pub mod cifs;
pub mod ecryptfs;
pub mod efivarfs;
pub mod httpdirfs;
pub mod idmapped;
pub mod label;
pub mod loop_dev;
pub mod overlayfs;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum MountType {
@@ -24,14 +30,78 @@ pub enum MountType {
pub use MountType::*;
#[async_trait]
pub trait FileSystem {
async fn mount<P: AsRef<Path> + Send + Sync>(
pub(self) async fn default_mount_command(
fs: &(impl FileSystem + ?Sized),
mountpoint: impl AsRef<Path> + Send,
mount_type: MountType,
) -> Result<std::process::Command, Error> {
let mut cmd = std::process::Command::new("mount");
if mount_type == ReadOnly {
cmd.arg("-r");
}
cmd.args(fs.extra_args());
if let Some(ty) = fs.mount_type() {
cmd.arg("-t").arg(ty.as_ref());
}
if let Some(options) = fs
.mount_options()
.into_iter()
.fold(None, |acc: Option<String>, x| match acc {
Some(mut s) => {
write!(s, ",{}", x).unwrap();
Some(s)
}
None => Some(x.to_string()),
})
{
cmd.arg("-o").arg(options);
}
if let Some(source) = fs.source().await? {
cmd.arg(source.as_ref());
}
cmd.arg(mountpoint.as_ref());
Ok(dbg!(cmd))
}
pub(self) async fn default_mount_impl(
fs: &(impl FileSystem + ?Sized),
mountpoint: impl AsRef<Path> + Send,
mount_type: MountType,
) -> Result<(), Error> {
fs.pre_mount().await?;
tokio::fs::create_dir_all(mountpoint.as_ref()).await?;
Command::from(default_mount_command(fs, mountpoint, mount_type).await?)
.invoke(ErrorKind::Filesystem)
.await?;
Ok(())
}
pub trait FileSystem: Send + Sync {
fn mount_type(&self) -> Option<impl AsRef<str>> {
None::<&str>
}
fn extra_args(&self) -> impl IntoIterator<Item = impl AsRef<OsStr>> {
[] as [&str; 0]
}
fn mount_options(&self) -> impl IntoIterator<Item = impl Display> {
[] as [&str; 0]
}
fn source(&self) -> impl Future<Output = Result<Option<impl AsRef<Path>>, Error>> + Send {
async { Ok(None::<&Path>) }
}
fn pre_mount(&self) -> impl Future<Output = Result<(), Error>> + Send {
async { Ok(()) }
}
fn mount<P: AsRef<Path> + Send>(
&self,
mountpoint: P,
mount_type: MountType,
) -> Result<(), Error>;
async fn source_hash(
) -> impl Future<Output = Result<(), Error>> + Send {
default_mount_impl(self, mountpoint, mount_type)
}
fn source_hash(
&self,
) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error>;
) -> impl Future<Output = Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error>>
+ Send;
}
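To show what the reworked trait asks of an implementor, here is a minimal hypothetical filesystem (`ProcFs` is illustrative, not part of this change) that leans entirely on the provided defaults:

```rust
use std::path::Path;

use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use sha2::Sha256;

use crate::disk::mount::filesystem::FileSystem;
use crate::prelude::*;

pub struct ProcFs;

impl FileSystem for ProcFs {
    // `-t proc`
    fn mount_type(&self) -> Option<impl AsRef<str>> {
        Some("proc")
    }
    // the pseudo-device passed as the mount source
    async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
        Ok(Some("proc"))
    }
    // extra_args, mount_options, pre_mount, and mount all use the trait
    // defaults, so `ProcFs.mount(path, ReadOnly)` shells out to roughly
    // `mount -r -t proc proc <path>` via default_mount_command.
    async fn source_hash(
        &self,
    ) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
        let mut sha = Sha256::new();
        sha.update("ProcFs");
        Ok(sha.finalize())
    }
}
```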


@@ -0,0 +1,153 @@
use std::fmt::Display;
use std::os::unix::ffi::OsStrExt;
use std::path::Path;
use digest::generic_array::GenericArray;
use digest::{Digest, OutputSizeUser};
use sha2::Sha256;
use crate::disk::mount::filesystem::{FileSystem, ReadOnly, ReadWrite};
use crate::disk::mount::guard::{GenericMountGuard, MountGuard, TmpMountGuard};
use crate::prelude::*;
use crate::util::io::TmpDir;
struct OverlayFs<P0: AsRef<Path>, P1: AsRef<Path>> {
lower: P0,
upper: P1,
}
impl<P0: AsRef<Path>, P1: AsRef<Path>> OverlayFs<P0, P1> {
pub fn new(lower: P0, upper: P1) -> Self {
Self { lower, upper }
}
}
impl<P0: AsRef<Path> + Send + Sync, P1: AsRef<Path> + Send + Sync> FileSystem
for OverlayFs<P0, P1>
{
fn mount_type(&self) -> Option<impl AsRef<str>> {
Some("overlay")
}
async fn source(&self) -> Result<Option<impl AsRef<Path>>, Error> {
Ok(Some("overlay"))
}
fn mount_options(&self) -> impl IntoIterator<Item = impl Display> {
[
Box::new(lazy_format!("lowerdir={}", self.lower.as_ref().display()))
as Box<dyn Display>,
Box::new(lazy_format!(
"upperdir={}/upper",
self.upper.as_ref().display()
)),
Box::new(lazy_format!(
"workdir={}/work",
self.upper.as_ref().display()
)),
]
}
async fn pre_mount(&self) -> Result<(), Error> {
tokio::fs::create_dir_all(self.upper.as_ref().join("upper")).await?;
tokio::fs::create_dir_all(self.upper.as_ref().join("work")).await?;
Ok(())
}
async fn source_hash(
&self,
) -> Result<GenericArray<u8, <Sha256 as OutputSizeUser>::OutputSize>, Error> {
let mut sha = Sha256::new();
sha.update("OverlayFs");
sha.update(
tokio::fs::canonicalize(self.lower.as_ref())
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
self.lower.as_ref().display().to_string(),
)
})?
.as_os_str()
.as_bytes(),
);
sha.update(
tokio::fs::canonicalize(self.upper.as_ref())
.await
.with_ctx(|_| {
(
crate::ErrorKind::Filesystem,
self.upper.as_ref().display().to_string(),
)
})?
.as_os_str()
.as_bytes(),
);
Ok(sha.finalize())
}
}
#[derive(Debug)]
pub struct OverlayGuard {
lower: Option<TmpMountGuard>,
upper: Option<TmpDir>,
inner_guard: MountGuard,
}
impl OverlayGuard {
pub async fn mount(
base: &impl FileSystem,
mountpoint: impl AsRef<Path>,
) -> Result<Self, Error> {
let lower = TmpMountGuard::mount(base, ReadOnly).await?;
let upper = TmpDir::new().await?;
let inner_guard = MountGuard::mount(
&OverlayFs::new(lower.path(), upper.as_ref()),
mountpoint,
ReadWrite,
)
.await?;
Ok(Self {
lower: Some(lower),
upper: Some(upper),
inner_guard,
})
}
pub async fn unmount(mut self, delete_mountpoint: bool) -> Result<(), Error> {
self.inner_guard.take().unmount(delete_mountpoint).await?;
if let Some(lower) = self.lower.take() {
lower.unmount().await?;
}
if let Some(upper) = self.upper.take() {
upper.delete().await?;
}
Ok(())
}
pub fn take(&mut self) -> Self {
Self {
lower: self.lower.take(),
upper: self.upper.take(),
inner_guard: self.inner_guard.take(),
}
}
}
#[async_trait::async_trait]
impl GenericMountGuard for OverlayGuard {
fn path(&self) -> &Path {
self.inner_guard.path()
}
async fn unmount(mut self) -> Result<(), Error> {
self.unmount(false).await
}
}
impl Drop for OverlayGuard {
fn drop(&mut self) {
let lower = self.lower.take();
let upper = self.upper.take();
let guard = self.inner_guard.take();
if lower.is_some() || upper.is_some() || guard.mounted {
tokio::spawn(async move {
guard.unmount(false).await.unwrap();
if let Some(lower) = lower {
lower.unmount().await.unwrap();
}
if let Some(upper) = upper {
upper.delete().await.unwrap();
}
});
}
}
}
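A usage sketch for `OverlayGuard` (the loop device and mountpoint are placeholders; the calls are the ones defined above):

```rust
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::overlayfs::OverlayGuard;
use crate::disk::mount::guard::GenericMountGuard;
use crate::prelude::*;

async fn example() -> Result<(), Error> {
    // Mount a read-only base image with a throwaway writable layer on top.
    let guard =
        OverlayGuard::mount(&BlockDev::new("/dev/loop0"), "/media/startos/overlay").await?;
    // Writes land in the temporary upper dir, never in the base image.
    tokio::fs::write(guard.path().join("scratch"), b"hello").await?;
    // Tear down the overlay, the upper tmpdir, and the lower tmp mount;
    // `true` also deletes the mountpoint directory.
    guard.unmount(true).await?;
    Ok(())
}
```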


@@ -9,20 +9,47 @@ use tracing::instrument;
use super::filesystem::{FileSystem, MountType, ReadOnly, ReadWrite};
use super::util::unmount;
use crate::util::Invoke;
use crate::util::{Invoke, Never};
use crate::Error;
pub const TMP_MOUNTPOINT: &'static str = "/media/embassy/tmp";
#[async_trait::async_trait]
pub trait GenericMountGuard: AsRef<Path> + std::fmt::Debug + Send + Sync + 'static {
pub trait GenericMountGuard: std::fmt::Debug + Send + Sync + 'static {
fn path(&self) -> &Path;
async fn unmount(mut self) -> Result<(), Error>;
}
#[async_trait::async_trait]
impl GenericMountGuard for Never {
fn path(&self) -> &Path {
match *self {}
}
async fn unmount(mut self) -> Result<(), Error> {
match self {}
}
}
#[async_trait::async_trait]
impl<T> GenericMountGuard for Arc<T>
where
T: GenericMountGuard,
{
fn path(&self) -> &Path {
(&**self).path()
}
async fn unmount(mut self) -> Result<(), Error> {
if let Ok(guard) = Arc::try_unwrap(self) {
guard.unmount().await?;
}
Ok(())
}
}
#[derive(Debug)]
pub struct MountGuard {
mountpoint: PathBuf,
mounted: bool,
pub(super) mounted: bool,
}
impl MountGuard {
pub async fn mount(
@@ -37,6 +64,16 @@ impl MountGuard {
mounted: true,
})
}
fn as_unmounted(&self) -> Self {
Self {
mountpoint: self.mountpoint.clone(),
mounted: false,
}
}
pub fn take(&mut self) -> Self {
let unmounted = self.as_unmounted();
std::mem::replace(self, unmounted)
}
pub async fn unmount(mut self, delete_mountpoint: bool) -> Result<(), Error> {
if self.mounted {
unmount(&self.mountpoint).await?;
@@ -57,11 +94,6 @@ impl MountGuard {
Ok(())
}
}
impl AsRef<Path> for MountGuard {
fn as_ref(&self) -> &Path {
&self.mountpoint
}
}
impl Drop for MountGuard {
fn drop(&mut self) {
if self.mounted {
@@ -72,6 +104,9 @@ impl Drop for MountGuard {
}
#[async_trait::async_trait]
impl GenericMountGuard for MountGuard {
fn path(&self) -> &Path {
&self.mountpoint
}
async fn unmount(mut self) -> Result<(), Error> {
MountGuard::unmount(self, false).await
}
@@ -89,7 +124,7 @@ lazy_static! {
Mutex::new(BTreeMap::new());
}
#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct TmpMountGuard {
guard: Arc<MountGuard>,
}
@@ -122,21 +157,42 @@ impl TmpMountGuard {
Ok(TmpMountGuard { guard })
}
}
pub async fn unmount(self) -> Result<(), Error> {
if let Ok(guard) = Arc::try_unwrap(self.guard) {
guard.unmount(true).await?;
}
Ok(())
}
}
impl AsRef<Path> for TmpMountGuard {
fn as_ref(&self) -> &Path {
(&*self.guard).as_ref()
pub fn take(&mut self) -> Self {
let unmounted = Self {
guard: Arc::new(self.guard.as_unmounted()),
};
std::mem::replace(self, unmounted)
}
}
#[async_trait::async_trait]
impl GenericMountGuard for TmpMountGuard {
fn path(&self) -> &Path {
self.guard.path()
}
async fn unmount(mut self) -> Result<(), Error> {
TmpMountGuard::unmount(self).await
self.guard.unmount().await
}
}
#[derive(Debug)]
pub struct SubPath<G: GenericMountGuard> {
guard: G,
path: PathBuf,
}
impl<G: GenericMountGuard> SubPath<G> {
pub fn new(guard: G, path: impl AsRef<Path>) -> Self {
let path = path.as_ref();
let path = guard.path().join(path.strip_prefix("/").unwrap_or(path));
Self { guard, path }
}
}
#[async_trait::async_trait]
impl<G: GenericMountGuard> GenericMountGuard for SubPath<G> {
fn path(&self) -> &Path {
self.path.as_path()
}
async fn unmount(mut self) -> Result<(), Error> {
self.guard.unmount().await
}
}
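A small sketch of `SubPath`, which replaces the old bind-mount-per-package backup guard (device path and subdirectory are placeholders):

```rust
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::ReadOnly;
use crate::disk::mount::guard::{GenericMountGuard, SubPath, TmpMountGuard};
use crate::prelude::*;

async fn example() -> Result<(), Error> {
    let disk = TmpMountGuard::mount(&BlockDev::new("/dev/sdb1"), ReadOnly).await?;
    // Scope the guard to a subdirectory instead of bind-mounting it elsewhere.
    let backups = SubPath::new(disk, "EmbassyBackups");
    assert!(backups.path().ends_with("EmbassyBackups"));
    // Unmounting the SubPath unmounts the guard it wraps.
    backups.unmount().await?;
    Ok(())
}
```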


@@ -44,7 +44,7 @@ pub async fn bind<P0: AsRef<Path>, P1: AsRef<Path>>(
pub async fn unmount<P: AsRef<Path>>(mountpoint: P) -> Result<(), Error> {
tracing::debug!("Unmounting {}.", mountpoint.as_ref().display());
tokio::process::Command::new("umount")
.arg("-l")
.arg("-Rl")
.arg(mountpoint.as_ref())
.invoke(crate::ErrorKind::Filesystem)
.await?;


@@ -17,6 +17,7 @@ use tracing::instrument;
use super::mount::filesystem::block_dev::BlockDev;
use super::mount::filesystem::ReadOnly;
use super::mount::guard::TmpMountGuard;
use crate::disk::mount::guard::GenericMountGuard;
use crate::disk::OsPartitionInfo;
use crate::util::serde::IoFormat;
use crate::util::{Invoke, Version};
@@ -403,13 +404,13 @@ async fn part_info(part: PathBuf) -> PartitionInfo {
match TmpMountGuard::mount(&BlockDev::new(&part), ReadOnly).await {
Err(e) => tracing::warn!("Could not collect usage information: {}", e.source),
Ok(mount_guard) => {
used = get_used(&mount_guard)
used = get_used(mount_guard.path())
.await
.map_err(|e| {
tracing::warn!("Could not get usage of {}: {}", part.display(), e.source)
})
.ok();
if let Some(recovery_info) = match recovery_info(&mount_guard).await {
if let Some(recovery_info) = match recovery_info(mount_guard.path()).await {
Ok(a) => a,
Err(e) => {
tracing::error!("Error fetching unencrypted backup metadata: {}", e);


@@ -1,4 +1,3 @@
use color_eyre::eyre::eyre;
pub use models::{Error, ErrorKind, OptionExt, ResultExt};
#[derive(Debug, Default)]
@@ -18,11 +17,15 @@ impl ErrorCollection {
}
}
pub fn into_result(self) -> Result<(), Error> {
if self.0.is_empty() {
Ok(())
pub fn into_result(mut self) -> Result<(), Error> {
if self.0.len() <= 1 {
if let Some(err) = self.0.pop() {
Err(err)
} else {
Ok(())
}
} else {
Err(Error::new(eyre!("{}", self), ErrorKind::MultipleErrors))
Err(Error::new(self, ErrorKind::MultipleErrors))
}
}
}
@@ -49,6 +52,7 @@ impl std::fmt::Display for ErrorCollection {
Ok(())
}
}
impl std::error::Error for ErrorCollection {}
#[macro_export]
macro_rules! ensure_code {


@@ -2,8 +2,7 @@ use std::collections::BTreeSet;
use std::path::Path;
use async_compression::tokio::bufread::GzipDecoder;
use clap::ArgMatches;
use rpc_toolkit::command;
use clap::Parser;
use serde::{Deserialize, Serialize};
use tokio::fs::File;
use tokio::io::BufReader;
@@ -43,8 +42,8 @@ pub struct Firmware {
shasum: String,
}
fn display_firmware_update_result(arg: RequiresReboot, _: &ArgMatches) {
if arg.0 {
pub fn display_firmware_update_result(result: RequiresReboot) {
if result.0 {
println!("Firmware successfully updated! Reboot to apply changes.");
} else {
println!("No firmware update available.");
@@ -55,7 +54,7 @@ fn display_firmware_update_result(arg: RequiresReboot, _: &ArgMatches) {
/// that the firmware was the correct one and updated for
/// systems like the Pure System, where a new firmware
/// was released and the updates were pushed through Pure OS.
#[command(rename = "update-firmware", display(display_firmware_update_result))]
// #[command(rename = "update-firmware", display(display_firmware_update_result))]
pub async fn update_firmware() -> Result<RequiresReboot, Error> {
let system_product_name = String::from_utf8(
Command::new("dmidecode")

View File

@@ -4,7 +4,6 @@ use std::path::Path;
use std::time::{Duration, SystemTime};
use color_eyre::eyre::eyre;
use models::ResultExt;
use rand::random;
use sqlx::{Pool, Postgres};
@@ -12,17 +11,12 @@ use tokio::process::Command;
use tracing::instrument;
use crate::account::AccountInfo;
use crate::context::rpc::RpcContextConfig;
use crate::context::config::ServerConfig;
use crate::db::model::ServerStatus;
use crate::disk::mount::util::unmount;
use crate::install::PKG_ARCHIVE_DIR;
use crate::middleware::auth::LOCAL_AUTH_COOKIE_PATH;
use crate::prelude::*;
use crate::util::cpupower::{
get_available_governors, get_preferred_governor, set_governor,
};
use crate::util::docker::{create_bridge_network, CONTAINER_DATADIR, CONTAINER_TOOL};
use crate::util::cpupower::{get_available_governors, get_preferred_governor, set_governor};
use crate::util::Invoke;
use crate::{Error, ARCH};
@@ -190,7 +184,7 @@ pub struct InitResult {
}
#[instrument(skip_all)]
pub async fn init(cfg: &RpcContextConfig) -> Result<InitResult, Error> {
pub async fn init(cfg: &ServerConfig) -> Result<InitResult, Error> {
tokio::fs::create_dir_all("/run/embassy")
.await
.with_ctx(|_| (crate::ErrorKind::Filesystem, "mkdir -p /run/embassy"))?;
@@ -292,77 +286,6 @@ pub async fn init(cfg: &RpcContextConfig) -> Result<InitResult, Error> {
tokio::fs::remove_dir_all(&tmp_var).await?;
}
crate::disk::mount::util::bind(&tmp_var, "/var/tmp", false).await?;
let tmp_docker = cfg
.datadir()
.join(format!("package-data/tmp/{CONTAINER_TOOL}"));
let tmp_docker_exists = tokio::fs::metadata(&tmp_docker).await.is_ok();
if CONTAINER_TOOL == "docker" {
Command::new("systemctl")
.arg("stop")
.arg("docker")
.invoke(crate::ErrorKind::Docker)
.await?;
}
crate::disk::mount::util::bind(&tmp_docker, CONTAINER_DATADIR, false).await?;
if CONTAINER_TOOL == "docker" {
Command::new("systemctl")
.arg("reset-failed")
.arg("docker")
.invoke(crate::ErrorKind::Docker)
.await?;
Command::new("systemctl")
.arg("start")
.arg("docker")
.invoke(crate::ErrorKind::Docker)
.await?;
}
tracing::info!("Mounted Docker Data");
if should_rebuild || !tmp_docker_exists {
if CONTAINER_TOOL == "docker" {
tracing::info!("Creating Docker Network");
create_bridge_network("start9", "172.18.0.1/24", "br-start9").await?;
tracing::info!("Created Docker Network");
}
let datadir = cfg.datadir();
tracing::info!("Loading System Docker Images");
crate::install::rebuild_from("/usr/lib/startos/system-images", &datadir).await?;
tracing::info!("Loaded System Docker Images");
tracing::info!("Loading Package Docker Images");
crate::install::rebuild_from(datadir.join(PKG_ARCHIVE_DIR), &datadir).await?;
tracing::info!("Loaded Package Docker Images");
}
if CONTAINER_TOOL == "podman" {
crate::util::docker::remove_container("netdummy", true).await?;
Command::new("podman")
.arg("run")
.arg("-d")
.arg("--rm")
.arg("--init")
.arg("--network=start9")
.arg("--name=netdummy")
.arg("start9/x_system/utils:latest")
.arg("sleep")
.arg("infinity")
.invoke(crate::ErrorKind::Docker)
.await?;
}
tracing::info!("Enabling Docker QEMU Emulation");
Command::new(CONTAINER_TOOL)
.arg("run")
.arg("--privileged")
.arg("--rm")
.arg("start9/x_system/binfmt")
.arg("--install")
.arg("all")
.invoke(crate::ErrorKind::Docker)
.await?;
tracing::info!("Enabled Docker QEMU Emulation");
let governor = if let Some(governor) = &server_info.governor {
if get_available_governors().await?.contains(governor) {

View File

@@ -1,20 +1,36 @@
use std::path::PathBuf;
use rpc_toolkit::command;
use clap::Parser;
use rpc_toolkit::{command, from_fn_async, AnyContext, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use crate::context::CliContext;
use crate::s9pk::manifest::Manifest;
use crate::s9pk::reader::S9pkReader;
use crate::util::display_none;
use crate::util::serde::{display_serializable, IoFormat};
// use crate::s9pk::reader::S9pkReader;
use crate::util::serde::HandlerExtSerde;
use crate::Error;
#[command(subcommands(hash, manifest, license, icon, instructions, docker_images))]
pub fn inspect() -> Result<(), Error> {
Ok(())
pub fn inspect() -> ParentHandler {
ParentHandler::new()
.subcommand("hash", from_fn_async(hash))
.subcommand(
"manifest",
from_fn_async(manifest).with_display_serializable(),
)
.subcommand("license", from_fn_async(license).no_display())
.subcommand("icon", from_fn_async(icon).no_display())
.subcommand("instructions", from_fn_async(instructions).no_display())
.subcommand("docker-images", from_fn_async(docker_images).no_display())
}
#[command(cli_only)]
pub async fn hash(#[arg] path: PathBuf) -> Result<String, Error> {
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct HashParams {
path: PathBuf,
}
pub async fn hash(_: CliContext, HashParams { path }: HashParams) -> Result<String, Error> {
Ok(S9pkReader::open(path, true)
.await?
.hash_str()
@@ -22,21 +38,36 @@ pub async fn hash(#[arg] path: PathBuf) -> Result<String, Error> {
.to_owned())
}
#[command(cli_only, display(display_serializable))]
pub async fn manifest(
#[arg] path: PathBuf,
#[arg(rename = "no-verify", long = "no-verify")] no_verify: bool,
#[allow(unused_variables)]
#[arg(long = "format")]
format: Option<IoFormat>,
) -> Result<Manifest, Error> {
S9pkReader::open(path, !no_verify).await?.manifest().await
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct ManifestParams {
path: PathBuf,
#[arg(name = "no-verify", long = "no-verify")]
no_verify: bool,
}
// #[command(cli_only, display(display_serializable))]
pub async fn manifest(
_: CliContext,
ManifestParams { .. }: ManifestParams,
) -> Result<Manifest, Error> {
// S9pkReader::open(path, !no_verify).await?.manifest().await
todo!()
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct InspectParams {
path: PathBuf,
#[arg(name = "no-verify", long = "no-verify")]
no_verify: bool,
}
#[command(cli_only, display(display_none))]
pub async fn license(
#[arg] path: PathBuf,
#[arg(rename = "no-verify", long = "no-verify")] no_verify: bool,
_: AnyContext,
InspectParams { path, no_verify }: InspectParams,
) -> Result<(), Error> {
tokio::io::copy(
&mut S9pkReader::open(path, !no_verify).await?.license().await?,
@@ -46,10 +77,9 @@ pub async fn license(
Ok(())
}
#[command(cli_only, display(display_none))]
pub async fn icon(
#[arg] path: PathBuf,
#[arg(rename = "no-verify", long = "no-verify")] no_verify: bool,
_: AnyContext,
InspectParams { path, no_verify }: InspectParams,
) -> Result<(), Error> {
tokio::io::copy(
&mut S9pkReader::open(path, !no_verify).await?.icon().await?,
@@ -58,11 +88,18 @@ pub async fn icon(
.await?;
Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct InstructionParams {
path: PathBuf,
#[arg(name = "no-verify", long = "no-verify")]
no_verify: bool,
}
#[command(cli_only, display(display_none))]
pub async fn instructions(
#[arg] path: PathBuf,
#[arg(rename = "no-verify", long = "no-verify")] no_verify: bool,
_: CliContext,
InstructionParams { path, no_verify }: InstructionParams,
) -> Result<(), Error> {
tokio::io::copy(
&mut S9pkReader::open(path, !no_verify)
@@ -74,11 +111,9 @@ pub async fn instructions(
.await?;
Ok(())
}
#[command(cli_only, display(display_none), rename = "docker-images")]
pub async fn docker_images(
#[arg] path: PathBuf,
#[arg(rename = "no-verify", long = "no-verify")] no_verify: bool,
_: AnyContext,
InspectParams { path, no_verify }: InspectParams,
) -> Result<(), Error> {
tokio::io::copy(
&mut S9pkReader::open(path, !no_verify)

View File

@@ -1,241 +0,0 @@
use std::path::PathBuf;
use std::sync::Arc;
use models::OptionExt;
use sqlx::{Executor, Postgres};
use tracing::instrument;
use super::PKG_ARCHIVE_DIR;
use crate::context::RpcContext;
use crate::db::model::{
CurrentDependencies, Database, PackageDataEntry, PackageDataEntryInstalled,
PackageDataEntryMatchModelRef,
};
use crate::error::ErrorCollection;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::{Apply, Version};
use crate::volume::{asset_dir, script_dir};
use crate::Error;
#[instrument(skip_all)]
pub async fn cleanup(ctx: &RpcContext, id: &PackageId, version: &Version) -> Result<(), Error> {
let mut errors = ErrorCollection::new();
ctx.managers.remove(&(id.clone(), version.clone())).await;
// docker images start9/$APP_ID/*:$VERSION -q | xargs docker rmi
let images = crate::util::docker::images_for(id, version).await?;
errors.extend(
futures::future::join_all(images.into_iter().map(|sha| async {
let sha = sha; // move into future
crate::util::docker::remove_image(&sha).await
}))
.await,
);
let pkg_archive_dir = ctx
.datadir
.join(PKG_ARCHIVE_DIR)
.join(id)
.join(version.as_str());
if tokio::fs::metadata(&pkg_archive_dir).await.is_ok() {
tokio::fs::remove_dir_all(&pkg_archive_dir)
.await
.apply(|res| errors.handle(res));
}
let assets_path = asset_dir(&ctx.datadir, id, version);
if tokio::fs::metadata(&assets_path).await.is_ok() {
tokio::fs::remove_dir_all(&assets_path)
.await
.apply(|res| errors.handle(res));
}
let scripts_path = script_dir(&ctx.datadir, id, version);
if tokio::fs::metadata(&scripts_path).await.is_ok() {
tokio::fs::remove_dir_all(&scripts_path)
.await
.apply(|res| errors.handle(res));
}
errors.into_result()
}
#[instrument(skip_all)]
pub async fn cleanup_failed(ctx: &RpcContext, id: &PackageId) -> Result<(), Error> {
if let Some(version) = match ctx
.db
.peek()
.await
.as_package_data()
.as_idx(id)
.or_not_found(id)?
.as_match()
{
PackageDataEntryMatchModelRef::Installing(m) => Some(m.as_manifest().as_version().de()?),
PackageDataEntryMatchModelRef::Restoring(m) => Some(m.as_manifest().as_version().de()?),
PackageDataEntryMatchModelRef::Updating(m) => {
let manifest_version = m.as_manifest().as_version().de()?;
let installed = m.as_installed().as_manifest().as_version().de()?;
if manifest_version != installed {
Some(manifest_version)
} else {
None // do not remove existing data
}
}
_ => {
tracing::warn!("{}: Nothing to clean up!", id);
None
}
} {
cleanup(ctx, id, &version).await?;
}
ctx.db
.mutate(|v| {
match v
.clone()
.into_package_data()
.into_idx(id)
.or_not_found(id)?
.as_match()
{
PackageDataEntryMatchModelRef::Installing(_)
| PackageDataEntryMatchModelRef::Restoring(_) => {
v.as_package_data_mut().remove(id)?;
}
PackageDataEntryMatchModelRef::Updating(pde) => {
v.as_package_data_mut()
.as_idx_mut(id)
.or_not_found(id)?
.ser(&PackageDataEntry::Installed(PackageDataEntryInstalled {
manifest: pde.as_installed().as_manifest().de()?,
static_files: pde.as_static_files().de()?,
installed: pde.as_installed().de()?,
}))?;
}
_ => (),
}
Ok(())
})
.await
}
#[instrument(skip_all)]
pub fn remove_from_current_dependents_lists(
db: &mut Model<Database>,
id: &PackageId,
current_dependencies: &CurrentDependencies,
) -> Result<(), Error> {
for dep in current_dependencies.0.keys().chain(std::iter::once(id)) {
if let Some(current_dependents) = db
.as_package_data_mut()
.as_idx_mut(dep)
.and_then(|d| d.as_installed_mut())
.map(|i| i.as_current_dependents_mut())
{
current_dependents.remove(id)?;
}
}
Ok(())
}
#[instrument(skip_all)]
pub async fn uninstall<Ex>(ctx: &RpcContext, secrets: &mut Ex, id: &PackageId) -> Result<(), Error>
where
for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{
let db = ctx.db.peek().await;
let entry = db
.as_package_data()
.as_idx(id)
.or_not_found(id)?
.expect_as_removing()?;
let dependents_paths: Vec<PathBuf> = entry
.as_removing()
.as_current_dependents()
.keys()?
.into_iter()
.filter(|x| x != id)
.flat_map(|x| db.as_package_data().as_idx(&x))
.flat_map(|x| x.as_installed())
.flat_map(|x| x.as_manifest().as_volumes().de())
.flat_map(|x| x.values().cloned().collect::<Vec<_>>())
.flat_map(|x| x.pointer_path(&ctx.datadir))
.collect();
let volume_dir = ctx
.datadir
.join(crate::volume::PKG_VOLUME_DIR)
.join(&*entry.as_manifest().as_id().de()?);
let version = entry.as_removing().as_manifest().as_version().de()?;
tracing::debug!(
"Cleaning up {:?} except for {:?}",
volume_dir,
dependents_paths
);
cleanup(ctx, id, &version).await?;
cleanup_folder(volume_dir, Arc::new(dependents_paths)).await;
remove_network_keys(secrets, id).await?;
ctx.db
.mutate(|d| {
d.as_package_data_mut().remove(id)?;
remove_from_current_dependents_lists(
d,
id,
&entry.as_removing().as_current_dependencies().de()?,
)
})
.await
}
#[instrument(skip_all)]
pub async fn remove_network_keys<Ex>(secrets: &mut Ex, id: &PackageId) -> Result<(), Error>
where
for<'a> &'a mut Ex: Executor<'a, Database = Postgres>,
{
sqlx::query!("DELETE FROM network_keys WHERE package = $1", &*id)
.execute(&mut *secrets)
.await?;
sqlx::query!("DELETE FROM tor WHERE package = $1", &*id)
.execute(&mut *secrets)
.await?;
Ok(())
}
/// Removes the folder, without removing the folders that are mounted in the other docker containers
pub fn cleanup_folder(
path: PathBuf,
dependents_volumes: Arc<Vec<PathBuf>>,
) -> futures::future::BoxFuture<'static, ()> {
Box::pin(async move {
let meta_data = match tokio::fs::metadata(&path).await {
Ok(a) => a,
Err(_e) => {
return;
}
};
if !meta_data.is_dir() {
tracing::error!("is_not dir, remove {:?}", path);
let _ = tokio::fs::remove_file(&path).await;
return;
}
if !dependents_volumes
.iter()
.any(|v| v.starts_with(&path) || v == &path)
{
tracing::error!("No parents, remove {:?}", path);
let _ = tokio::fs::remove_dir_all(&path).await;
return;
}
let mut read_dir = match tokio::fs::read_dir(&path).await {
Ok(a) => a,
Err(_e) => {
return;
}
};
tracing::error!("Parents, recurse {:?}", path);
while let Some(entry) = read_dir.next_entry().await.ok().flatten() {
let entry_path = entry.path();
cleanup_folder(entry_path, dependents_volumes.clone()).await;
}
})
}

File diff suppressed because it is too large

View File

@@ -1,228 +0,0 @@
use std::future::Future;
use std::io::SeekFrom;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll};
use std::time::Duration;
use models::{OptionExt, PackageId};
use serde::{Deserialize, Serialize};
use tokio::io::{AsyncRead, AsyncSeek, AsyncWrite};
use crate::db::model::Database;
use crate::prelude::*;
#[derive(Debug, Deserialize, Serialize, HasModel, Default)]
#[serde(rename_all = "kebab-case")]
#[model = "Model<Self>"]
pub struct InstallProgress {
pub size: Option<u64>,
pub downloaded: AtomicU64,
pub download_complete: AtomicBool,
pub validated: AtomicU64,
pub validation_complete: AtomicBool,
pub unpacked: AtomicU64,
pub unpack_complete: AtomicBool,
}
impl InstallProgress {
pub fn new(size: Option<u64>) -> Self {
InstallProgress {
size,
downloaded: AtomicU64::new(0),
download_complete: AtomicBool::new(false),
validated: AtomicU64::new(0),
validation_complete: AtomicBool::new(false),
unpacked: AtomicU64::new(0),
unpack_complete: AtomicBool::new(false),
}
}
pub fn download_complete(&self) {
self.download_complete.store(true, Ordering::SeqCst)
}
pub async fn track_download(self: Arc<Self>, db: PatchDb, id: PackageId) -> Result<(), Error> {
let update = |d: &mut Model<Database>| {
d.as_package_data_mut()
.as_idx_mut(&id)
.or_not_found(&id)?
.as_install_progress_mut()
.or_not_found("install-progress")?
.ser(&self)
};
while !self.download_complete.load(Ordering::SeqCst) {
db.mutate(&update).await?;
tokio::time::sleep(Duration::from_millis(300)).await;
}
db.mutate(&update).await
}
pub async fn track_download_during<
F: FnOnce() -> Fut,
Fut: Future<Output = Result<T, Error>>,
T,
>(
self: &Arc<Self>,
db: PatchDb,
id: &PackageId,
f: F,
) -> Result<T, Error> {
let tracker = tokio::spawn(self.clone().track_download(db.clone(), id.clone()));
let res = f().await;
self.download_complete.store(true, Ordering::SeqCst);
tracker.await.unwrap()?;
res
}
pub async fn track_read(
self: Arc<Self>,
db: PatchDb,
id: PackageId,
complete: Arc<AtomicBool>,
) -> Result<(), Error> {
let update = |d: &mut Model<Database>| {
d.as_package_data_mut()
.as_idx_mut(&id)
.or_not_found(&id)?
.as_install_progress_mut()
.or_not_found("install-progress")?
.ser(&self)
};
while !complete.load(Ordering::SeqCst) {
db.mutate(&update).await?;
tokio::time::sleep(Duration::from_millis(300)).await;
}
db.mutate(&update).await
}
pub async fn track_read_during<
F: FnOnce() -> Fut,
Fut: Future<Output = Result<T, Error>>,
T,
>(
self: &Arc<Self>,
db: PatchDb,
id: &PackageId,
f: F,
) -> Result<T, Error> {
let complete = Arc::new(AtomicBool::new(false));
let tracker = tokio::spawn(self.clone().track_read(
db.clone(),
id.clone(),
complete.clone(),
));
let res = f().await;
complete.store(true, Ordering::SeqCst);
tracker.await.unwrap()?;
res
}
}
#[pin_project::pin_project]
#[derive(Debug)]
pub struct InstallProgressTracker<RW> {
#[pin]
inner: RW,
validating: bool,
progress: Arc<InstallProgress>,
}
impl<RW> InstallProgressTracker<RW> {
pub fn new(inner: RW, progress: Arc<InstallProgress>) -> Self {
InstallProgressTracker {
inner,
validating: true,
progress,
}
}
pub fn validated(&mut self) {
self.progress
.validation_complete
.store(true, Ordering::SeqCst);
self.validating = false;
}
}
impl<W: AsyncWrite> AsyncWrite for InstallProgressTracker<W> {
fn poll_write(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &[u8],
) -> Poll<Result<usize, std::io::Error>> {
let this = self.project();
match this.inner.poll_write(cx, buf) {
Poll::Ready(Ok(n)) => {
this.progress
.downloaded
.fetch_add(n as u64, Ordering::SeqCst);
Poll::Ready(Ok(n))
}
a => a,
}
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Result<(), std::io::Error>> {
let this = self.project();
this.inner.poll_flush(cx)
}
fn poll_shutdown(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<(), std::io::Error>> {
let this = self.project();
this.inner.poll_shutdown(cx)
}
fn poll_write_vectored(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
bufs: &[std::io::IoSlice<'_>],
) -> Poll<Result<usize, std::io::Error>> {
let this = self.project();
match this.inner.poll_write_vectored(cx, bufs) {
Poll::Ready(Ok(n)) => {
this.progress
.downloaded
.fetch_add(n as u64, Ordering::SeqCst);
Poll::Ready(Ok(n))
}
a => a,
}
}
}
impl<R: AsyncRead> AsyncRead for InstallProgressTracker<R> {
fn poll_read(
self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
let this = self.project();
let prev = buf.filled().len() as u64;
match this.inner.poll_read(cx, buf) {
Poll::Ready(Ok(())) => {
if *this.validating {
&this.progress.validated
} else {
&this.progress.unpacked
}
.fetch_add(buf.filled().len() as u64 - prev, Ordering::SeqCst);
Poll::Ready(Ok(()))
}
a => a,
}
}
}
impl<R: AsyncSeek> AsyncSeek for InstallProgressTracker<R> {
fn start_seek(self: Pin<&mut Self>, position: SeekFrom) -> std::io::Result<()> {
let this = self.project();
this.inner.start_seek(position)
}
fn poll_complete(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<std::io::Result<u64>> {
let this = self.project();
match this.inner.poll_complete(cx) {
Poll::Ready(Ok(n)) => {
if *this.validating {
&this.progress.validated
} else {
&this.progress.unpacked
}
.store(n, Ordering::SeqCst);
Poll::Ready(Ok(n))
}
a => a,
}
}
}

View File

@@ -1,5 +1,6 @@
use std::collections::BTreeMap;
use models::PackageId;
use rpc_toolkit::command;
use tracing::instrument;
@@ -7,7 +8,6 @@ use crate::config::not_found;
use crate::context::RpcContext;
use crate::db::model::CurrentDependents;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::display_serializable;
use crate::util::Version;
use crate::Error;

View File

@@ -38,20 +38,20 @@ pub mod error;
pub mod firmware;
pub mod hostname;
pub mod init;
pub mod inspect;
pub mod progress;
// pub mod inspect;
pub mod install;
pub mod logs;
pub mod manager;
pub mod lxc;
pub mod middleware;
pub mod migration;
pub mod net;
pub mod notifications;
pub mod os_install;
pub mod prelude;
pub mod procedure;
pub mod properties;
pub mod registry;
pub mod s9pk;
pub mod service;
pub mod setup;
pub mod shutdown;
pub mod sound;
@@ -59,100 +59,217 @@ pub mod ssh;
pub mod status;
pub mod system;
pub mod update;
pub mod upload;
pub mod util;
pub mod version;
pub mod volume;
use std::time::SystemTime;
use clap::Parser;
pub use config::Config;
pub use error::{Error, ErrorKind, ResultExt};
use rpc_toolkit::command;
use imbl_value::Value;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{
command, from_fn, from_fn_async, from_fn_blocking, AnyContext, HandlerExt, ParentHandler,
};
use serde::{Deserialize, Serialize};
#[command(metadata(authenticated = false))]
pub fn echo(#[arg] message: String) -> Result<String, RpcError> {
use crate::context::CliContext;
use crate::util::serde::HandlerExtSerde;
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct EchoParams {
message: String,
}
pub fn echo(_: AnyContext, EchoParams { message }: EchoParams) -> Result<String, RpcError> {
Ok(message)
}
#[command(subcommands(
version::git_info,
echo,
inspect::inspect,
server,
package,
net::net,
auth::auth,
db::db,
ssh::ssh,
net::wifi::wifi,
disk::disk,
notifications::notification,
backup::backup,
registry::marketplace::marketplace,
))]
pub fn main_api() -> Result<(), RpcError> {
Ok(())
pub fn main_api() -> ParentHandler {
ParentHandler::new()
.subcommand("git-info", from_fn(version::git_info))
.subcommand(
"echo",
from_fn(echo)
.with_metadata("authenticated", Value::Bool(false))
.with_remote_cli::<CliContext>(),
)
.subcommand("init", from_fn_blocking(developer::init).no_display())
.subcommand("server", server())
.subcommand("package", package())
.subcommand("net", net::net())
.subcommand("auth", auth::auth())
.subcommand("db", db::db())
.subcommand("ssh", ssh::ssh())
.subcommand("wifi", net::wifi::wifi())
.subcommand("disk", disk::disk())
.subcommand("notification", notifications::notification())
.subcommand("backup", backup::backup())
.subcommand("marketplace", registry::marketplace::marketplace())
.subcommand("lxc", lxc::lxc())
.subcommand("s9pk", s9pk::rpc::s9pk())
}
#[command(subcommands(
system::time,
system::experimental,
system::logs,
system::kernel_logs,
system::metrics,
shutdown::shutdown,
shutdown::restart,
shutdown::rebuild,
update::update_system,
firmware::update_firmware,
))]
pub fn server() -> Result<(), RpcError> {
Ok(())
pub fn server() -> ParentHandler {
ParentHandler::new()
.subcommand(
"time",
from_fn_async(system::time)
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(system::display_time(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand("experimental", system::experimental())
.subcommand("logs", system::logs())
.subcommand("kernel-logs", system::kernel_logs())
.subcommand(
"metrics",
from_fn_async(system::metrics)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"shutdown",
from_fn_async(shutdown::shutdown)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"restart",
from_fn_async(shutdown::restart)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"rebuild",
from_fn_async(shutdown::rebuild)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"update",
from_fn_async(update::update_system)
.with_metadata("sync_db", Value::Bool(true))
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(update::display_update_result(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"update-firmware",
from_fn_async(firmware::update_firmware)
.with_custom_display_fn::<AnyContext, _>(|_handle, result| {
Ok(firmware::display_firmware_update_result(result))
})
.with_remote_cli::<CliContext>(),
)
}
#[command(subcommands(
action::action,
install::install,
install::sideload,
install::uninstall,
install::list,
config::config,
control::start,
control::stop,
control::restart,
logs::logs,
properties::properties,
dependencies::dependency,
backup::package_backup,
))]
pub fn package() -> Result<(), RpcError> {
Ok(())
pub fn package() -> ParentHandler {
ParentHandler::new()
.subcommand(
"action",
from_fn_async(action::action)
.with_display_serializable()
.with_custom_display_fn::<AnyContext, _>(|handle, result| {
Ok(action::display_action_result(handle.params, result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"install",
from_fn_async(install::install)
.with_metadata("sync_db", Value::Bool(true))
.no_cli(),
)
.subcommand("sideload", from_fn_async(install::sideload).no_cli())
.subcommand("install", from_fn_async(install::cli_install).no_display())
.subcommand(
"uninstall",
from_fn_async(install::uninstall)
.with_metadata("sync_db", Value::Bool(true))
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"list",
from_fn_async(install::list)
.with_display_serializable()
.with_remote_cli::<CliContext>(),
)
.subcommand("config", config::config())
.subcommand(
"start",
from_fn_async(control::start)
.with_metadata("sync_db", Value::Bool(true))
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"stop",
from_fn_async(control::stop)
.with_metadata("sync_db", Value::Bool(true))
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand(
"restart",
from_fn_async(control::restart)
.with_metadata("sync_db", Value::Bool(true))
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand("logs", logs::logs())
.subcommand(
"properties",
from_fn_async(properties::properties)
.with_custom_display_fn::<AnyContext, _>(|_handle, result| {
Ok(properties::display_properties(result))
})
.with_remote_cli::<CliContext>(),
)
.subcommand("dependency", dependencies::dependency())
.subcommand("package-backup", backup::backup())
.subcommand("connect", from_fn_async(service::connect_rpc).no_cli())
.subcommand(
"connect",
from_fn_async(service::connect_rpc_cli).no_display(),
)
}
#[command(subcommands(
version::git_info,
s9pk::pack,
developer::verify,
developer::init,
inspect::inspect,
registry::admin::publish,
))]
pub fn portable_api() -> Result<(), RpcError> {
Ok(())
pub fn diagnostic_api() -> ParentHandler {
ParentHandler::new()
.subcommand(
"git-info",
from_fn(version::git_info).with_metadata("authenticated", Value::Bool(false)),
)
.subcommand("echo", from_fn(echo).with_remote_cli::<CliContext>())
.subcommand("diagnostic", diagnostic::diagnostic())
}
#[command(subcommands(version::git_info, echo, diagnostic::diagnostic))]
pub fn diagnostic_api() -> Result<(), RpcError> {
Ok(())
pub fn setup_api() -> ParentHandler {
ParentHandler::new()
.subcommand(
"git-info",
from_fn(version::git_info).with_metadata("authenticated", Value::Bool(false)),
)
.subcommand("echo", from_fn(echo).with_remote_cli::<CliContext>())
.subcommand("setup", setup::setup())
}
#[command(subcommands(version::git_info, echo, setup::setup))]
pub fn setup_api() -> Result<(), RpcError> {
Ok(())
}
#[command(subcommands(version::git_info, echo, os_install::install))]
pub fn install_api() -> Result<(), RpcError> {
Ok(())
pub fn install_api() -> ParentHandler {
ParentHandler::new()
.subcommand(
"git-info",
from_fn(version::git_info).with_metadata("authenticated", Value::Bool(false)),
)
.subcommand("echo", from_fn(echo).with_remote_cli::<CliContext>())
.subcommand("install", os_install::install())
}

View File

@@ -1,36 +1,28 @@
use std::future::Future;
use std::marker::PhantomData;
use std::ops::{Deref, DerefMut};
use std::process::Stdio;
use std::time::{Duration, UNIX_EPOCH};
use axum::extract::ws::{self, WebSocket};
use chrono::{DateTime, Utc};
use clap::Parser;
use color_eyre::eyre::eyre;
use futures::stream::BoxStream;
use futures::{FutureExt, SinkExt, Stream, StreamExt, TryStreamExt};
use hyper::upgrade::Upgraded;
use hyper::Error as HyperError;
use rpc_toolkit::command;
use models::PackageId;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{command, from_fn_async, CallRemote, Empty, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::{Child, Command};
use tokio::task::JoinError;
use tokio_stream::wrappers::LinesStream;
use tokio_tungstenite::tungstenite::protocol::frame::coding::CloseCode;
use tokio_tungstenite::tungstenite::protocol::CloseFrame;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::WebSocketStream;
use tracing::instrument;
use crate::context::{CliContext, RpcContext};
use crate::core::rpc_continuations::{RequestGuid, RpcContinuation};
use crate::error::ResultExt;
use crate::procedure::docker::DockerProcedure;
use crate::s9pk::manifest::PackageId;
use crate::util::display_none;
use crate::prelude::*;
use crate::util::serde::Reversible;
use crate::{Error, ErrorKind};
#[pin_project::pin_project]
pub struct LogStream {
@@ -65,21 +57,14 @@ impl Stream for LogStream {
}
#[instrument(skip_all)]
async fn ws_handler<
WSFut: Future<Output = Result<Result<WebSocketStream<Upgraded>, HyperError>, JoinError>>,
>(
async fn ws_handler(
first_entry: Option<LogEntry>,
mut logs: LogStream,
ws_fut: WSFut,
mut stream: WebSocket,
) -> Result<(), Error> {
let mut stream = ws_fut
.await
.with_kind(crate::ErrorKind::Network)?
.with_kind(crate::ErrorKind::Unknown)?;
if let Some(first_entry) = first_entry {
stream
.send(Message::Text(
.send(ws::Message::Text(
serde_json::to_string(&first_entry).with_kind(ErrorKind::Serialization)?,
))
.await
@@ -94,7 +79,7 @@ async fn ws_handler<
if let Some(entry) = entry {
let (_, log_entry) = entry.log_entry()?;
stream
.send(Message::Text(
.send(ws::Message::Text(
serde_json::to_string(&log_entry).with_kind(ErrorKind::Serialization)?,
))
.await
@@ -104,12 +89,13 @@ async fn ws_handler<
if !ws_closed {
stream
.close(Some(CloseFrame {
code: CloseCode::Normal,
.send(ws::Message::Close(Some(ws::CloseFrame {
code: ws::close_code::NORMAL,
reason: "Log Stream Finished".into(),
}))
})))
.await
.with_kind(ErrorKind::Network)?;
drop(stream);
}
Ok(())
@@ -224,23 +210,52 @@ pub enum LogSource {
pub const SYSTEM_UNIT: &str = "startd";
#[command(
custom_cli(cli_logs(async, context(CliContext))),
subcommands(self(logs_nofollow(async)), logs_follow),
display(display_none)
)]
pub async fn logs(
#[arg] id: PackageId,
#[arg(short = 'l', long = "limit")] limit: Option<usize>,
#[arg(short = 'c', long = "cursor")] cursor: Option<String>,
#[arg(short = 'B', long = "before", default)] before: bool,
#[arg(short = 'f', long = "follow", default)] follow: bool,
) -> Result<(PackageId, Option<usize>, Option<String>, bool, bool), Error> {
Ok((id, limit, cursor, before, follow))
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct LogsParam {
id: PackageId,
#[arg(short = 'l', long = "limit")]
limit: Option<usize>,
#[arg(short = 'c', long = "cursor")]
cursor: Option<String>,
#[arg(short = 'B', long = "before")]
#[serde(default)]
before: bool,
#[arg(short = 'f', long = "follow")]
#[serde(default)]
follow: bool,
}
pub fn logs() -> ParentHandler<LogsParam> {
ParentHandler::<LogsParam>::new()
.root_handler(
from_fn_async(cli_logs)
.no_display()
.with_inherited(|params, _| params),
)
.root_handler(
from_fn_async(logs_follow)
.with_inherited(|params, _| params)
.no_cli(),
)
.subcommand(
"follow",
from_fn_async(logs_follow)
.with_inherited(|params, _| params)
.no_cli(),
)
}
pub async fn cli_logs(
ctx: CliContext,
(id, limit, cursor, before, follow): (PackageId, Option<usize>, Option<String>, bool, bool),
_: Empty,
LogsParam {
id,
limit,
cursor,
before,
follow,
}: LogsParam,
) -> Result<(), RpcError> {
if follow {
if cursor.is_some() {
@@ -262,14 +277,21 @@ pub async fn cli_logs(
}
pub async fn logs_nofollow(
_ctx: (),
(id, limit, cursor, before, _): (PackageId, Option<usize>, Option<String>, bool, bool),
_: Empty,
LogsParam {
id,
limit,
cursor,
before,
..
}: LogsParam,
) -> Result<LogResponse, Error> {
fetch_logs(LogSource::Container(id), limit, cursor, before).await
}
#[command(rpc_only, rename = "follow", display(display_none))]
pub async fn logs_follow(
#[context] ctx: RpcContext,
#[parent_data] (id, limit, _, _, _): (PackageId, Option<usize>, Option<String>, bool, bool),
ctx: RpcContext,
_: Empty,
LogsParam { id, limit, .. }: LogsParam,
) -> Result<LogFollowResponse, Error> {
follow_logs(ctx, LogSource::Container(id), limit).await
}
@@ -282,19 +304,18 @@ pub async fn cli_logs_generic_nofollow(
cursor: Option<String>,
before: bool,
) -> Result<(), RpcError> {
let res = rpc_toolkit::command_helpers::call_remote(
ctx.clone(),
method,
serde_json::json!({
"id": id,
"limit": limit,
"cursor": cursor,
"before": before,
}),
PhantomData::<LogResponse>,
)
.await?
.result?;
let res = from_value::<LogResponse>(
ctx.call_remote(
method,
imbl_value::json!({
"id": id,
"limit": limit,
"cursor": cursor,
"before": before,
}),
)
.await?,
)?;
for entry in res.entries.iter() {
println!("{}", entry);
@@ -309,36 +330,18 @@ pub async fn cli_logs_generic_follow(
id: Option<PackageId>,
limit: Option<usize>,
) -> Result<(), RpcError> {
let res = rpc_toolkit::command_helpers::call_remote(
ctx.clone(),
method,
serde_json::json!({
"id": id,
"limit": limit,
}),
PhantomData::<LogFollowResponse>,
)
.await?
.result?;
let res = from_value::<LogFollowResponse>(
ctx.call_remote(
method,
imbl_value::json!({
"id": id,
"limit": limit,
}),
)
.await?,
)?;
let mut base_url = ctx.base_url.clone();
let ws_scheme = match base_url.scheme() {
"https" => "wss",
"http" => "ws",
_ => {
return Err(Error::new(
eyre!("Cannot parse scheme from base URL"),
crate::ErrorKind::ParseUrl,
)
.into())
}
};
base_url
.set_scheme(ws_scheme)
.map_err(|_| Error::new(eyre!("Cannot set URL scheme"), crate::ErrorKind::ParseUrl))?;
let (mut stream, _) =
// base_url is "http://127.0.0.1/", with a trailing slash, so we don't put a leading slash in this path:
tokio_tungstenite::connect_async(format!("{}ws/rpc/{}", base_url, res.guid)).await?;
let mut stream = ctx.ws_continuation(res.guid).await?;
while let Some(log) = stream.try_next().await? {
if let Message::Text(log) = log {
println!("{}", serde_json::from_str::<LogEntry>(&log)?);
@@ -376,15 +379,9 @@ pub async fn journalctl(
}
LogSource::Container(id) => {
#[cfg(not(feature = "docker"))]
cmd.arg(format!(
"SYSLOG_IDENTIFIER={}",
DockerProcedure::container_name(&id, None)
));
cmd.arg(format!("SYSLOG_IDENTIFIER={}.embassy", id));
#[cfg(feature = "docker")]
cmd.arg(format!(
"CONTAINER_NAME={}",
DockerProcedure::container_name(&id, None)
));
cmd.arg(format!("CONTAINER_NAME={}.embassy", id));
}
};
@@ -498,7 +495,16 @@ pub async fn follow_logs(
ctx.add_continuation(
guid.clone(),
RpcContinuation::ws(
Box::new(move |ws_fut| ws_handler(first_entry, stream, ws_fut).boxed()),
Box::new(move |socket| {
ws_handler(first_entry, stream, socket)
.map(|x| match x {
Ok(_) => (),
Err(e) => {
tracing::error!("Error in log stream: {}", e);
}
})
.boxed()
}),
Duration::from_secs(30),
),
)

View File

@@ -0,0 +1,19 @@
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.include = /usr/share/lxc/config/userns.conf
lxc.arch = linux64
# Container specific configuration
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
lxc.rootfs.path = dir:/var/lib/lxc/{guid}/rootfs
lxc.uts.name = {guid}
# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.rootfs.options = rshared

core/startos/src/lxc/mod.rs (new file, 536 lines)
View File

@@ -0,0 +1,536 @@
use std::collections::BTreeSet;
use std::ops::Deref;
use std::path::Path;
use std::sync::{Arc, Weak};
use std::time::Duration;
use clap::Parser;
use futures::{AsyncWriteExt, FutureExt, StreamExt};
use imbl_value::{InOMap, InternedString};
use rpc_toolkit::yajrc::{RpcError, RpcResponse};
use rpc_toolkit::{
from_fn_async, AnyContext, CallRemoteHandler, GenericRpcMethod, Handler, HandlerArgs,
HandlerExt, ParentHandler, RpcRequest,
};
use rustyline_async::{ReadlineEvent, SharedWriter};
use serde::{Deserialize, Serialize};
use tokio::fs::File;
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::Command;
use tokio::sync::Mutex;
use tokio::time::Instant;
use crate::context::{CliContext, RpcContext};
use crate::core::rpc_continuations::{RequestGuid, RpcContinuation};
use crate::disk::mount::filesystem::bind::Bind;
use crate::disk::mount::filesystem::block_dev::BlockDev;
use crate::disk::mount::filesystem::idmapped::IdMapped;
use crate::disk::mount::filesystem::overlayfs::OverlayGuard;
use crate::disk::mount::filesystem::ReadWrite;
use crate::disk::mount::guard::{GenericMountGuard, TmpMountGuard};
use crate::disk::mount::util::unmount;
use crate::prelude::*;
use crate::util::rpc_client::UnixRpcClient;
use crate::util::{new_guid, Invoke};
const LXC_CONTAINER_DIR: &str = "/var/lib/lxc";
const RPC_DIR: &str = "media/startos/rpc"; // must not be absolute path
pub const CONTAINER_RPC_SERVER_SOCKET: &str = "service.sock"; // must not be absolute path
pub const HOST_RPC_SERVER_SOCKET: &str = "host.sock"; // must not be absolute path
pub struct LxcManager {
containers: Mutex<Vec<Weak<InternedString>>>,
}
impl LxcManager {
pub fn new() -> Self {
Self {
containers: Default::default(),
}
}
pub async fn create(self: &Arc<Self>, config: LxcConfig) -> Result<LxcContainer, Error> {
let container = LxcContainer::new(self, config).await?;
let mut guard = self.containers.lock().await;
*guard = std::mem::take(&mut *guard)
.into_iter()
.filter(|g| g.strong_count() > 0)
.chain(std::iter::once(Arc::downgrade(&container.guid)))
.collect();
Ok(container)
}
pub async fn gc(&self) -> Result<(), Error> {
let expected = BTreeSet::from_iter(
self.containers
.lock()
.await
.iter()
.filter_map(|g| g.upgrade())
.map(|g| (&*g).clone()),
);
for container in String::from_utf8(
Command::new("lxc-ls")
.arg("-1")
.invoke(ErrorKind::Lxc)
.await?,
)?
.lines()
.map(|s| s.trim())
{
if !expected.contains(container) {
let rootfs_path = Path::new(LXC_CONTAINER_DIR).join(container).join("rootfs");
if tokio::fs::metadata(&rootfs_path).await.is_ok() {
unmount(Path::new(LXC_CONTAINER_DIR).join(container).join("rootfs")).await?;
if tokio_stream::wrappers::ReadDirStream::new(
tokio::fs::read_dir(&rootfs_path).await?,
)
.count()
.await
> 0
{
return Err(Error::new(
eyre!("rootfs is not empty, refusing to delete"),
ErrorKind::InvalidRequest,
));
}
}
Command::new("lxc-destroy")
.arg("--force")
.arg("--name")
.arg(container)
.invoke(ErrorKind::Lxc)
.await?;
}
}
Ok(())
}
}
pub struct LxcContainer {
manager: Weak<LxcManager>,
rootfs: OverlayGuard,
guid: Arc<InternedString>,
rpc_bind: TmpMountGuard,
config: LxcConfig,
exited: bool,
}
impl LxcContainer {
async fn new(manager: &Arc<LxcManager>, config: LxcConfig) -> Result<Self, Error> {
let guid = new_guid();
let container_dir = Path::new(LXC_CONTAINER_DIR).join(&*guid);
tokio::fs::create_dir_all(&container_dir).await?;
tokio::fs::write(
container_dir.join("config"),
format!(include_str!("./config.template"), guid = &*guid),
)
.await?;
// TODO: append config
let rootfs_dir = container_dir.join("rootfs");
tokio::fs::create_dir_all(&rootfs_dir).await?;
Command::new("chown")
.arg("100000:100000")
.arg(&rootfs_dir)
.invoke(ErrorKind::Filesystem)
.await?;
let rootfs = OverlayGuard::mount(
&IdMapped::new(
BlockDev::new("/usr/lib/startos/container-runtime/rootfs.squashfs"),
0,
100000,
65536,
),
&rootfs_dir,
)
.await?;
tokio::fs::write(rootfs_dir.join("etc/hostname"), format!("{guid}\n")).await?;
Command::new("sed")
.arg("-i")
.arg(format!("s/LXC_NAME/{guid}/g"))
.arg(rootfs_dir.join("etc/hosts"))
.invoke(ErrorKind::Filesystem)
.await?;
Command::new("mount")
.arg("--make-rshared")
.arg(rootfs.path())
.invoke(ErrorKind::Filesystem)
.await?;
let rpc_dir = rootfs_dir.join(RPC_DIR);
tokio::fs::create_dir_all(&rpc_dir).await?;
let rpc_bind = TmpMountGuard::mount(&Bind::new(rpc_dir), ReadWrite).await?;
Command::new("chown")
.arg("-R")
.arg("100000:100000")
.arg(rpc_bind.path())
.invoke(ErrorKind::Filesystem)
.await?;
Command::new("lxc-start")
.arg("-d")
.arg("--name")
.arg(&*guid)
.invoke(ErrorKind::Lxc)
.await?;
Ok(Self {
manager: Arc::downgrade(manager),
rootfs,
guid: Arc::new(guid),
rpc_bind,
config,
exited: false,
})
}
pub fn rootfs_dir(&self) -> &Path {
self.rootfs.path()
}
pub fn rpc_dir(&self) -> &Path {
self.rpc_bind.path()
}
#[instrument(skip_all)]
pub async fn exit(mut self) -> Result<(), Error> {
self.rpc_bind.take().unmount().await?;
self.rootfs.take().unmount(true).await?;
let rootfs_path = self.rootfs_dir();
let err_path = rootfs_path.join("var/log/containerRuntime.err");
if tokio::fs::metadata(&err_path).await.is_ok() {
let mut lines = BufReader::new(File::open(&err_path).await?).lines();
while let Some(line) = lines.next_line().await? {
let container = &**self.guid;
tracing::error!(container, "{}", line);
}
}
if tokio::fs::metadata(&rootfs_path).await.is_ok() {
if tokio_stream::wrappers::ReadDirStream::new(tokio::fs::read_dir(&rootfs_path).await?)
.count()
.await
> 0
{
return Err(Error::new(
eyre!("rootfs is not empty, refusing to delete"),
ErrorKind::InvalidRequest,
));
}
}
Command::new("lxc-destroy")
.arg("--force")
.arg("--name")
.arg(&**self.guid)
.invoke(ErrorKind::Lxc)
.await?;
self.exited = true;
Ok(())
}
pub async fn connect_rpc(&self, timeout: Option<Duration>) -> Result<UnixRpcClient, Error> {
let started = Instant::now();
let sock_path = self.rpc_dir().join(CONTAINER_RPC_SERVER_SOCKET);
while tokio::fs::metadata(&sock_path).await.is_err() {
if timeout.map_or(false, |t| started.elapsed() > t) {
return Err(Error::new(
eyre!("timed out waiting for socket"),
ErrorKind::Timeout,
));
}
tokio::time::sleep(Duration::from_millis(100)).await;
}
Ok(UnixRpcClient::new(sock_path))
}
}
impl Drop for LxcContainer {
fn drop(&mut self) {
if !self.exited {
tracing::warn!(
"Container {} was ungracefully dropped. Cleaning up dangling containers...",
&**self.guid
);
let rootfs = self.rootfs.take();
let guid = std::mem::take(&mut self.guid);
if let Some(manager) = self.manager.upgrade() {
tokio::spawn(async move {
if let Err(e) = async {
let err_path = rootfs.path().join("var/log/containerRuntime.err");
if tokio::fs::metadata(&err_path).await.is_ok() {
let mut lines = BufReader::new(File::open(&err_path).await?).lines();
while let Some(line) = lines.next_line().await? {
let container = &**guid;
tracing::error!(container, "{}", line);
}
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("Error reading logs from crashed container: {e}");
tracing::debug!("{e:?}")
}
rootfs.unmount(true).await.unwrap();
drop(guid);
if let Err(e) = manager.gc().await {
tracing::error!("Error cleaning up dangling LXC containers: {e}");
tracing::debug!("{e:?}")
} else {
tracing::info!("Successfully cleaned up dangling LXC containers");
}
});
}
}
}
}
#[derive(Default, Serialize)]
pub struct LxcConfig {}
pub fn lxc() -> ParentHandler {
ParentHandler::new()
.subcommand(
"create",
from_fn_async(create).with_remote_cli::<CliContext>(),
)
.subcommand(
"list",
from_fn_async(list)
.with_custom_display_fn::<AnyContext, _>(|_, res| {
use prettytable::*;
let mut table = table!([bc => "GUID"]);
for guid in res {
table.add_row(row![&*guid]);
}
table.printstd();
Ok(())
})
.with_remote_cli::<CliContext>(),
)
.subcommand(
"remove",
from_fn_async(remove)
.no_display()
.with_remote_cli::<CliContext>(),
)
.subcommand("connect", from_fn_async(connect_rpc).no_cli())
.subcommand("connect", from_fn_async(connect_rpc_cli).no_display())
}
pub async fn create(ctx: RpcContext) -> Result<InternedString, Error> {
let container = ctx.lxc_manager.create(LxcConfig::default()).await?;
let guid = container.guid.deref().clone();
ctx.dev.lxc.lock().await.insert(guid.clone(), container);
Ok(guid)
}
pub async fn list(ctx: RpcContext) -> Result<Vec<InternedString>, Error> {
Ok(ctx.dev.lxc.lock().await.keys().cloned().collect())
}
#[derive(Deserialize, Serialize, Parser)]
pub struct RemoveParams {
pub guid: InternedString,
}
pub async fn remove(ctx: RpcContext, RemoveParams { guid }: RemoveParams) -> Result<(), Error> {
if let Some(container) = ctx.dev.lxc.lock().await.remove(&guid) {
container.exit().await?;
}
Ok(())
}
#[derive(Deserialize, Serialize, Parser)]
pub struct ConnectParams {
pub guid: InternedString,
}
pub async fn connect_rpc(
ctx: RpcContext,
ConnectParams { guid }: ConnectParams,
) -> Result<RequestGuid, Error> {
connect(
&ctx,
ctx.dev.lxc.lock().await.get(&guid).ok_or_else(|| {
Error::new(eyre!("No container with guid: {guid}"), ErrorKind::NotFound)
})?,
)
.await
}
pub async fn connect(ctx: &RpcContext, container: &LxcContainer) -> Result<RequestGuid, Error> {
use axum::extract::ws::Message;
let rpc = container.connect_rpc(Some(Duration::from_secs(30))).await?;
let guid = RequestGuid::new();
ctx.add_continuation(
guid.clone(),
RpcContinuation::ws(
Box::new(|mut ws| {
async move {
if let Err(e) = async {
loop {
match ws.next().await {
None => break,
Some(Ok(Message::Text(txt))) => {
let mut id = None;
let result = async {
let req: RpcRequest =
serde_json::from_str(&txt).map_err(|e| RpcError {
data: Some(serde_json::Value::String(
e.to_string(),
)),
..rpc_toolkit::yajrc::PARSE_ERROR
})?;
id = req.id;
rpc.request(req.method, req.params).await
}
.await;
ws.send(Message::Text(
serde_json::to_string(&RpcResponse::<GenericRpcMethod> {
id,
result,
})
.with_kind(ErrorKind::Serialization)?,
))
.await
.with_kind(ErrorKind::Network)?;
}
Some(Ok(_)) => (),
Some(Err(e)) => {
return Err(Error::new(e, ErrorKind::Network));
}
}
}
Ok::<_, Error>(())
}
.await
{
tracing::error!("{e}");
tracing::debug!("{e:?}");
}
}
.boxed()
}),
Duration::from_secs(30),
),
)
.await;
Ok(guid)
}
pub async fn connect_cli(ctx: &CliContext, guid: RequestGuid) -> Result<(), Error> {
use futures::SinkExt;
use tokio_tungstenite::tungstenite::Message;
let mut ws = ctx.ws_continuation(guid).await?;
let (mut input, mut output) =
rustyline_async::Readline::new("> ".into()).with_kind(ErrorKind::Filesystem)?;
async fn handle_message(
msg: Option<Result<Message, tokio_tungstenite::tungstenite::Error>>,
output: &mut SharedWriter,
) -> Result<bool, Error> {
match msg {
None => return Ok(true),
Some(Ok(Message::Text(txt))) => match serde_json::from_str::<RpcResponse>(&txt) {
Ok(RpcResponse { result: Ok(a), .. }) => {
output
.write_all(
(serde_json::to_string(&a).with_kind(ErrorKind::Serialization)? + "\n")
.as_bytes(),
)
.await?;
}
Ok(RpcResponse { result: Err(e), .. }) => {
let e: Error = e.into();
tracing::error!("{e}");
tracing::debug!("{e:?}");
}
Err(e) => {
tracing::error!("Error Parsing RPC response: {e}");
tracing::debug!("{e:?}");
}
},
Some(Ok(_)) => (),
Some(Err(e)) => {
return Err(Error::new(e, ErrorKind::Network));
}
};
Ok(false)
}
loop {
tokio::select! {
line = input.readline() => {
let line = line.with_kind(ErrorKind::Filesystem)?;
if let ReadlineEvent::Line(line) = line {
input.add_history_entry(line.clone());
if serde_json::from_str::<RpcRequest>(&line).is_ok() {
ws.send(Message::Text(line))
.await
.with_kind(ErrorKind::Network)?;
} else {
match shell_words::split(&line) {
Ok(command) => {
if let Some((method, rest)) = command.split_first() {
let mut params = InOMap::new();
for arg in rest {
if let Some((name, value)) = arg.split_once("=") {
params.insert(InternedString::intern(name), if value.is_empty() {
Value::Null
} else if let Ok(v) = serde_json::from_str(value) {
v
} else {
Value::String(Arc::new(value.into()))
});
} else {
tracing::error!("argument without a value: {arg}");
tracing::debug!("help: set the value of {arg} with `{arg}=...`");
continue;
}
}
ws.send(Message::Text(match serde_json::to_string(&RpcRequest {
id: None,
method: GenericRpcMethod::new(method.into()),
params: Value::Object(params),
}) {
Ok(a) => a,
Err(e) => {
tracing::error!("Error Serializing Request: {e}");
tracing::debug!("{e:?}");
continue;
}
})).await.with_kind(ErrorKind::Network)?;
if handle_message(ws.next().await, &mut output).await? {
break
}
}
}
Err(e) => {
tracing::error!("{e}");
tracing::debug!("{e:?}");
}
}
}
} else {
ws.send(Message::Close(None)).await.with_kind(ErrorKind::Network)?;
}
}
msg = ws.next() => {
if handle_message(msg, &mut output).await? {
break;
}
}
}
}
Ok(())
}
pub async fn connect_rpc_cli(
handle_args: HandlerArgs<CliContext, ConnectParams>,
) -> Result<(), Error> {
let ctx = handle_args.context.clone();
let guid = CallRemoteHandler::<CliContext, _>::new(from_fn_async(connect_rpc))
.handle_async(handle_args)
.await?;
connect_cli(&ctx, guid).await
}
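A rough end-to-end sketch of the new LXC APIs (assumes an RpcContext in scope; the "echo" method and its params are hypothetical, and the request call mirrors the usage in connect() above): create a container, connect to its RPC socket, send one request, then tear it down.
async fn demo(ctx: &RpcContext) -> Result<(), Error> {
    // create a container backed by the squashfs overlay and start it
    let container = ctx.lxc_manager.create(LxcConfig::default()).await?;
    // wait up to 30s for the container runtime to expose its RPC socket
    let client = container.connect_rpc(Some(Duration::from_secs(30))).await?;
    // fire a single request; errors are ignored in this sketch
    let _ = client
        .request(
            GenericRpcMethod::new("echo".into()),
            imbl_value::json!({ "message": "hi" }),
        )
        .await;
    // tear down: unmount the overlay and destroy the LXC container
    container.exit().await
}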

View File

@@ -1,56 +0,0 @@
use models::OptionExt;
use tracing::instrument;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::s9pk::manifest::PackageId;
use crate::status::MainStatus;
use crate::Error;
/// Runs a health check cycle for a service: executes the health checks and stores the results in the db
#[instrument(skip_all)]
pub async fn check(ctx: &RpcContext, id: &PackageId) -> Result<(), Error> {
let (manifest, started) = {
let peeked = ctx.db.peek().await;
let pde = peeked
.as_package_data()
.as_idx(id)
.or_not_found(id)?
.expect_as_installed()?;
let manifest = pde.as_installed().as_manifest().de()?;
let started = pde.as_installed().as_status().as_main().de()?.started();
(manifest, started)
};
let health_results = if let Some(started) = started {
tracing::debug!("Checking health of {}", id);
manifest
.health_checks
.check_all(ctx, started, id, &manifest.version, &manifest.volumes)
.await?
} else {
return Ok(());
};
ctx.db
.mutate(|v| {
let pde = v
.as_package_data_mut()
.as_idx_mut(id)
.or_not_found(id)?
.expect_as_installed_mut()?;
let status = pde.as_installed_mut().as_status_mut().as_main_mut();
if let MainStatus::Running { health: _, started } = status.de()? {
status.ser(&MainStatus::Running {
health: health_results.clone(),
started,
})?;
}
Ok(())
})
.await
}

View File

@@ -1,282 +0,0 @@
use std::sync::Arc;
use std::time::Duration;
use models::OptionExt;
use tokio::sync::watch;
use tokio::sync::watch::Sender;
use tracing::instrument;
use super::start_stop::StartStop;
use super::{manager_seed, run_main, ManagerPersistentContainer, RunMainResult};
use crate::prelude::*;
use crate::procedure::NoOutput;
use crate::s9pk::manifest::Manifest;
use crate::status::MainStatus;
use crate::util::NonDetachingJoinHandle;
use crate::Error;
pub type ManageContainerOverride = Arc<watch::Sender<Option<Override>>>;
pub type Override = MainStatus;
pub struct OverrideGuard {
override_main_status: Option<ManageContainerOverride>,
}
impl OverrideGuard {
pub fn drop(self) {}
}
impl Drop for OverrideGuard {
fn drop(&mut self) {
if let Some(override_main_status) = self.override_main_status.take() {
override_main_status.send_modify(|x| {
*x = None;
});
}
}
}
/// The state machine actor for a service, tracking its current and desired running states.
pub struct ManageContainer {
pub(super) current_state: Arc<watch::Sender<StartStop>>,
pub(super) desired_state: Arc<watch::Sender<StartStop>>,
_service: NonDetachingJoinHandle<()>,
_save_state: NonDetachingJoinHandle<()>,
override_main_status: ManageContainerOverride,
}
impl ManageContainer {
pub async fn new(
seed: Arc<manager_seed::ManagerSeed>,
persistent_container: ManagerPersistentContainer,
) -> Result<Self, Error> {
let current_state = Arc::new(watch::channel(StartStop::Stop).0);
let desired_state = Arc::new(
watch::channel::<StartStop>(
get_status(seed.ctx.db.peek().await, &seed.manifest).into(),
)
.0,
);
let override_main_status: ManageContainerOverride = Arc::new(watch::channel(None).0);
let service = tokio::spawn(create_service_manager(
desired_state.clone(),
seed.clone(),
current_state.clone(),
persistent_container,
))
.into();
let save_state = tokio::spawn(save_state(
desired_state.clone(),
current_state.clone(),
override_main_status.clone(),
seed.clone(),
))
.into();
Ok(ManageContainer {
current_state,
desired_state,
_service: service,
override_main_status,
_save_state: save_state,
})
}
/// Set an override, used during operations like a restart of a service, when we want to report a status different
/// from the actual status of the service.
pub fn set_override(&self, override_status: Override) -> Result<OverrideGuard, Error> {
let status = Some(override_status);
if self.override_main_status.borrow().is_some() {
return Err(Error::new(
eyre!("Already have an override"),
ErrorKind::InvalidRequest,
));
}
self.override_main_status
.send_modify(|x| *x = status.clone());
Ok(OverrideGuard {
override_main_status: Some(self.override_main_status.clone()),
})
}
/// Set the override without a guard to revert it. Used only by the manager to perform a shutdown.
pub(super) async fn lock_state_forever(
&self,
seed: &manager_seed::ManagerSeed,
) -> Result<(), Error> {
let current_state = get_status(seed.ctx.db.peek().await, &seed.manifest);
self.override_main_status
.send_modify(|x| *x = Some(current_state));
Ok(())
}
/// Set the desired state of the service, e.g. to start or stop it
pub fn to_desired(&self, new_state: StartStop) {
self.desired_state.send_modify(|x| *x = new_state);
}
/// Sets the desired state and waits for the service to reach it.
pub async fn wait_for_desired(&self, new_state: StartStop) {
let mut current_state = self.current_state();
self.to_desired(new_state);
while *current_state.borrow() != new_state {
current_state.changed().await.unwrap_or_default();
}
}
/// Getter
pub fn current_state(&self) -> watch::Receiver<StartStop> {
self.current_state.subscribe()
}
/// Getter
pub fn desired_state(&self) -> watch::Receiver<StartStop> {
self.desired_state.subscribe()
}
}
async fn create_service_manager(
desired_state: Arc<Sender<StartStop>>,
seed: Arc<manager_seed::ManagerSeed>,
current_state: Arc<Sender<StartStop>>,
persistent_container: Arc<super::persistent_container::PersistentContainer>,
) {
let mut desired_state_receiver = desired_state.subscribe();
let mut running_service: Option<NonDetachingJoinHandle<()>> = None;
let seed = seed.clone();
loop {
let current: StartStop = *current_state.borrow();
let desired: StartStop = *desired_state_receiver.borrow();
match (current, desired) {
(StartStop::Start, StartStop::Start) => (),
(StartStop::Start, StartStop::Stop) => {
if let Err(err) = seed.stop_container().await {
tracing::error!("Could not stop container");
tracing::debug!("{:?}", err)
}
running_service = None;
current_state.send_modify(|x| *x = StartStop::Stop);
}
(StartStop::Stop, StartStop::Start) => starting_service(
current_state.clone(),
desired_state.clone(),
seed.clone(),
persistent_container.clone(),
&mut running_service,
),
(StartStop::Stop, StartStop::Stop) => (),
}
if desired_state_receiver.changed().await.is_err() {
tracing::error!("Desired state error");
break;
}
}
}
async fn save_state(
desired_state: Arc<Sender<StartStop>>,
current_state: Arc<Sender<StartStop>>,
override_main_status: ManageContainerOverride,
seed: Arc<manager_seed::ManagerSeed>,
) {
let mut desired_state_receiver = desired_state.subscribe();
let mut current_state_receiver = current_state.subscribe();
let mut override_main_status_receiver = override_main_status.subscribe();
loop {
let current: StartStop = *current_state_receiver.borrow();
let desired: StartStop = *desired_state_receiver.borrow();
let override_status = override_main_status_receiver.borrow().clone();
let status = match (override_status.clone(), current, desired) {
(Some(status), _, _) => status,
(_, StartStop::Start, StartStop::Start) => MainStatus::Running {
started: chrono::Utc::now(),
health: Default::default(),
},
(_, StartStop::Start, StartStop::Stop) => MainStatus::Stopping,
(_, StartStop::Stop, StartStop::Start) => MainStatus::Starting,
(_, StartStop::Stop, StartStop::Stop) => MainStatus::Stopped,
};
let manifest = &seed.manifest;
if let Err(err) = seed
.ctx
.db
.mutate(|db| set_status(db, manifest, &status))
.await
{
tracing::error!("Did not set status for {}", seed.container_name);
tracing::debug!("{:?}", err);
}
tokio::select! {
_ = desired_state_receiver.changed() =>{},
_ = current_state_receiver.changed() => {},
_ = override_main_status_receiver.changed() => {}
}
}
}
fn starting_service(
current_state: Arc<Sender<StartStop>>,
desired_state: Arc<Sender<StartStop>>,
seed: Arc<manager_seed::ManagerSeed>,
persistent_container: ManagerPersistentContainer,
running_service: &mut Option<NonDetachingJoinHandle<()>>,
) {
let set_stopped = { move || current_state.send_modify(|x| *x = StartStop::Stop) };
let running_main_loop = async move {
while desired_state.borrow().is_start() {
let result = persistent_container
.execute(models::ProcedureName::Main, Value::Null, None)
.await;
run_main(seed.clone()).await;
set_stopped();
run_main_log_result(result, seed.clone()).await;
}
};
*running_service = Some(tokio::spawn(running_main_loop).into());
}
async fn run_main_log_result(result: RunMainResult, seed: Arc<manager_seed::ManagerSeed>) {
match result {
Ok(Ok(NoOutput)) => (), // restart
Ok(Err(e)) => {
tracing::error!(
"The service {} has crashed with the following exit code: {}",
seed.manifest.id.clone(),
e.0
);
tokio::time::sleep(Duration::from_secs(15)).await;
}
Err(e) => {
tracing::error!("failed to start service: {}", e);
tracing::debug!("{:?}", e);
}
}
}
/// Used only in the module where we perform a backup
#[instrument(skip(db, manifest))]
pub(super) fn get_status(db: Peeked, manifest: &Manifest) -> MainStatus {
db.as_package_data()
.as_idx(&manifest.id)
.and_then(|x| x.as_installed())
.filter(|x| x.as_manifest().as_version().de().ok() == Some(manifest.version.clone()))
.and_then(|x| x.as_status().as_main().de().ok())
.unwrap_or(MainStatus::Stopped)
}
#[instrument(skip(db, manifest))]
fn set_status(db: &mut Peeked, manifest: &Manifest, main_status: &MainStatus) -> Result<(), Error> {
let Some(installed) = db
.as_package_data_mut()
.as_idx_mut(&manifest.id)
.or_not_found(&manifest.id)?
.as_installed_mut()
else {
return Ok(());
};
installed.as_status_mut().as_main_mut().ser(main_status)
}
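
A minimal sketch of the handshake implemented above, under the assumption that it reduces to tokio watch channels: to_desired writes the desired state, wait_for_desired blocks until the current-state channel catches up, and save_state maps the (current, desired) pair onto a MainStatus. The enums and the one-line "manager" task below are local stand-ins for illustration, not the crate's types.

use tokio::sync::watch;

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum StartStop { Start, Stop }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum MainStatus { Starting, Running, Stopping, Stopped }

// Mirrors the (current, desired) -> status table in save_state (ignoring the override case).
fn status_for(current: StartStop, desired: StartStop) -> MainStatus {
    match (current, desired) {
        (StartStop::Start, StartStop::Start) => MainStatus::Running,
        (StartStop::Start, StartStop::Stop) => MainStatus::Stopping,
        (StartStop::Stop, StartStop::Start) => MainStatus::Starting,
        (StartStop::Stop, StartStop::Stop) => MainStatus::Stopped,
    }
}

#[tokio::main]
async fn main() {
    let (current_tx, current_rx) = watch::channel(StartStop::Stop);
    let (desired_tx, mut desired_rx) = watch::channel(StartStop::Stop);

    // Stand-in manager task: drives current toward desired.
    tokio::spawn(async move {
        loop {
            let want = *desired_rx.borrow();
            current_tx.send_modify(|cur| *cur = want);
            if desired_rx.changed().await.is_err() {
                break;
            }
        }
    });

    // Caller side, i.e. wait_for_desired: set desired, then wait for current to match.
    let mut current = current_rx.clone();
    desired_tx.send_modify(|x| *x = StartStop::Start);
    while *current.borrow() != StartStop::Start {
        current.changed().await.unwrap();
    }
    assert_eq!(status_for(*current.borrow(), StartStop::Start), MainStatus::Running);
}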


@@ -1,96 +0,0 @@
use std::collections::BTreeMap;
use std::sync::Arc;
use color_eyre::eyre::eyre;
use tokio::sync::RwLock;
use tracing::instrument;
use super::Manager;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::util::Version;
use crate::Error;
/// This is the structure to contain all the service managers
#[derive(Default)]
pub struct ManagerMap(RwLock<BTreeMap<(PackageId, Version), Arc<Manager>>>);
impl ManagerMap {
#[instrument(skip_all)]
pub async fn init(&self, ctx: RpcContext, peeked: Peeked) -> Result<(), Error> {
let mut res = BTreeMap::new();
for package in peeked.as_package_data().keys()? {
let man: Manifest = if let Some(manifest) = peeked
.as_package_data()
.as_idx(&package)
.and_then(|x| x.as_installed())
.map(|x| x.as_manifest().de())
{
manifest?
} else {
continue;
};
res.insert(
(package, man.version.clone()),
Arc::new(Manager::new(ctx.clone(), man).await?),
);
}
*self.0.write().await = res;
Ok(())
}
/// Used during the install process
#[instrument(skip_all)]
pub async fn add(&self, ctx: RpcContext, manifest: Manifest) -> Result<Arc<Manager>, Error> {
let mut lock = self.0.write().await;
let id = (manifest.id.clone(), manifest.version.clone());
if let Some(man) = lock.remove(&id) {
man.exit().await;
}
let manager = Arc::new(Manager::new(ctx.clone(), manifest).await?);
lock.insert(id, manager.clone());
Ok(manager)
}
/// This is run during cleanup, i.e. when we are uninstalling the service
#[instrument(skip_all)]
pub async fn remove(&self, id: &(PackageId, Version)) {
if let Some(man) = self.0.write().await.remove(id) {
man.exit().await;
}
}
/// Used during a shutdown
#[instrument(skip_all)]
pub async fn empty(&self) -> Result<(), Error> {
let res =
futures::future::join_all(std::mem::take(&mut *self.0.write().await).into_iter().map(
|((id, version), man)| async move {
tracing::debug!("Manager for {}@{} shutting down", id, version);
man.shutdown().await?;
tracing::debug!("Manager for {}@{} is shutdown", id, version);
if let Err(e) = Arc::try_unwrap(man) {
tracing::trace!(
"Manager for {}@{} still has {} other open references",
id,
version,
Arc::strong_count(&e) - 1
);
}
Ok::<_, Error>(())
},
))
.await;
res.into_iter().fold(Ok(()), |res, x| match (res, x) {
(Ok(()), x) => x,
(Err(e), Ok(())) => Err(e),
(Err(e1), Err(e2)) => Err(Error::new(eyre!("{}, {}", e1.source, e2.source), e1.kind)),
})
}
#[instrument(skip_all)]
pub async fn get(&self, id: &(PackageId, Version)) -> Option<Arc<Manager>> {
self.0.read().await.get(id).cloned()
}
}
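
The fold at the end of empty() collapses the per-manager shutdown results into a single Result, concatenating error text along the way. The same reduction over plain String errors (illustrative types only, not the crate's Error):

fn fold_errors(results: Vec<Result<(), String>>) -> Result<(), String> {
    results.into_iter().fold(Ok(()), |acc, next| match (acc, next) {
        (Ok(()), x) => x,
        (Err(e), Ok(())) => Err(e),
        (Err(e1), Err(e2)) => Err(format!("{}, {}", e1, e2)),
    })
}

fn main() {
    assert_eq!(fold_errors(vec![Ok(()), Ok(())]), Ok(()));
    assert_eq!(
        fold_errors(vec![Err("a failed".into()), Ok(()), Err("b failed".into())]),
        Err("a failed, b failed".to_string())
    );
}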


@@ -1,37 +0,0 @@
use models::ErrorKind;
use crate::context::RpcContext;
use crate::procedure::docker::DockerProcedure;
use crate::procedure::PackageProcedure;
use crate::s9pk::manifest::Manifest;
use crate::util::docker::stop_container;
use crate::Error;
/// A helper structure for a service: the seed of the data needed by the manager_container
pub struct ManagerSeed {
pub ctx: RpcContext,
pub manifest: Manifest,
pub container_name: String,
}
impl ManagerSeed {
pub async fn stop_container(&self) -> Result<(), Error> {
match stop_container(
&self.container_name,
match &self.manifest.main {
PackageProcedure::Docker(DockerProcedure {
sigterm_timeout: Some(sigterm_timeout),
..
}) => Some(**sigterm_timeout),
_ => None,
},
None,
)
.await
{
Err(e) if e.kind == ErrorKind::NotFound => (), // Already stopped
a => a?,
}
Ok(())
}
}
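
stop_container above treats a NotFound error as success, since a container that is already gone is an acceptable outcome for "stop". The same pattern in isolation, with std::io errors standing in for the crate's Error and ErrorKind:

use std::io::{Error, ErrorKind};

// Stand-in for the underlying stop call.
fn stop(already_gone: bool) -> Result<(), Error> {
    if already_gone {
        Err(Error::new(ErrorKind::NotFound, "no such container"))
    } else {
        Ok(())
    }
}

fn stop_idempotent(already_gone: bool) -> Result<(), Error> {
    match stop(already_gone) {
        Err(e) if e.kind() == ErrorKind::NotFound => (), // already stopped
        other => other?,
    }
    Ok(())
}

fn main() -> Result<(), Error> {
    stop_idempotent(false)?;
    stop_idempotent(true)?; // the NotFound is swallowed
    Ok(())
}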


@@ -1,854 +0,0 @@
use std::collections::{BTreeMap, BTreeSet};
use std::net::Ipv4Addr;
use std::sync::Arc;
use std::task::Poll;
use std::time::Duration;
use color_eyre::eyre::eyre;
use container_init::ProcessGroupId;
use futures::future::BoxFuture;
use futures::{Future, FutureExt, TryFutureExt};
use helpers::UnixRpcClient;
use models::{ErrorKind, OptionExt, PackageId};
use nix::sys::signal::Signal;
use persistent_container::PersistentContainer;
use rand::SeedableRng;
use serde::de::DeserializeOwned;
use sqlx::Connection;
use start_stop::StartStop;
use tokio::sync::watch::{self, Sender};
use tokio::sync::{oneshot, Mutex};
use tracing::instrument;
use transition_state::TransitionState;
use crate::backup::target::PackageBackupInfo;
use crate::backup::PackageBackupReport;
use crate::config::action::ConfigRes;
use crate::config::spec::ValueSpecPointer;
use crate::config::ConfigureContext;
use crate::context::RpcContext;
use crate::db::model::{CurrentDependencies, CurrentDependencyInfo};
use crate::dependencies::{
add_dependent_to_current_dependents_lists, compute_dependency_config_errs,
};
use crate::disk::mount::backup::BackupMountGuard;
use crate::disk::mount::guard::TmpMountGuard;
use crate::install::cleanup::remove_from_current_dependents_lists;
use crate::net::net_controller::NetService;
use crate::net::vhost::AlpnInfo;
use crate::prelude::*;
use crate::procedure::docker::{DockerContainer, DockerProcedure, LongRunning};
use crate::procedure::{NoOutput, ProcedureName};
use crate::s9pk::manifest::Manifest;
use crate::status::MainStatus;
use crate::util::docker::get_container_ip;
use crate::util::NonDetachingJoinHandle;
use crate::volume::Volume;
use crate::Error;
pub mod health;
mod manager_container;
mod manager_map;
pub mod manager_seed;
mod persistent_container;
mod start_stop;
mod transition_state;
pub use manager_map::ManagerMap;
use self::manager_container::{get_status, ManageContainer};
use self::manager_seed::ManagerSeed;
pub const HEALTH_CHECK_COOLDOWN_SECONDS: u64 = 15;
pub const HEALTH_CHECK_GRACE_PERIOD_SECONDS: u64 = 5;
type ManagerPersistentContainer = Arc<PersistentContainer>;
type BackupGuard = Arc<Mutex<BackupMountGuard<TmpMountGuard>>>;
pub enum BackupReturn {
Error(Error),
AlreadyRunning(PackageBackupReport),
Ran {
report: PackageBackupReport,
res: Result<PackageBackupInfo, Error>,
},
}
pub struct Gid {
next_gid: (watch::Sender<u32>, watch::Receiver<u32>),
main_gid: (
watch::Sender<ProcessGroupId>,
watch::Receiver<ProcessGroupId>,
),
}
impl Default for Gid {
fn default() -> Self {
Self {
next_gid: watch::channel(1),
main_gid: watch::channel(ProcessGroupId(1)),
}
}
}
impl Gid {
pub fn new_gid(&self) -> ProcessGroupId {
let mut previous = 0;
self.next_gid.0.send_modify(|x| {
previous = *x;
*x = previous + 1;
});
ProcessGroupId(previous)
}
pub fn new_main_gid(&self) -> ProcessGroupId {
let gid = self.new_gid();
self.main_gid.0.send(gid).unwrap_or_default();
gid
}
}
/// The controller of the services. This is where we can control a service: start, stop, restart, etc.
#[derive(Clone)]
pub struct Manager {
seed: Arc<ManagerSeed>,
manage_container: Arc<manager_container::ManageContainer>,
transition: Arc<watch::Sender<TransitionState>>,
persistent_container: ManagerPersistentContainer,
pub gid: Arc<Gid>,
}
impl Manager {
pub async fn new(ctx: RpcContext, manifest: Manifest) -> Result<Self, Error> {
let seed = Arc::new(ManagerSeed {
ctx,
container_name: DockerProcedure::container_name(&manifest.id, None),
manifest,
});
let persistent_container = Arc::new(PersistentContainer::init(&seed).await?);
let manage_container = Arc::new(
manager_container::ManageContainer::new(seed.clone(), persistent_container.clone())
.await?,
);
let (transition, _) = watch::channel(Default::default());
let transition = Arc::new(transition);
Ok(Self {
seed,
manage_container,
transition,
persistent_container,
gid: Default::default(),
})
}
/// awaiting this does not wait for the start to complete
pub async fn start(&self) {
if self._is_transition_restart() {
return;
}
self._transition_abort().await;
self.manage_container.to_desired(StartStop::Start);
}
/// awaiting this does not wait for the stop to complete
pub async fn stop(&self) {
self._transition_abort().await;
self.manage_container.to_desired(StartStop::Stop);
}
/// awaiting this does not wait for the restart to complete
pub async fn restart(&self) {
if self._is_transition_restart()
&& *self.manage_container.desired_state().borrow() == StartStop::Stop
{
return;
}
if self.manage_container.desired_state().borrow().is_start() {
self._transition_replace(self._transition_restart()).await;
}
}
/// awaiting this does not wait for the restart to complete
pub async fn configure(
&self,
configure_context: ConfigureContext,
) -> Result<BTreeMap<PackageId, String>, Error> {
if self._is_transition_restart() {
self._transition_abort().await;
} else if self._is_transition_backup() {
return Err(Error::new(
eyre!("Can't configure because service is backing up"),
ErrorKind::InvalidRequest,
));
}
let context = self.seed.ctx.clone();
let id = self.seed.manifest.id.clone();
let breakages = configure(context, id, configure_context).await?;
self.restart().await;
Ok(breakages)
}
/// awaiting this does not wait for the backup to complete
pub async fn backup(&self, backup_guard: BackupGuard) -> BackupReturn {
if self._is_transition_backup() {
return BackupReturn::AlreadyRunning(PackageBackupReport {
error: Some("Can't do backup because service is already backing up".to_owned()),
});
}
let (transition_state, done) = self._transition_backup(backup_guard);
self._transition_replace(transition_state).await;
done.await
}
pub async fn exit(&self) {
self._transition_abort().await;
self.manage_container
.wait_for_desired(StartStop::Stop)
.await;
}
/// A special exit that overrides the start state; it should only be called during shutdown, where we remove the other containers
async fn shutdown(&self) -> Result<(), Error> {
self.manage_container.lock_state_forever(&self.seed).await?;
self.exit().await;
Ok(())
}
/// Used when we want to shut down the service
pub async fn signal(&self, signal: Signal) -> Result<(), Error> {
let gid = self.gid.clone();
send_signal(self, gid, signal).await
}
/// Used as a getter, but also used in procedure
pub fn rpc_client(&self) -> Arc<UnixRpcClient> {
self.persistent_container.rpc_client()
}
async fn _transition_abort(&self) {
self.transition
.send_replace(Default::default())
.abort()
.await;
}
async fn _transition_replace(&self, transition_state: TransitionState) {
self.transition.send_replace(transition_state).abort().await;
}
pub(super) fn perform_restart(&self) -> impl Future<Output = Result<(), Error>> + 'static {
let manage_container = self.manage_container.clone();
async move {
let restart_override = manage_container.set_override(MainStatus::Restarting)?;
manage_container.wait_for_desired(StartStop::Stop).await;
manage_container.wait_for_desired(StartStop::Start).await;
restart_override.drop();
Ok(())
}
}
fn _transition_restart(&self) -> TransitionState {
let transition = self.transition.clone();
let restart = self.perform_restart();
TransitionState::Restarting(
tokio::spawn(async move {
if let Err(err) = restart.await {
tracing::error!("Error restarting service: {}", err);
}
transition.send_replace(Default::default());
})
.into(),
)
}
fn perform_backup(
&self,
backup_guard: BackupGuard,
) -> impl Future<Output = Result<Result<PackageBackupInfo, Error>, Error>> {
let manage_container = self.manage_container.clone();
let seed = self.seed.clone();
async move {
let peek = seed.ctx.db.peek().await;
let state_reverter = DesiredStateReverter::new(manage_container.clone());
let override_guard =
manage_container.set_override(get_status(peek, &seed.manifest).backing_up())?;
manage_container.wait_for_desired(StartStop::Stop).await;
let backup_guard = backup_guard.lock().await;
let guard = backup_guard.mount_package_backup(&seed.manifest.id).await?;
let return_value = seed.manifest.backup.create(seed.clone()).await;
guard.unmount().await?;
drop(backup_guard);
let manifest_id = seed.manifest.id.clone();
seed.ctx
.db
.mutate(|db| {
if let Some(progress) = db
.as_server_info_mut()
.as_status_info_mut()
.as_backup_progress_mut()
.transpose_mut()
.and_then(|p| p.as_idx_mut(&manifest_id))
{
progress.as_complete_mut().ser(&true)?;
}
Ok(())
})
.await?;
state_reverter.revert().await;
override_guard.drop();
Ok::<_, Error>(return_value)
}
}
fn _transition_backup(
&self,
backup_guard: BackupGuard,
) -> (TransitionState, BoxFuture<BackupReturn>) {
let (send, done) = oneshot::channel();
let transition_state = self.transition.clone();
(
TransitionState::BackingUp(
tokio::spawn(
self.perform_backup(backup_guard)
.then(finish_up_backup_task(transition_state, send)),
)
.into(),
),
done.map_err(|err| Error::new(eyre!("Oneshot error: {err:?}"), ErrorKind::Unknown))
.map(flatten_backup_error)
.boxed(),
)
}
fn _is_transition_restart(&self) -> bool {
let transition = self.transition.borrow();
matches!(*transition, TransitionState::Restarting(_))
}
fn _is_transition_backup(&self) -> bool {
let transition = self.transition.borrow();
matches!(*transition, TransitionState::BackingUp(_))
}
pub async fn execute<O>(
&self,
name: ProcedureName,
input: Value,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error>
where
O: DeserializeOwned,
{
self.persistent_container
.execute(name, input, timeout)
.await
}
pub async fn sanboxed<O>(
&self,
name: ProcedureName,
input: Value,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error>
where
O: DeserializeOwned,
{
self.persistent_container
.sanboxed(name, input, timeout)
.await
}
pub async fn send_signal(&self, gid: Arc<Gid>, signal: Signal) -> Result<(), Error> {
self.persistent_container.send_signal(gid, signal).await
}
}
#[instrument(skip_all)]
async fn configure(
ctx: RpcContext,
id: PackageId,
mut configure_context: ConfigureContext,
) -> Result<BTreeMap<PackageId, String>, Error> {
let db = ctx.db.peek().await;
let id = &id;
let ctx = &ctx;
let overrides = &mut configure_context.overrides;
// fetch data from db
let manifest = db
.as_package_data()
.as_idx(id)
.or_not_found(id)?
.as_manifest()
.de()?;
// get current config and current spec
let ConfigRes {
config: old_config,
spec,
} = manifest
.config
.as_ref()
.or_not_found("Manifest config")?
.get(ctx, id, &manifest.version, &manifest.volumes)
.await?;
// determine new config to use
let mut config = if let Some(config) = configure_context.config.or_else(|| old_config.clone()) {
config
} else {
spec.gen(
&mut rand::rngs::StdRng::from_entropy(),
&configure_context.timeout,
)?
};
spec.validate(&manifest)?;
spec.matches(&config)?; // check that new config matches spec
// TODO Commit or not?
spec.update(ctx, &manifest, overrides, &mut config).await?; // dereference pointers in the new config
let manifest = db
.as_package_data()
.as_idx(id)
.or_not_found(id)?
.as_installed()
.or_not_found(id)?
.as_manifest()
.de()?;
let dependencies = &manifest.dependencies;
let mut current_dependencies: CurrentDependencies = CurrentDependencies(
dependencies
.0
.iter()
.filter_map(|(id, info)| {
if info.requirement.required() {
Some((id.clone(), CurrentDependencyInfo::default()))
} else {
None
}
})
.collect(),
);
for ptr in spec.pointers(&config)? {
match ptr {
ValueSpecPointer::Package(pkg_ptr) => {
if let Some(info) = current_dependencies.0.get_mut(pkg_ptr.package_id()) {
info.pointers.insert(pkg_ptr);
} else {
let id = pkg_ptr.package_id().to_owned();
let mut pointers = BTreeSet::new();
pointers.insert(pkg_ptr);
current_dependencies.0.insert(
id,
CurrentDependencyInfo {
pointers,
health_checks: BTreeSet::new(),
},
);
}
}
ValueSpecPointer::System(_) => (),
}
}
let action = manifest.config.as_ref().or_not_found(id)?;
let version = &manifest.version;
let volumes = &manifest.volumes;
if !configure_context.dry_run {
// run config action
let res = action
.set(ctx, id, version, dependencies, volumes, &config)
.await?;
// track dependencies with no pointers
for (package_id, health_checks) in res.depends_on.into_iter() {
if let Some(current_dependency) = current_dependencies.0.get_mut(&package_id) {
current_dependency.health_checks.extend(health_checks);
} else {
current_dependencies.0.insert(
package_id,
CurrentDependencyInfo {
pointers: BTreeSet::new(),
health_checks,
},
);
}
}
// track dependency health checks
current_dependencies = current_dependencies.map(|x| {
x.into_iter()
.filter(|(dep_id, _)| {
if dep_id != id && !manifest.dependencies.0.contains_key(dep_id) {
tracing::warn!("Illegal dependency specified: {}", dep_id);
false
} else {
true
}
})
.collect()
});
}
let dependency_config_errs =
compute_dependency_config_errs(ctx, &db, &manifest, &current_dependencies, overrides)
.await?;
// cache current config for dependents
configure_context
.overrides
.insert(id.clone(), config.clone());
// handle dependents
let dependents = db
.as_package_data()
.as_idx(id)
.or_not_found(id)?
.as_installed()
.or_not_found(id)?
.as_current_dependents()
.de()?;
for (dependent, _dep_info) in dependents.0.iter().filter(|(dep_id, _)| dep_id != &id) {
// check if config passes dependent check
if let Some(cfg) = db
.as_package_data()
.as_idx(dependent)
.or_not_found(dependent)?
.as_installed()
.or_not_found(dependent)?
.as_manifest()
.as_dependencies()
.as_idx(id)
.or_not_found(id)?
.as_config()
.de()?
{
let manifest = db
.as_package_data()
.as_idx(dependent)
.or_not_found(dependent)?
.as_installed()
.or_not_found(dependent)?
.as_manifest()
.de()?;
if let Err(error) = cfg
.check(
ctx,
dependent,
&manifest.version,
&manifest.volumes,
id,
&config,
)
.await?
{
configure_context.breakages.insert(dependent.clone(), error);
}
}
}
if !configure_context.dry_run {
return ctx
.db
.mutate(move |db| {
remove_from_current_dependents_lists(db, id, &current_dependencies)?;
add_dependent_to_current_dependents_lists(db, id, &current_dependencies)?;
current_dependencies.0.remove(id);
for (dep, errs) in db
.as_package_data_mut()
.as_entries_mut()?
.into_iter()
.filter_map(|(id, pde)| {
pde.as_installed_mut()
.map(|i| (id, i.as_status_mut().as_dependency_config_errors_mut()))
})
{
errs.remove(id)?;
if let Some(err) = configure_context.breakages.get(&dep) {
errs.insert(id, err)?;
}
}
let installed = db
.as_package_data_mut()
.as_idx_mut(id)
.or_not_found(id)?
.as_installed_mut()
.or_not_found(id)?;
installed
.as_current_dependencies_mut()
.ser(&current_dependencies)?;
let status = installed.as_status_mut();
status.as_configured_mut().ser(&true)?;
status
.as_dependency_config_errors_mut()
.ser(&dependency_config_errs)?;
Ok(configure_context.breakages)
})
.await; // add new
}
Ok(configure_context.breakages)
}
struct DesiredStateReverter {
manage_container: Option<Arc<ManageContainer>>,
starting_state: StartStop,
}
impl DesiredStateReverter {
fn new(manage_container: Arc<ManageContainer>) -> Self {
let starting_state = *manage_container.desired_state().borrow();
let manage_container = Some(manage_container);
Self {
starting_state,
manage_container,
}
}
async fn revert(mut self) {
if let Some(mut current_state) = self._revert() {
while *current_state.borrow() != self.starting_state {
current_state.changed().await.unwrap();
}
}
}
fn _revert(&mut self) -> Option<watch::Receiver<StartStop>> {
if let Some(manage_container) = self.manage_container.take() {
manage_container.to_desired(self.starting_state);
return Some(manage_container.desired_state());
}
None
}
}
impl Drop for DesiredStateReverter {
fn drop(&mut self) {
self._revert();
}
}
type BackupDoneSender = oneshot::Sender<Result<PackageBackupInfo, Error>>;
fn finish_up_backup_task(
transition: Arc<Sender<TransitionState>>,
send: BackupDoneSender,
) -> impl FnOnce(Result<Result<PackageBackupInfo, Error>, Error>) -> BoxFuture<'static, ()> {
move |result| {
async move {
transition.send_replace(Default::default());
send.send(match result {
Ok(a) => a,
Err(e) => Err(e),
})
.unwrap_or_default();
}
.boxed()
}
}
fn response_to_report(response: &Result<PackageBackupInfo, Error>) -> PackageBackupReport {
PackageBackupReport {
error: response.as_ref().err().map(|e| e.to_string()),
}
}
fn flatten_backup_error(input: Result<Result<PackageBackupInfo, Error>, Error>) -> BackupReturn {
match input {
Ok(a) => BackupReturn::Ran {
report: response_to_report(&a),
res: a,
},
Err(err) => BackupReturn::Error(err),
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum Status {
Starting,
Running,
Stopped,
Paused,
Shutdown,
}
#[derive(Debug, Clone, Copy)]
pub enum OnStop {
Restart,
Sleep,
Exit,
}
type RunMainResult = Result<Result<NoOutput, (i32, String)>, Error>;
#[instrument(skip_all)]
async fn run_main(seed: Arc<ManagerSeed>) -> RunMainResult {
let runtime = NonDetachingJoinHandle::from(tokio::spawn(execute_main(seed.clone())));
let health = main_health_check_daemon(seed.clone());
let res = tokio::select! {
a = runtime => a.map_err(|_| Error::new(eyre!("Manager runtime panicked!"), crate::ErrorKind::Docker)).and_then(|a| a),
_ = health => Err(Error::new(eyre!("Health check daemon exited!"), crate::ErrorKind::Unknown))
};
res
}
/// We want to start up the main procedure from the manifest, but only once we know the certificates have been generated.
/// Note for _generated_certificate: needed so that we know the certificate has been generated before we start
async fn execute_main(seed: Arc<ManagerSeed>) -> Result<Result<NoOutput, (i32, String)>, Error> {
seed.manifest
.main
.execute::<(), NoOutput>(
&seed.ctx,
&seed.manifest.id,
&seed.manifest.version,
ProcedureName::Main,
&seed.manifest.volumes,
None,
None,
)
.await
}
async fn long_running_docker(
seed: &ManagerSeed,
container: &DockerContainer,
) -> Result<(LongRunning, UnixRpcClient), Error> {
container
.long_running_execute(
&seed.ctx,
&seed.manifest.id,
&seed.manifest.version,
&seed.manifest.volumes,
)
.await
}
enum GetRunningIp {
Ip(Ipv4Addr),
Error(Error),
EarlyExit(Result<NoOutput, (i32, String)>),
}
async fn get_long_running_ip(seed: &ManagerSeed, runtime: &mut LongRunning) -> GetRunningIp {
loop {
match get_container_ip(&seed.container_name).await {
Ok(Some(ip_addr)) => return GetRunningIp::Ip(ip_addr),
Ok(None) => (),
Err(e) if e.kind == ErrorKind::NotFound => (),
Err(e) => return GetRunningIp::Error(e),
}
if let Poll::Ready(res) = futures::poll!(&mut runtime.running_output) {
match res {
Ok(_) => return GetRunningIp::EarlyExit(Ok(NoOutput)),
Err(_e) => {
return GetRunningIp::Error(Error::new(
eyre!("Manager runtime panicked!"),
crate::ErrorKind::Docker,
))
}
}
}
}
}
#[instrument(skip(seed))]
async fn add_network_for_main(
seed: &ManagerSeed,
ip: std::net::Ipv4Addr,
) -> Result<NetService, Error> {
let mut svc = seed
.ctx
.net_controller
.create_service(seed.manifest.id.clone(), ip)
.await?;
// DEPRECATED
let mut secrets = seed.ctx.secret_store.acquire().await?;
let mut tx = secrets.begin().await?;
for (id, interface) in &seed.manifest.interfaces.0 {
for (external, internal) in interface.lan_config.iter().flatten() {
svc.add_lan(
tx.as_mut(),
id.clone(),
external.0,
internal.internal,
Err(AlpnInfo::Specified(vec![])),
)
.await?;
}
for (external, internal) in interface.tor_config.iter().flat_map(|t| &t.port_mapping) {
svc.add_tor(tx.as_mut(), id.clone(), external.0, internal.0)
.await?;
}
}
for volume in seed.manifest.volumes.values() {
if let Volume::Certificate { interface_id } = volume {
svc.export_cert(tx.as_mut(), interface_id, ip.into())
.await?;
}
}
tx.commit().await?;
Ok(svc)
}
#[instrument(skip(svc))]
async fn remove_network_for_main(svc: NetService) -> Result<(), Error> {
svc.remove_all().await
}
async fn main_health_check_daemon(seed: Arc<ManagerSeed>) {
tokio::time::sleep(Duration::from_secs(HEALTH_CHECK_GRACE_PERIOD_SECONDS)).await;
loop {
if let Err(e) = health::check(&seed.ctx, &seed.manifest.id).await {
tracing::error!(
"Failed to run health check for {}: {}",
&seed.manifest.id,
e
);
tracing::debug!("{:?}", e);
}
tokio::time::sleep(Duration::from_secs(HEALTH_CHECK_COOLDOWN_SECONDS)).await;
}
}
type RuntimeOfCommand = NonDetachingJoinHandle<Result<Result<NoOutput, (i32, String)>, Error>>;
#[instrument(skip(seed, runtime))]
async fn get_running_ip(seed: &ManagerSeed, mut runtime: &mut RuntimeOfCommand) -> GetRunningIp {
loop {
match get_container_ip(&seed.container_name).await {
Ok(Some(ip_addr)) => return GetRunningIp::Ip(ip_addr),
Ok(None) => (),
Err(e) if e.kind == ErrorKind::NotFound => (),
Err(e) => return GetRunningIp::Error(e),
}
if let Poll::Ready(res) = futures::poll!(&mut runtime) {
match res {
Ok(Ok(response)) => return GetRunningIp::EarlyExit(response),
Err(e) => {
return GetRunningIp::Error(Error::new(
match e.try_into_panic() {
Ok(e) => {
eyre!(
"Manager runtime panicked: {}",
e.downcast_ref::<&'static str>().unwrap_or(&"UNKNOWN")
)
}
_ => eyre!("Manager runtime cancelled!"),
},
crate::ErrorKind::Docker,
))
}
Ok(Err(e)) => {
return GetRunningIp::Error(Error::new(
eyre!("Manager runtime returned error: {}", e),
crate::ErrorKind::Docker,
))
}
}
}
}
}
async fn send_signal(manager: &Manager, gid: Arc<Gid>, signal: Signal) -> Result<(), Error> {
manager.send_signal(gid, signal).await
}
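
run_main above races the spawned main runtime against main_health_check_daemon; the daemon loops forever, so the select! normally resolves with main's result and only errors if the daemon somehow exits. A self-contained sketch of that shape (durations and messages are illustrative):

use std::time::Duration;

// Stands in for executing the package's main procedure.
async fn fake_main() -> Result<(), String> {
    tokio::time::sleep(Duration::from_millis(50)).await;
    Ok(())
}

// Stands in for main_health_check_daemon: never returns under normal operation.
async fn health_daemon() {
    loop {
        tokio::time::sleep(Duration::from_millis(10)).await;
        // run health checks here
    }
}

#[tokio::main]
async fn main() {
    let runtime = tokio::spawn(fake_main());
    let res: Result<(), String> = tokio::select! {
        joined = runtime => joined
            .map_err(|_| "main runtime panicked".to_string())
            .and_then(|r| r),
        _ = health_daemon() => Err("health check daemon exited".to_string()),
    };
    assert_eq!(res, Ok(()));
}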


@@ -1,187 +0,0 @@
use std::sync::Arc;
use std::time::Duration;
use color_eyre::eyre::eyre;
use helpers::UnixRpcClient;
use models::ProcedureName;
use nix::sys::signal::Signal;
use serde::de::DeserializeOwned;
use tokio::sync::watch::{self, Receiver};
use tokio::sync::{oneshot, Mutex};
use tracing::instrument;
use super::manager_seed::ManagerSeed;
use super::{
add_network_for_main, get_long_running_ip, long_running_docker, remove_network_for_main,
GetRunningIp,
};
use crate::prelude::*;
use crate::procedure::docker::DockerContainer;
use crate::util::NonDetachingJoinHandle;
struct ProcedureId(u64);
// @DRB Need to have a way of starting the procedures and getting the information back
// @DRB On top of this we need to also have the procedures to have the effects and get the results back for them, maybe lock them to the running instance?
/// Persistent containers are the long-running containers that need to run all the time
/// The goal is that all services will be persistent containers, waiting to run the main system.
pub struct PersistentContainer {
_running_docker: NonDetachingJoinHandle<()>,
// TODO: Drb: Implement to spec https://github.com/Start9Labs/start-sdk/blob/master/lib/types.ts#L223
pub rpc_client: Receiver<Arc<UnixRpcClient>>,
manager_seed: Arc<ManagerSeed>,
procedures: Mutex<Vec<(ProcedureName, ProcedureId)>>,
}
impl PersistentContainer {
#[instrument(skip_all)]
pub async fn init(seed: &Arc<ManagerSeed>) -> Result<Self, Error> {
Ok(if let Some(containers) = &seed.manifest.containers {
let (running_docker, rpc_client) =
spawn_persistent_container(seed.clone(), containers.main.clone()).await?;
Self {
_running_docker: running_docker,
rpc_client,
manager_seed: seed.clone(),
procedures: Default::default(),
}
} else {
todo!("DRB No containers in manifest")
})
}
pub fn rpc_client(&self) -> Arc<UnixRpcClient> {
self.rpc_client.borrow().clone()
}
pub async fn execute<O>(
&self,
name: ProcedureName,
input: Value,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error>
where
O: DeserializeOwned,
{
match self._execute(name, input, timeout).await {
Ok(Ok(a)) => Ok(Ok(imbl_value::from_value(a).map_err(|e| {
Error::new(
eyre!("Error deserializing output: {}", e),
crate::ErrorKind::Deserialization,
)
})?)),
Ok(Err(e)) => Ok(Err(e)),
Err(e) => Err(e),
}
}
pub async fn sanboxed<O>(
&self,
name: ProcedureName,
input: Value,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error>
where
O: DeserializeOwned,
{
match self._sandboxed(name, input, timeout).await {
Ok(Ok(a)) => Ok(Ok(imbl_value::from_value(a).map_err(|e| {
Error::new(
eyre!("Error deserializing output: {}", e),
crate::ErrorKind::Deserialization,
)
})?)),
Ok(Err(e)) => Ok(Err(e)),
Err(e) => Err(e),
}
}
async fn _execute(
&self,
name: ProcedureName,
input: Value,
timeout: Option<Duration>,
) -> Result<Result<Value, (i32, String)>, Error> {
todo!(
r#"""
DRB
Call into the persistent container via rpc, start a procedure.
Procedure already has access to rpc to call back, maybe an id to track?
Should be able to cancel.
Note(Main): Only one should be running at a time
Note(Main): Has additional effect of setRunning
Note: The input (Option<I>) is not generic because we don't want to clone this fn for each type of input
Note: The output is not generic because we don't want to clone this fn for each type of output
"""#
)
}
async fn _sandboxed(
&self,
name: ProcedureName,
input: Value,
timeout: Option<Duration>,
) -> Result<Result<Value, (i32, String)>, Error> {
todo!("DRB")
}
pub async fn send_signal(&self, gid: Arc<super::Gid>, signal: Signal) -> Result<(), Error> {
todo!("DRB")
}
}
pub async fn spawn_persistent_container(
seed: Arc<ManagerSeed>,
container: DockerContainer,
) -> Result<(NonDetachingJoinHandle<()>, Receiver<Arc<UnixRpcClient>>), Error> {
let (send_inserter, inserter) = oneshot::channel();
Ok((
tokio::task::spawn(async move {
let mut inserter_send: Option<watch::Sender<Arc<UnixRpcClient>>> = None;
let mut send_inserter: Option<oneshot::Sender<Receiver<Arc<UnixRpcClient>>>> = Some(send_inserter);
loop {
if let Err(e) = async {
let (mut runtime, inserter) =
long_running_docker(&seed, &container).await?;
let ip = match get_long_running_ip(&seed, &mut runtime).await {
GetRunningIp::Ip(x) => x,
GetRunningIp::Error(e) => return Err(e),
GetRunningIp::EarlyExit(e) => {
tracing::error!("Early Exit");
tracing::debug!("{:?}", e);
return Ok(());
}
};
let svc = add_network_for_main(&seed, ip).await?;
if let Some(inserter_send) = inserter_send.as_mut() {
let _ = inserter_send.send(Arc::new(inserter));
} else {
let (s, r) = watch::channel(Arc::new(inserter));
inserter_send = Some(s);
if let Some(send_inserter) = send_inserter.take() {
let _ = send_inserter.send(r);
}
}
let res = tokio::select! {
a = runtime.running_output => a.map_err(|_| Error::new(eyre!("Manager runtime panicked!"), crate::ErrorKind::Docker)).map(|_| ()),
};
remove_network_for_main(svc).await?;
res
}.await {
tracing::error!("Error in persistent container: {}", e);
tracing::debug!("{:?}", e);
} else {
break;
}
tokio::time::sleep(Duration::from_millis(200)).await;
}
})
.into(),
inserter.await.map_err(|_| Error::new(eyre!("Container handle dropped before inserter sent"), crate::ErrorKind::Unknown))?,
))
}
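
spawn_persistent_container above hands the first UnixRpcClient back through a oneshot and then publishes replacements over a watch channel on every container restart, so callers always observe the most recent client. A reduced sketch of that oneshot-then-watch handoff, with plain integers standing in for the client:

use tokio::sync::{oneshot, watch};

#[tokio::main]
async fn main() {
    let (send_first, first) = oneshot::channel::<watch::Receiver<u32>>();

    tokio::spawn(async move {
        let mut tx: Option<watch::Sender<u32>> = None;
        let mut send_first = Some(send_first);
        // Each iteration stands in for one (re)start of the persistent container.
        for client in 1..=3u32 {
            if let Some(tx) = tx.as_mut() {
                // Later restarts: publish the replacement to existing subscribers.
                let _ = tx.send(client);
            } else {
                // First start: create the watch channel and hand the receiver back once.
                let (s, r) = watch::channel(client);
                tx = Some(s);
                if let Some(send_first) = send_first.take() {
                    let _ = send_first.send(r);
                }
            }
        }
    });

    let rx = first.await.expect("task dropped before publishing a client");
    // The receiver always yields the most recently published "client".
    assert!((1..=3).contains(&*rx.borrow()));
}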


@@ -1,35 +0,0 @@
use helpers::NonDetachingJoinHandle;
/// Used only in the manager/mod and keeps track of the manager's state during
/// transitional states
pub(super) enum TransitionState {
BackingUp(NonDetachingJoinHandle<()>),
Restarting(NonDetachingJoinHandle<()>),
None,
}
impl TransitionState {
pub(super) fn take(&mut self) -> Self {
std::mem::take(self)
}
pub(super) fn into_join_handle(self) -> Option<NonDetachingJoinHandle<()>> {
Some(match self {
TransitionState::BackingUp(a) => a,
TransitionState::Restarting(a) => a,
TransitionState::None => return None,
})
}
pub(super) async fn abort(&mut self) {
if let Some(s) = self.take().into_join_handle() {
if s.wait_for_abort().await.is_ok() {
tracing::trace!("transition completed before abort");
}
}
}
}
impl Default for TransitionState {
fn default() -> Self {
TransitionState::None
}
}
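
abort() above swaps TransitionState::None into place via mem::take and then cancels whatever transition task was running. A compact sketch of the same take-and-abort idea over a bare tokio JoinHandle (the crate uses its NonDetachingJoinHandle wrapper; names here are illustrative):

use tokio::task::JoinHandle;

#[derive(Default)]
enum Transition {
    BackingUp(JoinHandle<()>),
    Restarting(JoinHandle<()>),
    #[default]
    None,
}

impl Transition {
    fn into_join_handle(self) -> Option<JoinHandle<()>> {
        match self {
            Transition::BackingUp(h) | Transition::Restarting(h) => Some(h),
            Transition::None => None,
        }
    }

    async fn abort(&mut self) {
        // Leave None in place, then cancel whatever was running.
        if let Some(handle) = std::mem::take(self).into_join_handle() {
            handle.abort();
            let _ = handle.await; // Err(JoinError) if it was actually cancelled
        }
    }
}

#[tokio::main]
async fn main() {
    let mut t = Transition::Restarting(tokio::spawn(async {
        tokio::time::sleep(std::time::Duration::from_secs(60)).await;
    }));
    t.abort().await;
    assert!(matches!(t, Transition::None));
}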


@@ -2,32 +2,34 @@ use std::borrow::Borrow;
use std::sync::Arc;
use std::time::{Duration, Instant};
use axum::extract::Request;
use axum::response::Response;
use basic_cookies::Cookie;
use color_eyre::eyre::eyre;
use digest::Digest;
use futures::future::BoxFuture;
use futures::FutureExt;
use http::StatusCode;
use rpc_toolkit::command_helpers::prelude::RequestParts;
use rpc_toolkit::hyper::header::COOKIE;
use rpc_toolkit::hyper::http::Error as HttpError;
use rpc_toolkit::hyper::{Body, Request, Response};
use rpc_toolkit::rpc_server_helpers::{
noop4, to_response, DynMiddleware, DynMiddlewareStage2, DynMiddlewareStage3,
};
use rpc_toolkit::yajrc::RpcMethod;
use rpc_toolkit::Metadata;
use helpers::const_true;
use http::header::COOKIE;
use http::HeaderValue;
use imbl_value::InternedString;
use rpc_toolkit::yajrc::INTERNAL_ERROR;
use rpc_toolkit::{Middleware, RpcRequest, RpcResponse};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tokio::sync::Mutex;
use crate::context::RpcContext;
use crate::{Error, ResultExt};
use crate::prelude::*;
pub const LOCAL_AUTH_COOKIE_PATH: &str = "/run/embassy/rpc.authcookie";
#[derive(Deserialize, Serialize)]
#[serde(rename_all = "kebab-case")]
pub struct LoginRes {
pub session: InternedString,
}
pub trait AsLogoutSessionId {
fn as_logout_session_id(self) -> String;
fn as_logout_session_id(self) -> InternedString;
}
/// Will need to know when we have logged out from a route
@@ -43,13 +45,14 @@ impl HasLoggedOutSessions {
let mut sqlx_conn = ctx.secret_store.acquire().await?;
for session in logged_out_sessions {
let session = session.as_logout_session_id();
let session = &*session;
sqlx::query!(
"UPDATE session SET logged_out = CURRENT_TIMESTAMP WHERE id = $1",
session
)
.execute(sqlx_conn.as_mut())
.await?;
for socket in open_authed_websockets.remove(&session).unwrap_or_default() {
for socket in open_authed_websockets.remove(session).unwrap_or_default() {
let _ = socket.send(());
}
}
@@ -58,15 +61,21 @@ impl HasLoggedOutSessions {
}
/// Used when we need to know that we have logged in with a valid user
#[derive(Clone, Copy)]
pub struct HasValidSession(());
#[derive(Clone)]
pub struct HasValidSession(SessionType);
#[derive(Clone)]
enum SessionType {
Local,
Session(HashSessionToken),
}
impl HasValidSession {
pub async fn from_request_parts(
request_parts: &RequestParts,
pub async fn from_header(
header: Option<&HeaderValue>,
ctx: &RpcContext,
) -> Result<Self, Error> {
if let Some(cookie_header) = request_parts.headers.get(COOKIE) {
if let Some(cookie_header) = header {
let cookies = Cookie::parse(
cookie_header
.to_str()
@@ -79,7 +88,7 @@ impl HasValidSession {
}
}
if let Some(cookie) = cookies.iter().find(|c| c.get_name() == "session") {
if let Ok(s) = Self::from_session(&HashSessionToken::from_cookie(cookie), ctx).await
if let Ok(s) = Self::from_session(HashSessionToken::from_cookie(cookie), ctx).await
{
return Ok(s);
}
@@ -91,8 +100,11 @@ impl HasValidSession {
))
}
pub async fn from_session(session: &HashSessionToken, ctx: &RpcContext) -> Result<Self, Error> {
let session_hash = session.hashed();
pub async fn from_session(
session_token: HashSessionToken,
ctx: &RpcContext,
) -> Result<Self, Error> {
let session_hash = session_token.hashed();
let session = sqlx::query!("UPDATE session SET last_active = CURRENT_TIMESTAMP WHERE id = $1 AND logged_out IS NULL OR logged_out > CURRENT_TIMESTAMP", session_hash)
.execute(ctx.secret_store.acquire().await?.as_mut())
.await?;
@@ -102,13 +114,13 @@ impl HasValidSession {
crate::ErrorKind::Authorization,
));
}
Ok(Self(()))
Ok(Self(SessionType::Session(session_token)))
}
pub async fn from_local(local: &Cookie<'_>) -> Result<Self, Error> {
let token = tokio::fs::read_to_string(LOCAL_AUTH_COOKIE_PATH).await?;
if local.get_value() == &*token {
Ok(Self(()))
Ok(Self(SessionType::Local))
} else {
Err(Error::new(
eyre!("UNAUTHORIZED"),
@@ -122,27 +134,31 @@ impl HasValidSession {
/// Or when we are using an internal, valid authenticated service.
#[derive(Debug, Clone)]
pub struct HashSessionToken {
hashed: String,
token: String,
hashed: InternedString,
token: InternedString,
}
impl HashSessionToken {
pub fn new() -> Self {
let token = base32::encode(
base32::Alphabet::RFC4648 { padding: false },
&rand::random::<[u8; 16]>(),
)
.to_lowercase();
let hashed = Self::hash(&token);
Self { hashed, token }
Self::from_token(InternedString::intern(
base32::encode(
base32::Alphabet::RFC4648 { padding: false },
&rand::random::<[u8; 16]>(),
)
.to_lowercase(),
))
}
pub fn from_cookie(cookie: &Cookie) -> Self {
let token = cookie.get_value().to_owned();
let hashed = Self::hash(&token);
pub fn from_token(token: InternedString) -> Self {
let hashed = Self::hash(&*token);
Self { hashed, token }
}
pub fn from_request_parts(request_parts: &RequestParts) -> Result<Self, Error> {
if let Some(cookie_header) = request_parts.headers.get(COOKIE) {
pub fn from_cookie(cookie: &Cookie) -> Self {
Self::from_token(InternedString::intern(cookie.get_value()))
}
pub fn from_header(header: Option<&HeaderValue>) -> Result<Self, Error> {
if let Some(cookie_header) = header {
let cookies = Cookie::parse(
cookie_header
.to_str()
@@ -159,33 +175,30 @@ impl HashSessionToken {
))
}
pub fn header_value(&self) -> Result<http::HeaderValue, Error> {
http::HeaderValue::from_str(&format!(
"session={}; Path=/; SameSite=Lax; Expires=Fri, 31 Dec 9999 23:59:59 GMT;",
self.token
))
.with_kind(crate::ErrorKind::Unknown)
pub fn to_login_res(&self) -> LoginRes {
LoginRes {
session: self.token.clone(),
}
}
pub fn hashed(&self) -> &str {
self.hashed.as_str()
&*self.hashed
}
pub fn as_hash(self) -> String {
self.hashed
}
fn hash(token: &str) -> String {
fn hash(token: &str) -> InternedString {
let mut hasher = Sha256::new();
hasher.update(token.as_bytes());
base32::encode(
base32::Alphabet::RFC4648 { padding: false },
hasher.finalize().as_slice(),
InternedString::intern(
base32::encode(
base32::Alphabet::RFC4648 { padding: false },
hasher.finalize().as_slice(),
)
.to_lowercase(),
)
.to_lowercase()
}
}
impl AsLogoutSessionId for HashSessionToken {
fn as_logout_session_id(self) -> String {
fn as_logout_session_id(self) -> InternedString {
self.hashed
}
}
@@ -205,80 +218,120 @@ impl Ord for HashSessionToken {
self.hashed.cmp(&other.hashed)
}
}
impl Borrow<String> for HashSessionToken {
fn borrow(&self) -> &String {
&self.hashed
impl Borrow<str> for HashSessionToken {
fn borrow(&self) -> &str {
&*self.hashed
}
}
pub fn auth<M: Metadata>(ctx: RpcContext) -> DynMiddleware<M> {
let rate_limiter = Arc::new(Mutex::new((0_usize, Instant::now())));
Box::new(
move |req: &mut Request<Body>,
metadata: M|
-> BoxFuture<Result<Result<DynMiddlewareStage2, Response<Body>>, HttpError>> {
let ctx = ctx.clone();
let rate_limiter = rate_limiter.clone();
async move {
let mut header_stub = Request::new(Body::empty());
*header_stub.headers_mut() = req.headers().clone();
let m2: DynMiddlewareStage2 = Box::new(move |req, rpc_req| {
async move {
if let Err(e) = HasValidSession::from_request_parts(req, &ctx).await {
if metadata
.get(rpc_req.method.as_str(), "authenticated")
.unwrap_or(true)
{
let (res_parts, _) = Response::new(()).into_parts();
return Ok(Err(to_response(
&req.headers,
res_parts,
Err(e.into()),
|_| StatusCode::OK,
)?));
} else if rpc_req.method.as_str() == "auth.login" {
let guard = rate_limiter.lock().await;
if guard.1.elapsed() < Duration::from_secs(20) {
if guard.0 >= 3 {
let (res_parts, _) = Response::new(()).into_parts();
return Ok(Err(to_response(
&req.headers,
res_parts,
Err(Error::new(
eyre!(
"Please limit login attempts to 3 per 20 seconds."
),
crate::ErrorKind::RateLimited,
)
.into()),
|_| StatusCode::OK,
)?));
}
}
}
}
let m3: DynMiddlewareStage3 = Box::new(move |_, res| {
async move {
let mut guard = rate_limiter.lock().await;
if guard.1.elapsed() < Duration::from_secs(20) {
if res.is_err() {
guard.0 += 1;
}
} else {
guard.0 = 0;
}
guard.1 = Instant::now();
Ok(Ok(noop4()))
}
.boxed()
});
Ok(Ok(m3))
}
.boxed()
});
Ok(Ok(m2))
}
.boxed()
},
)
#[derive(Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct Metadata {
#[serde(default = "const_true")]
authenticated: bool,
#[serde(default)]
login: bool,
#[serde(default)]
get_session: bool,
}
#[derive(Clone)]
pub struct Auth {
rate_limiter: Arc<Mutex<(usize, Instant)>>,
cookie: Option<HeaderValue>,
is_login: bool,
set_cookie: Option<HeaderValue>,
}
impl Auth {
pub fn new() -> Self {
Self {
rate_limiter: Arc::new(Mutex::new((0, Instant::now()))),
cookie: None,
is_login: false,
set_cookie: None,
}
}
}
#[async_trait::async_trait]
impl Middleware<RpcContext> for Auth {
type Metadata = Metadata;
async fn process_http_request(
&mut self,
_: &RpcContext,
request: &mut Request,
) -> Result<(), Response> {
self.cookie = request.headers_mut().get(COOKIE).cloned();
Ok(())
}
async fn process_rpc_request(
&mut self,
context: &RpcContext,
metadata: Self::Metadata,
request: &mut RpcRequest,
) -> Result<(), RpcResponse> {
if metadata.login {
self.is_login = true;
let guard = self.rate_limiter.lock().await;
if guard.1.elapsed() < Duration::from_secs(20) && guard.0 >= 3 {
return Err(RpcResponse {
id: request.id.take(),
result: Err(Error::new(
eyre!("Please limit login attempts to 3 per 20 seconds."),
crate::ErrorKind::RateLimited,
)
.into()),
});
}
} else if metadata.authenticated {
match HasValidSession::from_header(self.cookie.as_ref(), &context).await {
Err(e) => {
return Err(RpcResponse {
id: request.id.take(),
result: Err(e.into()),
})
}
Ok(HasValidSession(SessionType::Session(s))) if metadata.get_session => {
request.params["session"] = Value::String(Arc::new(s.hashed().into()));
// TODO: will this panic?
}
_ => (),
}
}
Ok(())
}
async fn process_rpc_response(&mut self, _: &RpcContext, response: &mut RpcResponse) {
if self.is_login {
let mut guard = self.rate_limiter.lock().await;
if guard.1.elapsed() < Duration::from_secs(20) {
if response.result.is_err() {
guard.0 += 1;
}
} else {
guard.0 = 0;
}
guard.1 = Instant::now();
if response.result.is_ok() {
let res = std::mem::replace(&mut response.result, Err(INTERNAL_ERROR));
response.result = async {
let res = res?;
let login_res = from_value::<LoginRes>(res.clone())?;
self.set_cookie = Some(
HeaderValue::from_str(&format!(
"session={}; Path=/; SameSite=Lax; Expires=Fri, 31 Dec 9999 23:59:59 GMT;",
login_res.session
))
.with_kind(crate::ErrorKind::Network)?,
);
Ok(res)
}
.await;
}
}
}
async fn process_http_response(&mut self, _: &RpcContext, response: &mut Response) {
if let Some(set_cookie) = self.set_cookie.take() {
response.headers_mut().insert("set-cookie", set_cookie);
}
}
}
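
The Auth middleware's rate limiter above is just a (failed_attempts, window_start) pair: a login is rejected once three failures have accumulated inside a 20-second window, and the counter resets when the window ages out. The same policy in isolation, outside the middleware (constants and names are illustrative):

use std::time::{Duration, Instant};

const WINDOW: Duration = Duration::from_secs(20);
const MAX_FAILURES: usize = 3;

struct LoginRateLimiter {
    failures: usize,
    window_start: Instant,
}

impl LoginRateLimiter {
    fn new() -> Self {
        Self { failures: 0, window_start: Instant::now() }
    }

    // Called before processing a login request.
    fn check(&self) -> Result<(), &'static str> {
        if self.window_start.elapsed() < WINDOW && self.failures >= MAX_FAILURES {
            Err("Please limit login attempts to 3 per 20 seconds.")
        } else {
            Ok(())
        }
    }

    // Called after the login attempt resolves, mirroring process_rpc_response.
    fn record(&mut self, succeeded: bool) {
        if self.window_start.elapsed() < WINDOW {
            if !succeeded {
                self.failures += 1;
            }
        } else {
            self.failures = 0;
        }
        self.window_start = Instant::now();
    }
}

fn main() {
    let mut limiter = LoginRateLimiter::new();
    for _ in 0..3 {
        assert!(limiter.check().is_ok());
        limiter.record(false); // three failed attempts...
    }
    assert!(limiter.check().is_err()); // ...and the fourth is rejected
}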


@@ -1,61 +1,63 @@
use futures::FutureExt;
use http::HeaderValue;
use hyper::header::HeaderMap;
use rpc_toolkit::hyper::http::Error as HttpError;
use rpc_toolkit::hyper::{Body, Method, Request, Response};
use rpc_toolkit::rpc_server_helpers::{
DynMiddlewareStage2, DynMiddlewareStage3, DynMiddlewareStage4,
};
use rpc_toolkit::Metadata;
use axum::extract::Request;
use axum::response::Response;
use http::{HeaderMap, HeaderValue};
use rpc_toolkit::{Empty, Middleware};
fn get_cors_headers(req: &Request<Body>) -> HeaderMap {
let mut res = HeaderMap::new();
if let Some(origin) = req.headers().get("Origin") {
res.insert("Access-Control-Allow-Origin", origin.clone());
}
if let Some(method) = req.headers().get("Access-Control-Request-Method") {
res.insert("Access-Control-Allow-Methods", method.clone());
}
if let Some(headers) = req.headers().get("Access-Control-Request-Headers") {
res.insert("Access-Control-Allow-Headers", headers.clone());
}
res.insert(
"Access-Control-Allow-Credentials",
HeaderValue::from_static("true"),
);
res
#[derive(Clone)]
pub struct Cors {
headers: HeaderMap,
}
pub async fn cors<M: Metadata>(
req: &mut Request<Body>,
_metadata: M,
) -> Result<Result<DynMiddlewareStage2, Response<Body>>, HttpError> {
let headers = get_cors_headers(req);
if req.method() == Method::OPTIONS {
Ok(Err({
let mut res = Response::new(Body::empty());
res.headers_mut().extend(headers.into_iter());
res
}))
} else {
Ok(Ok(Box::new(|_, _| {
async move {
let res: DynMiddlewareStage3 = Box::new(|_, _| {
async move {
let res: DynMiddlewareStage4 = Box::new(|res| {
async move {
res.headers_mut().extend(headers.into_iter());
Ok::<_, HttpError>(())
}
.boxed()
});
Ok::<_, HttpError>(Ok(res))
}
.boxed()
});
Ok::<_, HttpError>(Ok(res))
}
.boxed()
})))
impl Cors {
pub fn new() -> Self {
let mut headers = HeaderMap::new();
headers.insert(
"Access-Control-Allow-Credentials",
HeaderValue::from_static("true"),
);
Self { headers }
}
fn get_cors_headers(&mut self, req: &Request) {
if let Some(origin) = req.headers().get("Origin") {
self.headers
.insert("Access-Control-Allow-Origin", origin.clone());
} else {
self.headers
.insert("Access-Control-Allow-Origin", HeaderValue::from_static("*"));
}
if let Some(method) = req.headers().get("Access-Control-Request-Method") {
self.headers
.insert("Access-Control-Allow-Methods", method.clone());
} else {
self.headers.insert(
"Access-Control-Allow-Methods",
HeaderValue::from_static("*"),
);
}
if let Some(headers) = req.headers().get("Access-Control-Request-Headers") {
self.headers
.insert("Access-Control-Allow-Headers", headers.clone());
} else {
self.headers.insert(
"Access-Control-Allow-Headers",
HeaderValue::from_static("*"),
);
}
}
}
#[async_trait::async_trait]
impl<Context: Send + 'static> Middleware<Context> for Cors {
type Metadata = Empty;
async fn process_http_request(
&mut self,
_: &Context,
request: &mut Request,
) -> Result<(), Response> {
self.get_cors_headers(request);
Ok(())
}
async fn process_http_response(&mut self, _: &Context, response: &mut Response) {
response
.headers_mut()
.extend(std::mem::take(&mut self.headers))
}
}
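
The rewritten Cors middleware reflects the request's Origin and Access-Control-Request-* headers back as the corresponding Allow-* headers, falling back to "*", and always allows credentials. A minimal sketch of that reflection using the same http-crate types, without the middleware plumbing:

use http::{HeaderMap, HeaderValue};

fn cors_headers(req: &HeaderMap) -> HeaderMap {
    let mut out = HeaderMap::new();
    out.insert(
        "Access-Control-Allow-Credentials",
        HeaderValue::from_static("true"),
    );
    let reflect = [
        ("Origin", "Access-Control-Allow-Origin"),
        ("Access-Control-Request-Method", "Access-Control-Allow-Methods"),
        ("Access-Control-Request-Headers", "Access-Control-Allow-Headers"),
    ];
    for &(from, to) in reflect.iter() {
        match req.get(from) {
            Some(v) => out.insert(to, v.clone()),
            None => out.insert(to, HeaderValue::from_static("*")),
        };
    }
    out
}

fn main() {
    let mut req = HeaderMap::new();
    req.insert("Origin", HeaderValue::from_static("http://localhost:8080"));
    let out = cors_headers(&req);
    assert_eq!(out["Access-Control-Allow-Origin"], "http://localhost:8080");
    assert_eq!(out["Access-Control-Allow-Methods"], "*");
}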


@@ -1,50 +1,54 @@
use futures::future::BoxFuture;
use futures::FutureExt;
use axum::response::Response;
use http::header::InvalidHeaderValue;
use http::HeaderValue;
use rpc_toolkit::hyper::http::Error as HttpError;
use rpc_toolkit::hyper::{Body, Request, Response};
use rpc_toolkit::rpc_server_helpers::{
noop4, DynMiddleware, DynMiddlewareStage2, DynMiddlewareStage3,
};
use rpc_toolkit::yajrc::RpcMethod;
use rpc_toolkit::Metadata;
use rpc_toolkit::{Middleware, RpcRequest, RpcResponse};
use serde::Deserialize;
use crate::context::RpcContext;
pub fn db<M: Metadata>(ctx: RpcContext) -> DynMiddleware<M> {
Box::new(
move |_: &mut Request<Body>,
metadata: M|
-> BoxFuture<Result<Result<DynMiddlewareStage2, Response<Body>>, HttpError>> {
let ctx = ctx.clone();
async move {
let m2: DynMiddlewareStage2 = Box::new(move |_req, rpc_req| {
async move {
let sync_db = metadata
.get(rpc_req.method.as_str(), "sync_db")
.unwrap_or(false);
let m3: DynMiddlewareStage3 = Box::new(move |res, _| {
async move {
if sync_db {
res.headers.append(
"X-Patch-Sequence",
HeaderValue::from_str(
&ctx.db.sequence().await.to_string(),
)?,
);
}
Ok(Ok(noop4()))
}
.boxed()
});
Ok(Ok(m3))
}
.boxed()
});
Ok(Ok(m2))
}
.boxed()
},
)
#[derive(Deserialize)]
#[serde(rename_all = "kebab-case")]
pub struct Metadata {
#[serde(default)]
sync_db: bool,
}
#[derive(Clone)]
pub struct SyncDb {
sync_db: bool,
}
impl SyncDb {
pub fn new() -> Self {
SyncDb { sync_db: false }
}
}
#[async_trait::async_trait]
impl Middleware<RpcContext> for SyncDb {
type Metadata = Metadata;
async fn process_rpc_request(
&mut self,
_: &RpcContext,
metadata: Self::Metadata,
_: &mut RpcRequest,
) -> Result<(), RpcResponse> {
self.sync_db = metadata.sync_db;
Ok(())
}
async fn process_http_response(&mut self, context: &RpcContext, response: &mut Response) {
if let Err(e) = async {
if self.sync_db {
response.headers_mut().append(
"X-Patch-Sequence",
HeaderValue::from_str(&context.db.sequence().await.to_string())?,
);
}
Ok::<_, InvalidHeaderValue>(())
}
.await
{
tracing::error!("error writing X-Patch-Sequence header: {e}");
tracing::debug!("{e:?}");
}
}
}


@@ -1,39 +1,43 @@
use futures::FutureExt;
use rpc_toolkit::hyper::http::Error as HttpError;
use rpc_toolkit::hyper::{Body, Request, Response};
use rpc_toolkit::rpc_server_helpers::{noop4, DynMiddlewareStage2, DynMiddlewareStage3};
use rpc_toolkit::yajrc::RpcMethod;
use rpc_toolkit::Metadata;
use rpc_toolkit::{Empty, Middleware, RpcRequest, RpcResponse};
use crate::Error;
use crate::context::DiagnosticContext;
use crate::prelude::*;
pub async fn diagnostic<M: Metadata>(
_req: &mut Request<Body>,
_metadata: M,
) -> Result<Result<DynMiddlewareStage2, Response<Body>>, HttpError> {
Ok(Ok(Box::new(|_, rpc_req| {
let method = rpc_req.method.as_str().to_owned();
async move {
let res: DynMiddlewareStage3 = Box::new(|_, rpc_res| {
async move {
if let Err(e) = rpc_res {
if e.code == -32601 {
*e = Error::new(
color_eyre::eyre::eyre!(
"{} is not available on the Diagnostic API",
method
),
crate::ErrorKind::DiagnosticMode,
)
.into();
}
}
Ok(Ok(noop4()))
}
.boxed()
});
Ok::<_, HttpError>(Ok(res))
}
.boxed()
})))
#[derive(Clone)]
pub struct DiagnosticMode {
method: Option<String>,
}
impl DiagnosticMode {
pub fn new() -> Self {
Self { method: None }
}
}
#[async_trait::async_trait]
impl Middleware<DiagnosticContext> for DiagnosticMode {
type Metadata = Empty;
async fn process_rpc_request(
&mut self,
_: &DiagnosticContext,
_: Self::Metadata,
request: &mut RpcRequest,
) -> Result<(), RpcResponse> {
self.method = Some(request.method.as_str().to_owned());
Ok(())
}
async fn process_rpc_response(&mut self, _: &DiagnosticContext, response: &mut RpcResponse) {
if let Err(e) = &mut response.result {
if e.code == -32601 {
*e = Error::new(
eyre!(
"{} is not available on the Diagnostic API",
self.method.as_ref().map(|s| s.as_str()).unwrap_or_default()
),
crate::ErrorKind::DiagnosticMode,
)
.into();
}
}
}
}


@@ -1,115 +0,0 @@
use aes::cipher::{CipherKey, NewCipher, Nonce, StreamCipher};
use aes::Aes256Ctr;
use hmac::Hmac;
use josekit::jwk::Jwk;
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tracing::instrument;
pub fn pbkdf2(password: impl AsRef<[u8]>, salt: impl AsRef<[u8]>) -> CipherKey<Aes256Ctr> {
let mut aeskey = CipherKey::<Aes256Ctr>::default();
pbkdf2::pbkdf2::<Hmac<Sha256>>(
password.as_ref(),
salt.as_ref(),
1000,
aeskey.as_mut_slice(),
)
.unwrap();
aeskey
}
pub fn encrypt_slice(input: impl AsRef<[u8]>, password: impl AsRef<[u8]>) -> Vec<u8> {
let prefix: [u8; 32] = rand::random();
let aeskey = pbkdf2(password.as_ref(), &prefix[16..]);
let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
let mut aes = Aes256Ctr::new(&aeskey, ctr);
let mut res = Vec::with_capacity(32 + input.as_ref().len());
res.extend_from_slice(&prefix[..]);
res.extend_from_slice(input.as_ref());
aes.apply_keystream(&mut res[32..]);
res
}
pub fn decrypt_slice(input: impl AsRef<[u8]>, password: impl AsRef<[u8]>) -> Vec<u8> {
if input.as_ref().len() < 32 {
return Vec::new();
}
let (prefix, rest) = input.as_ref().split_at(32);
let aeskey = pbkdf2(password.as_ref(), &prefix[16..]);
let ctr = Nonce::<Aes256Ctr>::from_slice(&prefix[..16]);
let mut aes = Aes256Ctr::new(&aeskey, ctr);
let mut res = rest.to_vec();
aes.apply_keystream(&mut res);
res
}
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct EncryptedWire {
encrypted: serde_json::Value,
}
impl EncryptedWire {
#[instrument(skip_all)]
pub fn decrypt(self, current_secret: impl AsRef<Jwk>) -> Option<String> {
let current_secret = current_secret.as_ref();
let decrypter = match josekit::jwe::alg::ecdh_es::EcdhEsJweAlgorithm::EcdhEs
.decrypter_from_jwk(current_secret)
{
Ok(a) => a,
Err(e) => {
tracing::warn!("Could not set up jwk");
tracing::debug!("{:?}", e);
return None;
}
};
let encrypted = match serde_json::to_string(&self.encrypted) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Could not serialize");
tracing::debug!("{:?}", e);
return None;
}
};
let (decoded, _) = match josekit::jwe::deserialize_json(&encrypted, &decrypter) {
Ok(a) => a,
Err(e) => {
tracing::warn!("Could not decrypt");
tracing::debug!("{:?}", e);
return None;
}
};
match String::from_utf8(decoded) {
Ok(a) => Some(a),
Err(e) => {
tracing::warn!("Could not decrypt into utf8");
tracing::debug!("{:?}", e);
return None;
}
}
}
}
/// We created this test by first generating the private key, then restoring from that private key, for reproducibility.
/// After that the frontend encoded a password, and we are testing that the output we got (hand copied)
/// is the shape we want.
#[test]
fn test_gen_awk() {
let private_key: Jwk = serde_json::from_str(
r#"{
"kty": "EC",
"crv": "P-256",
"d": "3P-MxbUJtEhdGGpBCRFXkUneGgdyz_DGZWfIAGSCHOU",
"x": "yHTDYSfjU809fkSv9MmN4wuojf5c3cnD7ZDN13n-jz4",
"y": "8Mpkn744A5KDag0DmX2YivB63srjbugYZzWc3JOpQXI"
}"#,
)
.unwrap();
let encrypted: EncryptedWire = serde_json::from_str(r#"{
"encrypted": { "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2IiwiYWxnIjoiRUNESC1FUyIsImtpZCI6ImgtZnNXUVh2Tm95dmJEazM5dUNsQ0NUdWc5N3MyZnJockJnWUVBUWVtclUiLCJlcGsiOnsia3R5IjoiRUMiLCJjcnYiOiJQLTI1NiIsIngiOiJmRkF0LXNWYWU2aGNkdWZJeUlmVVdUd3ZvWExaTkdKRHZIWVhIckxwOXNNIiwieSI6IjFvVFN6b00teHlFZC1SLUlBaUFHdXgzS1dJZmNYZHRMQ0JHLUh6MVkzY2sifX0", "iv": "NbwvfvWOdLpZfYRIZUrkcw", "ciphertext": "Zc5Br5kYOlhPkIjQKOLMJw", "tag": "EPoch52lDuCsbUUulzZGfg" }
}"#).unwrap();
assert_eq!(
"testing12345",
&encrypted.decrypt(std::sync::Arc::new(private_key)).unwrap()
);
}
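
The removed encrypt.rs derives an AES-256-CTR key with PBKDF2 and prefixes each message with 32 random bytes: the first 16 are the CTR nonce and the last 16 are the PBKDF2 salt, so decryption needs only the password. A hedged round-trip check, assuming encrypt_slice and decrypt_slice from the file above are in scope (this test is not part of the diff):

#[test]
fn encrypt_then_decrypt_roundtrip() {
    let plaintext = b"correct horse battery staple";
    let password = "hunter2";

    let wire = encrypt_slice(plaintext, password);
    // layout: [16-byte CTR nonce][16-byte PBKDF2 salt][ciphertext]
    assert_eq!(wire.len(), 32 + plaintext.len());

    assert_eq!(decrypt_slice(&wire, password), plaintext.to_vec());
    // a wrong password still returns bytes, just not (except with negligible probability) the plaintext
    assert_ne!(decrypt_slice(&wire, "wrong"), plaintext.to_vec());
}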


@@ -2,4 +2,3 @@ pub mod auth;
pub mod cors;
pub mod db;
pub mod diagnostic;
pub mod encrypt;


@@ -1,141 +0,0 @@
use std::collections::BTreeSet;
use color_eyre::eyre::eyre;
use emver::VersionRange;
use futures::{Future, FutureExt};
use indexmap::IndexMap;
use models::ImageId;
use patch_db::HasModel;
use serde::{Deserialize, Serialize};
use tracing::instrument;
use crate::context::RpcContext;
use crate::prelude::*;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::Version;
use crate::volume::Volumes;
use crate::{Error, ResultExt};
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "kebab-case")]
#[model = "Model<Self>"]
pub struct Migrations {
pub from: IndexMap<VersionRange, PackageProcedure>,
pub to: IndexMap<VersionRange, PackageProcedure>,
}
impl Migrations {
#[instrument(skip_all)]
pub fn validate(
&self,
_container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
) -> Result<(), Error> {
for (version, migration) in &self.from {
migration
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| {
(
crate::ErrorKind::ValidateS9pk,
format!("Migration from {}", version),
)
})?;
}
for (version, migration) in &self.to {
migration
.validate(eos_version, volumes, image_ids, true)
.with_ctx(|_| {
(
crate::ErrorKind::ValidateS9pk,
format!("Migration to {}", version),
)
})?;
}
Ok(())
}
#[instrument(skip_all)]
pub fn from<'a>(
&'a self,
_container: &'a Option<DockerContainers>,
ctx: &'a RpcContext,
version: &'a Version,
pkg_id: &'a PackageId,
pkg_version: &'a Version,
volumes: &'a Volumes,
) -> Option<impl Future<Output = Result<MigrationRes, Error>> + 'a> {
if let Some((_, migration)) = self
.from
.iter()
.find(|(range, _)| version.satisfies(*range))
{
Some(async move {
migration
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::Migration, // Migrations cannot be executed concurrently
volumes,
Some(version),
None,
)
.map(|r| {
r.and_then(|r| {
r.map_err(|e| {
Error::new(eyre!("{}", e.1), crate::ErrorKind::MigrationFailed)
})
})
})
.await
})
} else {
None
}
}
#[instrument(skip_all)]
pub fn to<'a>(
&'a self,
ctx: &'a RpcContext,
version: &'a Version,
pkg_id: &'a PackageId,
pkg_version: &'a Version,
volumes: &'a Volumes,
) -> Option<impl Future<Output = Result<MigrationRes, Error>> + 'a> {
if let Some((_, migration)) = self.to.iter().find(|(range, _)| version.satisfies(*range)) {
Some(async move {
migration
.execute(
ctx,
pkg_id,
pkg_version,
ProcedureName::Migration,
volumes,
Some(version),
None,
)
.map(|r| {
r.and_then(|r| {
r.map_err(|e| {
Error::new(eyre!("{}", e.1), crate::ErrorKind::MigrationFailed)
})
})
})
.await
})
} else {
None
}
}
}
#[derive(Clone, Debug, Default, Deserialize, Serialize, HasModel)]
#[serde(rename_all = "kebab-case")]
#[model = "Model<Self>"]
pub struct MigrationRes {
pub configured: bool,
}


@@ -1,15 +1,16 @@
use std::collections::{BTreeMap, BTreeSet};
use std::net::IpAddr;
use clap::Parser;
use futures::TryStreamExt;
use rpc_toolkit::command;
use rpc_toolkit::{from_fn_async, HandlerExt, ParentHandler};
use serde::{Deserialize, Serialize};
use tokio::sync::RwLock;
use crate::context::RpcContext;
use crate::context::{CliContext, RpcContext};
use crate::db::model::IpInfo;
use crate::net::utils::{iface_is_physical, list_interfaces};
use crate::prelude::*;
use crate::util::display_none;
use crate::Error;
lazy_static::lazy_static! {
@@ -50,13 +51,26 @@ pub async fn init_ips() -> Result<BTreeMap<String, IpInfo>, Error> {
Ok(res)
}
#[command(subcommands(update))]
pub async fn dhcp() -> Result<(), Error> {
Ok(())
// #[command(subcommands(update))]
pub fn dhcp() -> ParentHandler {
ParentHandler::new().subcommand(
"update",
from_fn_async::<_, _, (), Error, (RpcContext, UpdateParams)>(update)
.no_display()
.with_remote_cli::<CliContext>(),
)
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct UpdateParams {
interface: String,
}
#[command(display(display_none))]
pub async fn update(#[context] ctx: RpcContext, #[arg] interface: String) -> Result<(), Error> {
pub async fn update(
ctx: RpcContext,
UpdateParams { interface }: UpdateParams,
) -> Result<(), Error> {
if iface_is_physical(&interface).await {
let ip_info = IpInfo::for_interface(&interface).await?;
ctx.db

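Note on the hunk above: the dhcp subcommand moves off rpc_toolkit's #[command(subcommands(...))] attribute macro and onto the builder-style ParentHandler API, where routing, display behavior (no_display) and CLI exposure (with_remote_cli::<CliContext>) are attached at registration time, while the handler itself keeps a plain async fn signature taking the context and a Parser-derived params struct. A minimal sketch of how such a router would presumably be nested one level up — the parent name "net" and the mount point are illustrative assumptions; only the ParentHandler::new().subcommand(..) shape is taken from the diff:

// Hedged sketch: nesting the builder-style dhcp router in a parent handler.
// The parent name "net" and this mount point are illustrative assumptions;
// only ParentHandler::new().subcommand(..) comes from the hunk above.
use rpc_toolkit::ParentHandler;

pub fn net() -> ParentHandler {
    ParentHandler::new().subcommand("dhcp", dhcp())
}

The same pattern recurs throughout this PR: each async fn keeps an ordinary Rust signature, and routing and CLI behavior are composed at registration instead of through attributes.
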
@@ -2,13 +2,13 @@ use std::collections::BTreeMap;
use indexmap::IndexSet;
pub use models::InterfaceId;
use models::PackageId;
use serde::{Deserialize, Deserializer, Serialize};
use sqlx::{Executor, Postgres};
use tracing::instrument;
use crate::db::model::{InterfaceAddressMap, InterfaceAddresses};
use crate::net::keys::Key;
use crate::s9pk::manifest::PackageId;
use crate::util::serde::Port;
use crate::{Error, ResultExt};

@@ -1,21 +1,19 @@
use std::collections::BTreeMap;
use clap::ArgMatches;
use clap::Parser;
use color_eyre::eyre::eyre;
use models::{Id, InterfaceId, PackageId};
use openssl::pkey::{PKey, Private};
use openssl::sha::Sha256;
use openssl::x509::X509;
use p256::elliptic_curve::pkcs8::EncodePrivateKey;
use rpc_toolkit::command;
use serde::{Deserialize, Serialize};
use sqlx::{Acquire, PgExecutor};
use ssh_key::private::Ed25519PrivateKey;
use torut::onion::{OnionAddressV3, TorSecretKeyV3};
use zeroize::Zeroize;
use crate::config::{configure, ConfigureContext};
use crate::config::ConfigureContext;
use crate::context::RpcContext;
use crate::control::restart;
use crate::control::{restart, ControlParams};
use crate::disk::fsck::RequiresReboot;
use crate::net::ssl::CertPair;
use crate::prelude::*;
@@ -280,17 +278,23 @@ pub fn test_keygen() {
key.openssl_key_nistp256();
}
fn display_requires_reboot(arg: RequiresReboot, _matches: &ArgMatches) {
if arg.0 {
pub fn display_requires_reboot(_: RotateKeysParams, args: RequiresReboot) {
if args.0 {
println!("Server must be restarted for changes to take effect");
}
}
#[derive(Deserialize, Serialize, Parser)]
#[serde(rename_all = "kebab-case")]
#[command(rename_all = "kebab-case")]
pub struct RotateKeysParams {
package: Option<PackageId>,
interface: Option<InterfaceId>,
}
#[command(rename = "rotate-key", display(display_requires_reboot))]
// #[command(display(display_requires_reboot))]
pub async fn rotate_key(
#[context] ctx: RpcContext,
#[arg] package: Option<PackageId>,
#[arg] interface: Option<InterfaceId>,
ctx: RpcContext,
RotateKeysParams { package, interface }: RotateKeysParams,
) -> Result<RequiresReboot, Error> {
let mut pgcon = ctx.secret_store.acquire().await?;
let mut tx = pgcon.begin().await?;
@@ -337,37 +341,39 @@ pub async fn rotate_key(
lan.ser(&new_key.tor_address().to_string())?;
}
if installed
.as_manifest()
.as_config()
.transpose_ref()
.is_some()
{
installed
.as_status_mut()
.as_configured_mut()
.replace(&false)
} else {
Ok(false)
}
// TODO
// if installed
// .as_manifest()
// .as_config()
// .transpose_ref()
// .is_some()
// {
// installed
// .as_status_mut()
// .as_configured_mut()
// .replace(&false)
// } else {
// Ok(false)
// }
Ok(false)
})
.await?;
tx.commit().await?;
if needs_config {
configure(
&ctx,
&package,
ConfigureContext {
breakages: BTreeMap::new(),
timeout: None,
config: None,
overrides: BTreeMap::new(),
dry_run: false,
},
)
.await?;
ctx.services
.get(&package)
.await
.as_ref()
.ok_or_else(|| {
Error::new(
eyre!("There is no manager running for {package}"),
ErrorKind::Unknown,
)
})?
.configure(ConfigureContext::default())
.await?;
} else {
restart(ctx, package).await?;
restart(ctx, ControlParams { id: package }).await?;
}
Ok(RequiresReboot(false))
} else {

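Note on the rotate_key hunk above: instead of invoking configure(&ctx, &package, ConfigureContext { .. }) directly, the new code looks up the running service manager on the RpcContext and reconfigures through it, and restart now takes a ControlParams struct rather than a bare package id. A condensed sketch of that delegation pattern, assuming the imports shown at the top of this file's hunk — the wrapper function is illustrative only; every call inside it is copied from the diff:

// Hedged sketch: reconfigure a package through its running service manager,
// or restart it when no reconfiguration is needed. The wrapper function is
// illustrative; the calls inside mirror the hunk above.
use color_eyre::eyre::eyre;
use models::PackageId;

use crate::config::ConfigureContext;
use crate::context::RpcContext;
use crate::control::{restart, ControlParams};
use crate::prelude::*;

async fn reconfigure_or_restart(
    ctx: RpcContext,
    package: PackageId,
    needs_config: bool,
) -> Result<(), Error> {
    if needs_config {
        ctx.services
            .get(&package)
            .await
            .as_ref()
            .ok_or_else(|| {
                Error::new(
                    eyre!("There is no manager running for {package}"),
                    ErrorKind::Unknown,
                )
            })?
            .configure(ConfigureContext::default())
            .await?;
    } else {
        restart(ctx, ControlParams { id: package }).await?;
    }
    Ok(())
}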