mirror of
https://github.com/Start9Labs/start-os.git
synced 2026-04-04 22:39:46 +00:00
Feature/lxc container runtime (#2514)
* wip: static-server errors
* wip: fix wifi
* wip: Fix the service_effects
* wip: Fix cors in the middleware
* wip(chore): Auth clean up the lint.
* wip(fix): Vhost
* wip: continue manager refactor Co-authored-by: J H <Blu-J@users.noreply.github.com>
* wip: service manager refactor
* wip: Some fixes
* wip(fix): Fix the lib.rs
* wip
* wip(fix): Logs
* wip: bins
* wip(innspect): Add in the inspect
* wip: config
* wip(fix): Diagnostic
* wip(fix): Dependencies
* wip: context
* wip(fix) Sorta auth
* wip: warnings
* wip(fix): registry/admin
* wip(fix) marketplace
* wip(fix) Some more converted and fixed with the linter and config
* wip: Working on the static server
* wip(fix)static server
* wip: Remove some asynnc
* wip: Something about the request and regular rpc
* wip: gut install Co-authored-by: J H <Blu-J@users.noreply.github.com>
* wip: Convert the static server into the new system
* wip delete file
* test
* wip(fix) vhost does not need the with safe defaults
* wip: Adding in the wifi
* wip: Fix the developer and the verify
* wip: new install flow Co-authored-by: J H <Blu-J@users.noreply.github.com>
* fix middleware
* wip
* wip: Fix the auth
* wip
* continue service refactor
* feature: Service get_config
* feat: Action
* wip: Fighting the great fight against the borrow checker
* wip: Remove an error in a file that I just need to deel with later
* chore: Add in some more lifetime stuff to the services
* wip: Install fix on lifetime
* cleanup
* wip: Deal with the borrow later
* more cleanup
* resolve borrowchecker errors
* wip(feat): add in the handler for the socket, for now
* wip(feat): Update the service_effect_handler::action
* chore: Add in the changes to make sure the from_service goes to context
* chore: Change the
* refactor service map
* fix references to service map
* fill out restore
* wip: Before I work on the store stuff
* fix backup module
* handle some warnings
* feat: add in the ui components on the rust side
* feature: Update the procedures
* chore: Update the js side of the main and a few of the others
* chore: Update the rpc listener to match the persistant container
* wip: Working on updating some things to have a better name
* wip(feat): Try and get the rpc to return the correct shape?
* lxc wip
* wip(feat): Try and get the rpc to return the correct shape?
* build for container runtime wip
* remove container-init
* fix build
* fix error
* chore: Update to work I suppose
* lxc wip
* remove docker module and feature
* download alpine squashfs automatically
* overlays effect Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* chore: Add the overlay effect
* feat: Add the mounter in the main
* chore: Convert to use the mounts, still need to work with the sandbox
* install fixes
* fix ssl
* fixes from testing
* implement tmpfile for upload
* wip
* misc fixes
* cleanup
* cleanup
* better progress reporting
* progress for sideload
* return real guid
* add devmode script
* fix lxc rootfs path
* fix percentage bar
* fix progress bar styling
* fix build for unstable
* tweaks
* label progress
* tweaks
* update progress more often
* make symlink in rpc_client
* make socket dir
* fix parent path
* add start-cli to container
* add echo and gitInfo commands
* wip: Add the init + errors
* chore: Add in the exit effect for the system
* chore: Change the type to null for failure to parse
* move sigterm timeout to stopping status
* update order
* chore: Update the return type
* remove dbg
* change the map error
* chore: Update the thing to capture id
* chore add some life changes
* chore: Update the loging
* chore: Update the package to run module
* us From for RpcError
* chore: Update to use import instead
* chore: update
* chore: Use require for the backup
* fix a default
* update the type that is wrong
* chore: Update the type of the manifest
* chore: Update to make null
* only symlink if not exists
* get rid of double result
* better debug info for ErrorCollection
* chore: Update effects
* chore: fix
* mount assets and volumes
* add exec instead of spawn
* fix mounting in image
* fix overlay mounts Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* misc fixes
* feat: Fix two
* fix: systemForEmbassy main
* chore: Fix small part of main loop
* chore: Modify the bundle
* merge
* fixMain loop"
* move tsc to makefile
* chore: Update the return types of the health check
* fix client
* chore: Convert the todo to use tsmatches
* add in the fixes for the seen and create the hack to allow demo
* chore: Update to include the systemForStartOs
* chore UPdate to the latest types from the expected outout
* fixes
* fix typo
* Don't emit if failure on tsc
* wip Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* add s9pk api
* add inspection
* add inspect manifest
* newline after display serializable
* fix squashfs in image name
* edit manifest Co-authored-by: Jade <Blu-J@users.noreply.github.com>
* wait for response on repl
* ignore sig for now
* ignore sig for now
* re-enable sig verification
* fix
* wip
* env and chroot
* add profiling logs
* set uid & gid in squashfs to 100000
* set uid of sqfs to 100000
* fix mksquashfs args
* add env to compat
* fix
* re-add docker feature flag
* fix docker output format being stupid
* here be dragons
* chore: Add in the cross compiling for something
* fix npm link
* extract logs from container on exit
* chore: Update for testing
* add log capture to drop trait
* chore: add in the modifications that I make
* chore: Update small things for no updates
* chore: Update the types of something
* chore: Make main not complain
* idmapped mounts
* idmapped volumes
* re-enable kiosk
* chore: Add in some logging for the new system
* bring in start-sdk
* remove avahi
* chore: Update the deps
* switch to musl
* chore: Update the version of prettier
* chore: Organize'
* chore: Update some of the headers back to the standard of fetch
* fix musl build
* fix idmapped mounts
* fix cross build
* use cross compiler for correct arch
* feat: Add in the faked ssl stuff for the effects
* @dr_bonez Did a solution here
* chore: Something that DrBonez
* chore: up
* wip: We have a working server!!!
* wip
* uninstall
* wip
* tes

---------

Co-authored-by: J H <dragondef@gmail.com>
Co-authored-by: J H <Blu-J@users.noreply.github.com>
Co-authored-by: J H <2364004+Blu-J@users.noreply.github.com>
66
core/startos/src/service/cli.rs
Normal file
@@ -0,0 +1,66 @@
use std::path::{Path, PathBuf};
use std::sync::Arc;

use clap::Parser;
use imbl_value::Value;
use once_cell::sync::OnceCell;
use rpc_toolkit::yajrc::RpcError;
use rpc_toolkit::{call_remote_socket, yajrc, CallRemote, Context};
use tokio::runtime::Runtime;

use crate::lxc::HOST_RPC_SERVER_SOCKET;

#[derive(Debug, Default, Parser)]
pub struct ContainerClientConfig {
    #[arg(long = "socket")]
    pub socket: Option<PathBuf>,
}

pub struct ContainerCliSeed {
    socket: PathBuf,
    runtime: OnceCell<Runtime>,
}

#[derive(Clone)]
pub struct ContainerCliContext(Arc<ContainerCliSeed>);
impl ContainerCliContext {
    pub fn init(cfg: ContainerClientConfig) -> Self {
        Self(Arc::new(ContainerCliSeed {
            socket: cfg
                .socket
                .unwrap_or_else(|| Path::new("/").join(HOST_RPC_SERVER_SOCKET)),
            runtime: OnceCell::new(),
        }))
    }
}
impl Context for ContainerCliContext {
    fn runtime(&self) -> tokio::runtime::Handle {
        self.0
            .runtime
            .get_or_init(|| {
                tokio::runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .unwrap()
            })
            .handle()
            .clone()
    }
}

#[async_trait::async_trait]
impl CallRemote for ContainerCliContext {
    async fn call_remote(&self, method: &str, params: Value) -> Result<Value, RpcError> {
        call_remote_socket(
            tokio::net::UnixStream::connect(&self.0.socket)
                .await
                .map_err(|e| RpcError {
                    data: Some(e.to_string().into()),
                    ..yajrc::INTERNAL_ERROR
                })?,
            method,
            params,
        )
        .await
    }
}
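The CLI context above builds its Tokio runtime lazily: the first call to `runtime()` constructs it via `OnceCell::get_or_init`, and every later call reuses the cached instance. A std-only sketch of that get-or-init pattern, using `std::sync::OnceLock` and a string payload as hypothetical stand-ins for the runtime:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the `OnceCell<Runtime>` field: the first
// caller pays the construction cost, later callers reuse the value.
struct Lazy {
    slot: OnceLock<String>,
}

impl Lazy {
    fn new() -> Self {
        Lazy { slot: OnceLock::new() }
    }
    // Mirrors the `get_or_init` call in `ContainerCliContext::runtime`.
    fn get(&self) -> &str {
        self.slot.get_or_init(|| "initialized".to_string())
    }
}

fn main() {
    let l = Lazy::new();
    assert_eq!(l.get(), "initialized");
    // The second call returns the cached value; the closure never re-runs.
    assert_eq!(l.get(), "initialized");
}
```

The same shape is why `runtime()` can take `&self`: initialization is interior and happens at most once.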
22
core/startos/src/service/config.rs
Normal file
@@ -0,0 +1,22 @@
use std::collections::BTreeMap;

use models::PackageId;

use crate::config::ConfigureContext;
use crate::prelude::*;
use crate::service::Service;

impl Service {
    pub async fn configure(
        &self,
        ConfigureContext {
            breakages,
            timeout,
            config,
            overrides,
            dry_run,
        }: ConfigureContext,
    ) -> Result<BTreeMap<PackageId, String>, Error> {
        todo!()
    }
}
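`configure` destructures its `ConfigureContext` argument directly in the function signature, so each field is immediately a local binding. A minimal, self-contained illustration of that Rust idiom, with a hypothetical `Ctx` struct standing in for `ConfigureContext`:

```rust
// Hypothetical mirror of ConfigureContext, reduced to two fields.
struct Ctx {
    dry_run: bool,
    timeout: u64,
}

// The pattern in the parameter list binds `dry_run` and `timeout`
// directly, exactly as `Service::configure` does above.
fn configure(Ctx { dry_run, timeout }: Ctx) -> String {
    format!("dry_run={dry_run} timeout={timeout}")
}

fn main() {
    let out = configure(Ctx { dry_run: true, timeout: 30 });
    assert_eq!(out, "dry_run=true timeout=30");
}
```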
45
core/startos/src/service/control.rs
Normal file
@@ -0,0 +1,45 @@
use crate::prelude::*;
use crate::service::start_stop::StartStop;
use crate::service::transition::TransitionKind;
use crate::service::{Service, ServiceActor};
use crate::util::actor::{BackgroundJobs, Handler};

struct Start;
#[async_trait::async_trait]
impl Handler<Start> for ServiceActor {
    type Response = ();
    async fn handle(&mut self, _: Start, _: &mut BackgroundJobs) -> Self::Response {
        self.0.desired_state.send_replace(StartStop::Start);
        self.0.synchronized.notified().await
    }
}
impl Service {
    pub async fn start(&self) -> Result<(), Error> {
        self.actor.send(Start).await
    }
}

struct Stop;
#[async_trait::async_trait]
impl Handler<Stop> for ServiceActor {
    type Response = ();
    async fn handle(&mut self, _: Stop, _: &mut BackgroundJobs) -> Self::Response {
        self.0.desired_state.send_replace(StartStop::Stop);
        if self.0.transition_state.borrow().as_ref().map(|t| t.kind())
            == Some(TransitionKind::Restarting)
        {
            if let Some(restart) = self.0.transition_state.send_replace(None) {
                restart.abort().await;
            } else {
                #[cfg(feature = "unstable")]
                unreachable!()
            }
        }
        self.0.synchronized.notified().await
    }
}
impl Service {
    pub async fn stop(&self) -> Result<(), Error> {
        self.actor.send(Stop).await
    }
}
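Note that `start` and `stop` never touch the container directly: each sends a message to the actor, which records the new desired state and lets the caller wait for acknowledgement (`synchronized.notified().await`). A std-only sketch of that request/acknowledge shape (names hypothetical; the real code uses async handlers and a `tokio::sync::Notify`):

```rust
use std::sync::mpsc;
use std::thread;

enum Msg {
    Start,
    Stop,
}

// What a handler records as the new desired state for each message.
fn desired_after(msg: &Msg) -> &'static str {
    match msg {
        Msg::Start => "start",
        Msg::Stop => "stop",
    }
}

fn main() {
    // Each request carries a reply channel, akin to awaiting `synchronized`.
    let (tx, rx) = mpsc::channel::<(Msg, mpsc::Sender<&'static str>)>();
    let actor = thread::spawn(move || {
        // The actor owns the state and handles one message at a time.
        while let Ok((msg, ack)) = rx.recv() {
            ack.send(desired_after(&msg)).unwrap();
        }
    });
    let (ack_tx, ack_rx) = mpsc::channel();
    tx.send((Msg::Start, ack_tx)).unwrap();
    assert_eq!(ack_rx.recv().unwrap(), "start");
    drop(tx); // closing the channel shuts the actor down
    actor.join().unwrap();
}
```

Serializing all mutations through one actor is what lets the handlers mutate `desired_state` without extra locking.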
5
core/startos/src/service/fake.cert.key
Normal file
@@ -0,0 +1,5 @@
-----BEGIN EC PRIVATE KEY-----
MHcCAQEEINn5jiv9VFgEwdUJsDksSTAjPKwkl2DCmCmumu4D1GnNoAoGCCqGSM49
AwEHoUQDQgAE5KuqP+Wdn8pzmNMxK2hya6mKj1H0j5b47y97tIXqf5ajTi8koRPl
yao3YcqdtBtN37aw4rVlXVwEJIozZgyiyA==
-----END EC PRIVATE KEY-----
13
core/startos/src/service/fake.cert.pem
Normal file
@@ -0,0 +1,13 @@
-----BEGIN CERTIFICATE-----
MIIB9DCCAZmgAwIBAgIUIWsFiA8JqIqeUo+Psn91oCQIcdwwCgYIKoZIzj0EAwIw
TzELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkNPMRowGAYDVQQKDBFTdGFydDkgTGFi
cywgSW5jLjEXMBUGA1UEAwwOZmFrZW5hbWUubG9jYWwwHhcNMjQwMjE0MTk1MTUz
WhcNMjUwMjEzMTk1MTUzWjBPMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQ08xGjAY
BgNVBAoMEVN0YXJ0OSBMYWJzLCBJbmMuMRcwFQYDVQQDDA5mYWtlbmFtZS5sb2Nh
bDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOSrqj/lnZ/Kc5jTMStocmupio9R
9I+W+O8ve7SF6n+Wo04vJKET5cmqN2HKnbQbTd+2sOK1ZV1cBCSKM2YMosijUzBR
MB0GA1UdDgQWBBR+qd4W//H34Eg90yAPjYz3nZK79DAfBgNVHSMEGDAWgBR+qd4W
//H34Eg90yAPjYz3nZK79DAPBgNVHRMBAf8EBTADAQH/MAoGCCqGSM49BAMCA0kA
MEYCIQDNSN9YWkGbntG+nC+NzEyqE9FcvYZ8TaF3sOnthqSVKwIhAM2N+WJG/p4C
cPl4HSPPgDaOIhVZzxSje2ycb7wvFtpH
-----END CERTIFICATE-----
542
core/startos/src/service/mod.rs
Normal file
@@ -0,0 +1,542 @@
use std::sync::Arc;
use std::time::Duration;

use chrono::{DateTime, Utc};
use clap::Parser;
use futures::future::BoxFuture;
use imbl::OrdMap;
use models::{ActionId, HealthCheckId, PackageId, ProcedureName};
use persistent_container::PersistentContainer;
use rpc_toolkit::{from_fn_async, CallRemoteHandler, Handler, HandlerArgs};
use serde::{Deserialize, Serialize};
use start_stop::StartStop;
use tokio::sync::{watch, Notify};

use crate::action::ActionResult;
use crate::config::action::ConfigRes;
use crate::context::{CliContext, RpcContext};
use crate::core::rpc_continuations::RequestGuid;
use crate::db::model::{
    CurrentDependencies, CurrentDependents, InstalledPackageInfo, PackageDataEntry,
    PackageDataEntryInstalled, PackageDataEntryMatchModel, StaticFiles,
};
use crate::disk::mount::guard::GenericMountGuard;
use crate::install::PKG_ARCHIVE_DIR;
use crate::prelude::*;
use crate::progress::{self, NamedProgress, Progress};
use crate::s9pk::S9pk;
use crate::service::service_map::InstallProgressHandles;
use crate::service::transition::{TempDesiredState, TransitionKind, TransitionState};
use crate::status::health_check::HealthCheckResult;
use crate::status::{DependencyConfigErrors, MainStatus, Status};
use crate::util::actor::{Actor, BackgroundJobs, SimpleActor};
use crate::volume::data_dir;

pub mod cli;
mod config;
mod control;
pub mod persistent_container;
mod rpc;
pub mod service_effect_handler;
pub mod service_map;
mod start_stop;
mod transition;
mod util;

pub use service_map::ServiceMap;

pub const HEALTH_CHECK_COOLDOWN_SECONDS: u64 = 15;
pub const HEALTH_CHECK_GRACE_PERIOD_SECONDS: u64 = 5;
pub const SYNC_RETRY_COOLDOWN_SECONDS: u64 = 10;

pub type Task<'a> = BoxFuture<'a, Result<(), Error>>;

/// TODO
pub enum BackupReturn {
    TODO,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum LoadDisposition {
    Retry,
    Undo,
}

pub struct Service {
    actor: SimpleActor<ServiceActor>,
    seed: Arc<ServiceActorSeed>,
}
impl Service {
    #[instrument(skip_all)]
    async fn new(ctx: RpcContext, s9pk: S9pk, start: StartStop) -> Result<Self, Error> {
        let id = s9pk.as_manifest().id.clone();
        let desired_state = watch::channel(start).0;
        let temp_desired_state = TempDesiredState(Arc::new(watch::channel(None).0));
        let persistent_container = PersistentContainer::new(
            &ctx,
            s9pk,
            // desired_state.subscribe(),
            // temp_desired_state.subscribe(),
        )
        .await?;
        let seed = Arc::new(ServiceActorSeed {
            id,
            running_status: persistent_container.running_status.subscribe(),
            persistent_container,
            ctx,
            desired_state,
            temp_desired_state,
            transition_state: Arc::new(watch::channel(None).0),
            synchronized: Arc::new(Notify::new()),
        });
        seed.persistent_container
            .init(Arc::downgrade(&seed))
            .await?;
        Ok(Self {
            actor: SimpleActor::new(ServiceActor(seed.clone())),
            seed,
        })
    }

    #[instrument(skip_all)]
    pub async fn load(
        ctx: &RpcContext,
        id: &PackageId,
        disposition: LoadDisposition,
    ) -> Result<Option<Self>, Error> {
        let handle_installed = {
            let ctx = ctx.clone();
            move |s9pk: S9pk, i: Model<InstalledPackageInfo>| async move {
                for volume_id in &s9pk.as_manifest().volumes {
                    let tmp_path =
                        data_dir(&ctx.datadir, &s9pk.as_manifest().id.clone(), volume_id);
                    if tokio::fs::metadata(&tmp_path).await.is_err() {
                        tokio::fs::create_dir_all(&tmp_path).await?;
                    }
                }
                let start_stop = if i.as_status().as_main().de()?.running() {
                    StartStop::Start
                } else {
                    StartStop::Stop
                };
                Self::new(ctx, s9pk, start_stop).await.map(Some)
            }
        };
        let s9pk_dir = ctx.datadir.join(PKG_ARCHIVE_DIR).join("installed"); // TODO: make this based on hash
        let s9pk_path = s9pk_dir.join(id).with_extension("s9pk");
        match ctx
            .db
            .peek()
            .await
            .into_package_data()
            .into_idx(id)
            .map(|pde| pde.into_match())
        {
            Some(PackageDataEntryMatchModel::Installing(_)) => {
                if disposition == LoadDisposition::Retry {
                    if let Ok(s9pk) = S9pk::open(s9pk_path, Some(id)).await.map_err(|e| {
                        tracing::error!("Error opening s9pk for install: {e}");
                        tracing::debug!("{e:?}")
                    }) {
                        if let Ok(service) = Self::install(ctx.clone(), s9pk, None, None)
                            .await
                            .map_err(|e| {
                                tracing::error!("Error installing service: {e}");
                                tracing::debug!("{e:?}")
                            })
                        {
                            return Ok(Some(service));
                        }
                    }
                }
                // TODO: delete s9pk?
                ctx.db
                    .mutate(|v| v.as_package_data_mut().remove(id))
                    .await?;
                Ok(None)
            }
            Some(PackageDataEntryMatchModel::Updating(e)) => {
                if disposition == LoadDisposition::Retry
                    && e.as_install_progress().de()?.phases.iter().any(
                        |NamedProgress { name, progress }| {
                            name.eq_ignore_ascii_case("download")
                                && progress == &Progress::Complete(true)
                        },
                    )
                {
                    if let Ok(s9pk) = S9pk::open(&s9pk_path, Some(id)).await.map_err(|e| {
                        tracing::error!("Error opening s9pk for update: {e}");
                        tracing::debug!("{e:?}")
                    }) {
                        if let Ok(service) = Self::install(
                            ctx.clone(),
                            s9pk,
                            Some(e.as_installed().as_manifest().as_version().de()?),
                            None,
                        )
                        .await
                        .map_err(|e| {
                            tracing::error!("Error installing service: {e}");
                            tracing::debug!("{e:?}")
                        }) {
                            return Ok(Some(service));
                        }
                    }
                }
                let s9pk = S9pk::open(s9pk_path, Some(id)).await?;
                ctx.db
                    .mutate({
                        let manifest = s9pk.as_manifest().clone();
                        |db| {
                            db.as_package_data_mut()
                                .as_idx_mut(&manifest.id)
                                .or_not_found(&manifest.id)?
                                .ser(&PackageDataEntry::Installed(PackageDataEntryInstalled {
                                    static_files: e.as_static_files().de()?,
                                    manifest,
                                    installed: e.as_installed().de()?,
                                }))
                        }
                    })
                    .await?;
                handle_installed(s9pk, e.as_installed().clone()).await
            }
            Some(PackageDataEntryMatchModel::Removing(_))
            | Some(PackageDataEntryMatchModel::Restoring(_)) => {
                if let Ok(s9pk) = S9pk::open(s9pk_path, Some(id)).await.map_err(|e| {
                    tracing::error!("Error opening s9pk for removal: {e}");
                    tracing::debug!("{e:?}")
                }) {
                    if let Ok(service) = Self::new(ctx.clone(), s9pk, StartStop::Stop)
                        .await
                        .map_err(|e| {
                            tracing::error!("Error loading service for removal: {e}");
                            tracing::debug!("{e:?}")
                        })
                    {
                        if service
                            .uninstall(None)
                            .await
                            .map_err(|e| {
                                tracing::error!("Error uninstalling service: {e}");
                                tracing::debug!("{e:?}")
                            })
                            .is_ok()
                        {
                            return Ok(None);
                        }
                    }
                }

                ctx.db
                    .mutate(|v| v.as_package_data_mut().remove(id))
                    .await?;

                Ok(None)
            }
            Some(PackageDataEntryMatchModel::Installed(i)) => {
                handle_installed(
                    S9pk::open(s9pk_path, Some(id)).await?,
                    i.as_installed().clone(),
                )
                .await
            }
            Some(PackageDataEntryMatchModel::Error(e)) => Err(Error::new(
                eyre!("Failed to parse PackageDataEntry, found {e:?}"),
                ErrorKind::Deserialization,
            )),
            None => Ok(None),
        }
    }

    #[instrument(skip_all)]
    pub async fn install(
        ctx: RpcContext,
        s9pk: S9pk,
        src_version: Option<models::Version>,
        progress: Option<InstallProgressHandles>,
    ) -> Result<Self, Error> {
        let manifest = s9pk.as_manifest().clone();
        let developer_key = s9pk.as_archive().signer();
        let icon = s9pk.icon_data_url().await?;
        let static_files = StaticFiles::local(&manifest.id, &manifest.version, icon);
        let service = Self::new(ctx.clone(), s9pk, StartStop::Stop).await?;
        service
            .seed
            .persistent_container
            .execute(ProcedureName::Init, to_value(&src_version)?, None) // TODO timeout
            .await
            .with_kind(ErrorKind::MigrationFailed)?; // TODO: handle cancellation
        if let Some(mut progress) = progress {
            progress.finalization_progress.complete();
            progress.progress_handle.complete();
            tokio::task::yield_now().await;
        }
        ctx.db
            .mutate(|d| {
                d.as_package_data_mut()
                    .as_idx_mut(&manifest.id)
                    .or_not_found(&manifest.id)?
                    .ser(&PackageDataEntry::Installed(PackageDataEntryInstalled {
                        installed: InstalledPackageInfo {
                            current_dependencies: Default::default(), // TODO
                            current_dependents: Default::default(),   // TODO
                            dependency_info: Default::default(),      // TODO
                            developer_key,
                            status: Status {
                                configured: false,         // TODO
                                main: MainStatus::Stopped, // TODO
                                dependency_config_errors: Default::default(), // TODO
                            },
                            interface_addresses: Default::default(), // TODO
                            marketplace_url: None,                   // TODO
                            manifest: manifest.clone(),
                            last_backup: None,                            // TODO
                            store: Value::Null,                           // TODO
                            store_exposed_dependents: Default::default(), // TODO
                            store_exposed_ui: Default::default(),         // TODO
                        },
                        manifest,
                        static_files,
                    }))
            })
            .await?;
        Ok(service)
    }

    pub async fn restore(
        ctx: RpcContext,
        s9pk: S9pk,
        guard: impl GenericMountGuard,
        progress: Option<InstallProgressHandles>,
    ) -> Result<Self, Error> {
        // TODO
        Err(Error::new(eyre!("not yet implemented"), ErrorKind::Unknown))
    }

    pub async fn get_config(&self) -> Result<ConfigRes, Error> {
        let container = &self.seed.persistent_container;
        container
            .execute::<ConfigRes>(
                ProcedureName::GetConfig,
                Value::Null,
                Some(Duration::from_secs(30)), // TODO timeout
            )
            .await
            .with_kind(ErrorKind::ConfigGen)
    }

    // TODO DO the Action Get

    pub async fn action(&self, id: ActionId, input: Value) -> Result<ActionResult, Error> {
        let container = &self.seed.persistent_container;
        container
            .execute::<ActionResult>(
                ProcedureName::RunAction(id),
                input,
                Some(Duration::from_secs(30)),
            )
            .await
            .with_kind(ErrorKind::Action)
    }

    pub async fn shutdown(self) -> Result<(), Error> {
        self.actor
            .shutdown(crate::util::actor::PendingMessageStrategy::FinishAll { timeout: None }) // TODO timeout
            .await;
        if let Some((hdl, shutdown)) = self.seed.persistent_container.rpc_server.send_replace(None)
        {
            shutdown.shutdown();
            hdl.await.with_kind(ErrorKind::Cancelled)?;
        }
        Arc::try_unwrap(self.seed)
            .map_err(|_| {
                Error::new(
                    eyre!("ServiceActorSeed held somewhere after actor shutdown"),
                    ErrorKind::Unknown,
                )
            })?
            .persistent_container
            .exit()
            .await?;
        Ok(())
    }

    pub async fn uninstall(self, target_version: Option<models::Version>) -> Result<(), Error> {
        self.seed
            .persistent_container
            .execute(ProcedureName::Uninit, to_value(&target_version)?, None) // TODO timeout
            .await?;
        self.shutdown().await
    }
    pub async fn backup(&self, guard: impl GenericMountGuard) -> Result<BackupReturn, Error> {
        // TODO
        Err(Error::new(eyre!("not yet implemented"), ErrorKind::Unknown))
    }
}

#[derive(Clone)]
struct RunningStatus {
    health: OrdMap<HealthCheckId, HealthCheckResult>,
    started: DateTime<Utc>,
}

pub(self) struct ServiceActorSeed {
    ctx: RpcContext,
    id: PackageId,
    persistent_container: PersistentContainer,
    desired_state: watch::Sender<StartStop>,
    temp_desired_state: TempDesiredState,
    transition_state: Arc<watch::Sender<Option<TransitionState>>>,
    running_status: watch::Receiver<Option<RunningStatus>>,
    synchronized: Arc<Notify>,
}

struct ServiceActor(Arc<ServiceActorSeed>);
impl Actor for ServiceActor {
    fn init(&mut self, jobs: &mut BackgroundJobs) {
        let seed = self.0.clone();
        jobs.add_job(async move {
            let id = seed.id.clone();
            let mut current = seed.persistent_container.current_state.subscribe();
            let mut desired = seed.desired_state.subscribe();
            let mut temp_desired = seed.temp_desired_state.subscribe();
            let mut transition = seed.transition_state.subscribe();
            let mut running = seed.running_status.clone();
            loop {
                let (desired_state, current_state, transition_kind, running_status) = (
                    temp_desired.borrow().unwrap_or(*desired.borrow()),
                    *current.borrow(),
                    transition.borrow().as_ref().map(|t| t.kind()),
                    running.borrow().clone(),
                );

                if let Err(e) = async {
                    seed.ctx
                        .db
                        .mutate(|d| {
                            if let Some(i) = d
                                .as_package_data_mut()
                                .as_idx_mut(&id)
                                .and_then(|p| p.as_installed_mut())
                            {
                                i.as_status_mut().as_main_mut().ser(&match (
                                    transition_kind,
                                    desired_state,
                                    current_state,
                                    running_status,
                                ) {
                                    (Some(TransitionKind::Restarting), _, _, _) => {
                                        MainStatus::Restarting
                                    }
                                    (Some(TransitionKind::BackingUp), _, _, Some(status)) => {
                                        MainStatus::BackingUp {
                                            started: Some(status.started),
                                            health: status.health.clone(),
                                        }
                                    }
                                    (Some(TransitionKind::BackingUp), _, _, None) => {
                                        MainStatus::BackingUp {
                                            started: None,
                                            health: OrdMap::new(),
                                        }
                                    }
                                    (None, StartStop::Stop, StartStop::Stop, _) => {
                                        MainStatus::Stopped
                                    }
                                    (None, StartStop::Stop, StartStop::Start, _) => {
                                        MainStatus::Stopping {
                                            timeout: todo!("sigterm timeout"),
                                        }
                                    }
                                    (None, StartStop::Start, StartStop::Stop, _) => {
                                        MainStatus::Starting
                                    }
                                    (None, StartStop::Start, StartStop::Start, None) => {
                                        MainStatus::Starting
                                    }
                                    (None, StartStop::Start, StartStop::Start, Some(status)) => {
                                        MainStatus::Running {
                                            started: status.started,
                                            health: status.health.clone(),
                                        }
                                    }
                                })?;
                            }
                            Ok(())
                        })
                        .await?;
                    match (desired_state, current_state) {
                        (StartStop::Start, StartStop::Stop) => {
                            seed.persistent_container.start().await
                        }
                        (StartStop::Stop, StartStop::Start) => {
                            seed.persistent_container
                                .stop(todo!("s9pk sigterm timeout"))
                                .await
                        }
                        _ => Ok(()),
                    }
                }
                .await
                {
                    tracing::error!("error synchronizing state of service: {e}");
                    tracing::debug!("{e:?}");

                    seed.synchronized.notify_waiters();

                    tracing::error!("Retrying in {}s...", SYNC_RETRY_COOLDOWN_SECONDS);
                    tokio::time::sleep(Duration::from_secs(SYNC_RETRY_COOLDOWN_SECONDS)).await;
                    continue;
                }

                seed.synchronized.notify_waiters();

                tokio::select! {
                    _ = current.changed() => (),
                    _ = desired.changed() => (),
                    _ = temp_desired.changed() => (),
                    _ = transition.changed() => (),
                    _ = running.changed() => (),
                }
            }
        })
    }
}

#[derive(Deserialize, Serialize, Parser)]
pub struct ConnectParams {
    pub id: PackageId,
}

pub async fn connect_rpc(
    ctx: RpcContext,
    ConnectParams { id }: ConnectParams,
) -> Result<RequestGuid, Error> {
    let id_ref = &id;
    crate::lxc::connect(
        &ctx,
        ctx.services
            .get(&id)
            .await
            .as_ref()
            .or_not_found(lazy_format!("service for {id_ref}"))?
            .seed
            .persistent_container
            .lxc_container
            .get()
            .or_not_found(lazy_format!("container for {id_ref}"))?,
    )
    .await
}

pub async fn connect_rpc_cli(
    handle_args: HandlerArgs<CliContext, ConnectParams>,
) -> Result<(), Error> {
    let ctx = handle_args.context.clone();
    let guid = CallRemoteHandler::<CliContext, _>::new(from_fn_async(connect_rpc))
        .handle_async(handle_args)
        .await?;

    crate::lxc::connect_cli(&ctx, guid).await
}
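The status match in `ServiceActor::init` derives the user-visible `MainStatus` from four inputs: an in-flight transition, the desired state, the container's current state, and whether a running status exists. A condensed, self-contained sketch of that decision table (flattened to booleans, with the health and timestamp payloads dropped; the real code matches on `TransitionKind` and carries them through):

```rust
#[derive(Clone, Copy)]
enum StartStop {
    Start,
    Stop,
}

#[derive(Debug, PartialEq)]
enum Main {
    Stopped,
    Stopping,
    Starting,
    Running,
    Restarting,
    BackingUp,
}

// Transitions win; otherwise desired vs current (plus whether a running
// status has been reported yet) determines what the UI should display.
fn main_status(
    restarting: bool,
    backing_up: bool,
    desired: StartStop,
    current: StartStop,
    running: bool,
) -> Main {
    use StartStop::*;
    match (restarting, backing_up, desired, current, running) {
        (true, _, _, _, _) => Main::Restarting,
        (_, true, _, _, _) => Main::BackingUp,
        (_, _, Stop, Stop, _) => Main::Stopped,
        (_, _, Stop, Start, _) => Main::Stopping,
        (_, _, Start, Stop, _) => Main::Starting,
        (_, _, Start, Start, false) => Main::Starting, // up, not yet reporting
        (_, _, Start, Start, true) => Main::Running,
    }
}

fn main() {
    assert_eq!(
        main_status(false, false, StartStop::Start, StartStop::Start, true),
        Main::Running
    );
    assert_eq!(
        main_status(true, false, StartStop::Stop, StartStop::Start, false),
        Main::Restarting
    );
}
```

The exhaustive match is the point of the design: adding a new transition kind or state forces every status combination to be reconsidered at compile time.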
365
core/startos/src/service/persistent_container.rs
Normal file
@@ -0,0 +1,365 @@
|
||||
use std::collections::BTreeMap;
|
||||
use std::path::Path;
|
||||
use std::sync::{Arc, Weak};
|
||||
use std::time::Duration;
|
||||
|
||||
use futures::future::ready;
|
||||
use futures::Future;
|
||||
use helpers::NonDetachingJoinHandle;
|
||||
use imbl_value::InternedString;
|
||||
use models::{ProcedureName, VolumeId};
|
||||
use rpc_toolkit::{Empty, Server, ShutdownHandle};
|
||||
use serde::de::DeserializeOwned;
|
||||
use tokio::fs::File;
|
||||
use tokio::process::Command;
use tokio::sync::{oneshot, watch, Mutex, OnceCell};
use tracing::instrument;

use super::service_effect_handler::{service_effect_handler, EffectContext};
use super::ServiceActorSeed;
use crate::context::RpcContext;
use crate::disk::mount::filesystem::bind::Bind;
use crate::disk::mount::filesystem::idmapped::IdMapped;
use crate::disk::mount::filesystem::loop_dev::LoopDev;
use crate::disk::mount::filesystem::overlayfs::OverlayGuard;
use crate::disk::mount::filesystem::{MountType, ReadOnly};
use crate::disk::mount::guard::{GenericMountGuard, MountGuard};
use crate::lxc::{LxcConfig, LxcContainer, HOST_RPC_SERVER_SOCKET};
use crate::prelude::*;
use crate::s9pk::merkle_archive::source::FileSource;
use crate::s9pk::S9pk;
use crate::service::start_stop::StartStop;
use crate::service::{rpc, RunningStatus};
use crate::util::rpc_client::UnixRpcClient;
use crate::util::Invoke;
use crate::volume::{asset_dir, data_dir};
use crate::ARCH;

const RPC_CONNECT_TIMEOUT: Duration = Duration::from_secs(10);

struct ProcedureId(u64);

// @DRB On top of this, we also need the procedures to have the effects and get the results back for them; maybe lock them to the running instance?
/// The LXC container running the JavaScript init system, which can be driven
/// via a JSON-RPC client connected to a Unix domain socket served by the
/// container.
pub struct PersistentContainer {
    pub(super) s9pk: S9pk,
    pub(super) lxc_container: OnceCell<LxcContainer>,
    rpc_client: UnixRpcClient,
    pub(super) rpc_server: watch::Sender<Option<(NonDetachingJoinHandle<()>, ShutdownHandle)>>,
    // procedures: Mutex<Vec<(ProcedureName, ProcedureId)>>,
    js_mount: MountGuard,
    volumes: BTreeMap<VolumeId, MountGuard>,
    assets: BTreeMap<VolumeId, MountGuard>,
    pub(super) overlays: Arc<Mutex<BTreeMap<InternedString, OverlayGuard>>>,
    pub(super) current_state: watch::Sender<StartStop>,
    // pub(super) desired_state: watch::Receiver<StartStop>,
    // pub(super) temp_desired_state: watch::Receiver<Option<StartStop>>,
    pub(super) running_status: watch::Sender<Option<RunningStatus>>,
}

impl PersistentContainer {
    #[instrument(skip_all)]
    pub async fn new(
        ctx: &RpcContext,
        s9pk: S9pk,
        // desired_state: watch::Receiver<StartStop>,
        // temp_desired_state: watch::Receiver<Option<StartStop>>,
    ) -> Result<Self, Error> {
        let lxc_container = ctx.lxc_manager.create(LxcConfig::default()).await?;
        let rpc_client = lxc_container.connect_rpc(Some(RPC_CONNECT_TIMEOUT)).await?;
        let js_mount = MountGuard::mount(
            &LoopDev::from(
                &**s9pk
                    .as_archive()
                    .contents()
                    .get_path("javascript.squashfs")
                    .and_then(|f| f.as_file())
                    .or_not_found("javascript")?,
            ),
            lxc_container.rootfs_dir().join("usr/lib/startos/package"),
            ReadOnly,
        )
        .await?;
        let mut volumes = BTreeMap::new();
        for volume in &s9pk.as_manifest().volumes {
            let mount = MountGuard::mount(
                &IdMapped::new(
                    Bind::new(data_dir(&ctx.datadir, &s9pk.as_manifest().id, volume)),
                    0,
                    100000,
                    65536,
                ),
                lxc_container
                    .rootfs_dir()
                    .join("media/startos/volumes")
                    .join(volume),
                MountType::ReadWrite,
            )
            .await?;
            volumes.insert(volume.clone(), mount);
        }
        let mut assets = BTreeMap::new();
        for asset in &s9pk.as_manifest().assets {
            assets.insert(
                asset.clone(),
                MountGuard::mount(
                    &Bind::new(
                        asset_dir(
                            &ctx.datadir,
                            &s9pk.as_manifest().id,
                            &s9pk.as_manifest().version,
                        )
                        .join(asset),
                    ),
                    lxc_container
                        .rootfs_dir()
                        .join("media/startos/assets")
                        .join(asset),
                    MountType::ReadWrite,
                )
                .await?,
            );
        }
        let image_path = lxc_container.rootfs_dir().join("media/startos/images");
        tokio::fs::create_dir_all(&image_path).await?;
        for image in &s9pk.as_manifest().images {
            let env_filename = Path::new(image.as_ref()).with_extension("env");
            if let Some(env) = s9pk
                .as_archive()
                .contents()
                .get_path(Path::new("images").join(&*ARCH).join(&env_filename))
                .and_then(|e| e.as_file())
            {
                env.copy(&mut File::create(image_path.join(&env_filename)).await?)
                    .await?;
            }
            let json_filename = Path::new(image.as_ref()).with_extension("json");
            if let Some(json) = s9pk
                .as_archive()
                .contents()
                .get_path(Path::new("images").join(&*ARCH).join(&json_filename))
                .and_then(|e| e.as_file())
            {
                json.copy(&mut File::create(image_path.join(&json_filename)).await?)
                    .await?;
            }
        }
        Ok(Self {
            s9pk,
            lxc_container: OnceCell::new_with(Some(lxc_container)),
            rpc_client,
            rpc_server: watch::channel(None).0,
            // procedures: Default::default(),
            js_mount,
            volumes,
            assets,
            overlays: Arc::new(Mutex::new(BTreeMap::new())),
            current_state: watch::channel(StartStop::Stop).0,
            // desired_state,
            // temp_desired_state,
            running_status: watch::channel(None).0,
        })
    }

    #[instrument(skip_all)]
    pub async fn init(&self, seed: Weak<ServiceActorSeed>) -> Result<(), Error> {
        let socket_server_context = EffectContext::new(seed);
        let server = Server::new(
            move || ready(Ok(socket_server_context.clone())),
            service_effect_handler(),
        );
        let path = self
            .lxc_container
            .get()
            .ok_or_else(|| {
                Error::new(
                    eyre!("PersistentContainer has been destroyed"),
                    ErrorKind::Incoherent,
                )
            })?
            .rpc_dir()
            .join(HOST_RPC_SERVER_SOCKET);
        let (send, recv) = oneshot::channel();
        let handle = NonDetachingJoinHandle::from(tokio::spawn(async move {
            let (shutdown, fut) = match async {
                let res = server.run_unix(&path, |err| {
                    tracing::error!("error on unix socket {}: {err}", path.display())
                })?;
                Command::new("chown")
                    .arg("100000:100000")
                    .arg(&path)
                    .invoke(ErrorKind::Filesystem)
                    .await?;
                Ok::<_, Error>(res)
            }
            .await
            {
                Ok((shutdown, fut)) => (Ok(shutdown), Some(fut)),
                Err(e) => (Err(e), None),
            };
            if send.send(shutdown).is_err() {
                panic!("failed to send shutdown handle");
            }
            if let Some(fut) = fut {
                fut.await;
            }
        }));
        let shutdown = recv.await.map_err(|_| {
            Error::new(
                eyre!("unix socket server thread panicked"),
                ErrorKind::Unknown,
            )
        })??;
        if self
            .rpc_server
            .send_replace(Some((handle, shutdown)))
            .is_some()
        {
            return Err(Error::new(
                eyre!("PersistentContainer already initialized"),
                ErrorKind::InvalidRequest,
            ));
        }

        self.rpc_client.request(rpc::Init, Empty {}).await?;

        Ok(())
    }

    #[instrument(skip_all)]
    fn destroy(&mut self) -> impl Future<Output = Result<(), Error>> + 'static {
        let rpc_client = self.rpc_client.clone();
        let rpc_server = self.rpc_server.send_replace(None);
        let js_mount = self.js_mount.take();
        let volumes = std::mem::take(&mut self.volumes);
        let assets = std::mem::take(&mut self.assets);
        let overlays = self.overlays.clone();
        let lxc_container = self.lxc_container.take();
        async move {
            let mut errs = ErrorCollection::new();
            errs.handle(rpc_client.request(rpc::Exit, Empty {}).await);
            if let Some((hdl, shutdown)) = rpc_server {
                shutdown.shutdown();
                errs.handle(hdl.await.with_kind(ErrorKind::Cancelled));
            }
            for (_, volume) in volumes {
                errs.handle(volume.unmount(true).await);
            }
            for (_, asset) in assets {
                errs.handle(asset.unmount(true).await);
            }
            for (_, overlay) in std::mem::take(&mut *overlays.lock().await) {
                errs.handle(overlay.unmount(true).await);
            }
            errs.handle(js_mount.unmount(true).await);
            if let Some(lxc_container) = lxc_container {
                errs.handle(lxc_container.exit().await);
            }
            errs.into_result()
        }
    }

    #[instrument(skip_all)]
    pub async fn exit(mut self) -> Result<(), Error> {
        self.destroy().await?;

        Ok(())
    }

    #[instrument(skip_all)]
    pub async fn start(&self) -> Result<(), Error> {
        self.execute(
            ProcedureName::StartMain,
            Value::Null,
            Some(Duration::from_secs(5)), // TODO
        )
        .await?;
        Ok(())
    }

    #[instrument(skip_all)]
    pub async fn stop(&self, timeout: Option<Duration>) -> Result<(), Error> {
        self.execute(ProcedureName::StopMain, Value::Null, timeout)
            .await?;
        Ok(())
    }

    #[instrument(skip_all)]
    pub async fn execute<O>(
        &self,
        name: ProcedureName,
        input: Value,
        timeout: Option<Duration>,
    ) -> Result<O, Error>
    where
        O: DeserializeOwned,
    {
        self._execute(name, input, timeout)
            .await
            .and_then(from_value)
    }

    #[instrument(skip_all)]
    pub async fn sandboxed<O>(
        &self,
        name: ProcedureName,
        input: Value,
        timeout: Option<Duration>,
    ) -> Result<O, Error>
    where
        O: DeserializeOwned,
    {
        self._sandboxed(name, input, timeout)
            .await
            .and_then(from_value)
    }

    #[instrument(skip_all)]
    async fn _execute(
        &self,
        name: ProcedureName,
        input: Value,
        timeout: Option<Duration>,
    ) -> Result<Value, Error> {
        let fut = self
            .rpc_client
            .request(rpc::Execute, rpc::ExecuteParams::new(name, input, timeout));

        Ok(if let Some(timeout) = timeout {
            tokio::time::timeout(timeout, fut)
                .await
                .with_kind(ErrorKind::Timeout)??
        } else {
            fut.await?
        })
    }

    #[instrument(skip_all)]
    async fn _sandboxed(
        &self,
        name: ProcedureName,
        input: Value,
        timeout: Option<Duration>,
    ) -> Result<Value, Error> {
        let fut = self
            .rpc_client
            .request(rpc::Sandbox, rpc::ExecuteParams::new(name, input, timeout));

        Ok(if let Some(timeout) = timeout {
            tokio::time::timeout(timeout, fut)
                .await
                .with_kind(ErrorKind::Timeout)??
        } else {
            fut.await?
        })
    }
}

impl Drop for PersistentContainer {
    fn drop(&mut self) {
        let destroy = self.destroy();
        tokio::spawn(async move { destroy.await.unwrap() });
    }
}

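`destroy()` tears down the RPC server, every volume, asset, and overlay mount, and finally the LXC container itself, accumulating failures instead of aborting on the first one, so one busy mount cannot leave the rest of the teardown undone. A minimal, dependency-free sketch of that accumulation pattern (this `ErrorCollection` is a hypothetical stand-in using `String` errors, not the crate's actual type):

```rust
// Hypothetical stand-in for the ErrorCollection used in destroy():
// every teardown step runs, and failures are gathered rather than
// short-circuiting the remaining cleanup.
#[derive(Default)]
struct ErrorCollection(Vec<String>);

impl ErrorCollection {
    // Record a failure (returning None) or pass the value through.
    fn handle<T>(&mut self, res: Result<T, String>) -> Option<T> {
        match res {
            Ok(t) => Some(t),
            Err(e) => {
                self.0.push(e);
                None
            }
        }
    }
    // Ok only if no step failed; otherwise all errors, joined.
    fn into_result(self) -> Result<(), String> {
        if self.0.is_empty() {
            Ok(())
        } else {
            Err(self.0.join("; "))
        }
    }
}

fn main() {
    let mut errs = ErrorCollection::default();
    errs.handle(Ok(())); // first unmount succeeded
    errs.handle::<()>(Err("volume busy".into())); // failure recorded, cleanup continues
    errs.handle(Ok(())); // later steps still run
    assert_eq!(errs.into_result(), Err("volume busy".to_string()));
}
```

The payoff is that `Drop` and `exit()` can run the whole teardown unconditionally and still surface every error at the end.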
core/startos/src/service/rpc.rs (new file, 96 lines)
@@ -0,0 +1,96 @@
use std::time::Duration;

use imbl_value::Value;
use models::ProcedureName;
use rpc_toolkit::yajrc::{RpcError, RpcMethod};
use rpc_toolkit::Empty;

use crate::prelude::*;

#[derive(Clone)]
pub struct Init;
impl RpcMethod for Init {
    type Params = Empty;
    type Response = ();
    fn as_str<'a>(&'a self) -> &'a str {
        "init"
    }
}
impl serde::Serialize for Init {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        serializer.serialize_str(self.as_str())
    }
}

#[derive(Clone)]
pub struct Exit;
impl RpcMethod for Exit {
    type Params = Empty;
    type Response = ();
    fn as_str<'a>(&'a self) -> &'a str {
        "exit"
    }
}
impl serde::Serialize for Exit {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        serializer.serialize_str(self.as_str())
    }
}

#[derive(Clone, serde::Deserialize, serde::Serialize)]
pub struct ExecuteParams {
    procedure: String,
    input: Value,
    timeout: Option<u128>,
}
impl ExecuteParams {
    pub fn new(procedure: ProcedureName, input: Value, timeout: Option<Duration>) -> Self {
        Self {
            procedure: procedure.js_function_name(),
            input,
            timeout: timeout.map(|d| d.as_millis()),
        }
    }
}

#[derive(Clone)]
pub struct Execute;
impl RpcMethod for Execute {
    type Params = ExecuteParams;
    type Response = Value;
    fn as_str<'a>(&'a self) -> &'a str {
        "execute"
    }
}
impl serde::Serialize for Execute {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        serializer.serialize_str(self.as_str())
    }
}

#[derive(Clone)]
pub struct Sandbox;
impl RpcMethod for Sandbox {
    type Params = ExecuteParams;
    type Response = Value;
    fn as_str<'a>(&'a self) -> &'a str {
        "sandbox"
    }
}
impl serde::Serialize for Sandbox {
    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
    where
        S: serde::Serializer,
    {
        serializer.serialize_str(self.as_str())
    }
}

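Each RPC method in the file above is a zero-sized unit struct that serializes to its wire name and pins its parameter and response types at compile time, so a call site like `rpc_client.request(rpc::Execute, params)` cannot be given the wrong payload shape. A simplified, dependency-free sketch of the same pattern (this `Method` trait and `envelope` helper imitate rpc_toolkit's `RpcMethod`; they are not the real API):

```rust
// Imitation of the RpcMethod pattern: each method is a zero-sized type
// carrying its wire name plus typed params/response.
trait Method {
    type Params;
    type Response;
    fn as_str(&self) -> &'static str;
}

struct Init;
impl Method for Init {
    type Params = ();
    type Response = ();
    fn as_str(&self) -> &'static str {
        "init"
    }
}

struct Execute;
impl Method for Execute {
    // (procedure, timeout ms) as a stand-in for ExecuteParams.
    type Params = (String, u64);
    type Response = String;
    fn as_str(&self) -> &'static str {
        "execute"
    }
}

// A client can then build a JSON-RPC request envelope from the method name.
fn envelope<M: Method>(m: &M, id: u64) -> String {
    format!(r#"{{"jsonrpc":"2.0","id":{},"method":"{}"}}"#, id, m.as_str())
}

fn main() {
    assert_eq!(Init.as_str(), "init");
    assert_eq!(
        envelope(&Execute, 1),
        r#"{"jsonrpc":"2.0","id":1,"method":"execute"}"#
    );
}
```

The manual `serde::Serialize` impls in the real file serve the same purpose as `envelope` here: the method value itself serializes to its name string on the wire.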
core/startos/src/service/service_effect_handler.rs (new file, 684 lines)
@@ -0,0 +1,684 @@
use std::ffi::OsString;
use std::os::unix::process::CommandExt;
use std::path::{Path, PathBuf};
use std::str::FromStr;
use std::sync::{Arc, Weak};

use clap::builder::{TypedValueParser, ValueParserFactory};
use clap::Parser;
use imbl_value::json;
use models::{ActionId, HealthCheckId, ImageId, PackageId};
use patch_db::json_ptr::JsonPointer;
use rpc_toolkit::{from_fn, from_fn_async, AnyContext, Context, Empty, HandlerExt, ParentHandler};
use tokio::process::Command;

use crate::disk::mount::filesystem::idmapped::IdMapped;
use crate::disk::mount::filesystem::loop_dev::LoopDev;
use crate::disk::mount::filesystem::overlayfs::OverlayGuard;
use crate::prelude::*;
use crate::s9pk::rpc::SKIP_ENV;
use crate::service::cli::ContainerCliContext;
use crate::service::start_stop::StartStop;
use crate::service::ServiceActorSeed;
use crate::status::health_check::HealthCheckResult;
use crate::status::MainStatus;
use crate::util::clap::FromStrParser;
use crate::util::new_guid;
use crate::{db::model::ExposedUI, util::Invoke};
use crate::{echo, ARCH};

#[derive(Clone)]
pub(super) struct EffectContext(Weak<ServiceActorSeed>);
impl EffectContext {
    pub fn new(seed: Weak<ServiceActorSeed>) -> Self {
        Self(seed)
    }
}
impl Context for EffectContext {}
impl EffectContext {
    fn deref(&self) -> Result<Arc<ServiceActorSeed>, Error> {
        if let Some(seed) = Weak::upgrade(&self.0) {
            Ok(seed)
        } else {
            Err(Error::new(
                eyre!("Service has already been destroyed"),
                ErrorKind::InvalidRequest,
            ))
        }
    }
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
struct RpcData {
    id: i64,
    method: String,
    params: Value,
}

pub fn service_effect_handler() -> ParentHandler {
    ParentHandler::new()
        .subcommand("gitInfo", from_fn(crate::version::git_info))
        .subcommand(
            "echo",
            from_fn(echo).with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand("chroot", from_fn(chroot).no_display())
        .subcommand("exists", from_fn_async(exists).no_cli())
        .subcommand("executeAction", from_fn_async(execute_action).no_cli())
        .subcommand("getConfigured", from_fn_async(get_configured).no_cli())
        .subcommand(
            "stopped",
            from_fn_async(stopped)
                .no_display()
                .with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand(
            "running",
            from_fn_async(running)
                .no_display()
                .with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand(
            "restart",
            from_fn_async(restart)
                .no_display()
                .with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand(
            "shutdown",
            from_fn_async(shutdown)
                .no_display()
                .with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand(
            "setConfigured",
            from_fn_async(set_configured)
                .no_display()
                .with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand(
            "setMainStatus",
            from_fn_async(set_main_status).with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand("setHealth", from_fn_async(set_health).no_cli())
        .subcommand("getStore", from_fn_async(get_store).no_cli())
        .subcommand("setStore", from_fn_async(set_store).no_cli())
        .subcommand(
            "exposeForDependents",
            from_fn_async(expose_for_dependents).no_cli(),
        )
        .subcommand("exposeUi", from_fn_async(expose_ui).no_cli())
        .subcommand(
            "createOverlayedImage",
            from_fn_async(create_overlayed_image)
                .with_custom_display_fn::<AnyContext, _>(|_, path| {
                    Ok(println!("{}", path.display()))
                })
                .with_remote_cli::<ContainerCliContext>(),
        )
        .subcommand(
            "getSslCertificate",
            from_fn_async(get_ssl_certificate).no_cli(),
        )
        .subcommand("getSslKey", from_fn_async(get_ssl_key).no_cli())
    // TODO @DrBonez when we get the new api for 4.0
    // .subcommand("setDependencies", from_fn(set_dependencies))
    // .subcommand("embassyGetInterface", from_fn(embassy_get_interface))
    // .subcommand("mount", from_fn(mount))
    // .subcommand("removeAction", from_fn(remove_action))
    // .subcommand("removeAddress", from_fn(remove_address))
    // .subcommand("exportAction", from_fn(export_action))
    // .subcommand("bind", from_fn(bind))
    // .subcommand("clearNetworkInterfaces", from_fn(clear_network_interfaces))
    // .subcommand("exportNetworkInterface", from_fn(export_network_interface))
    // .subcommand("clearBindings", from_fn(clear_bindings))
    // .subcommand("getHostnames", from_fn(get_hostnames))
    // .subcommand("getInterface", from_fn(get_interface))
    // .subcommand("listInterface", from_fn(list_interface))
    // .subcommand("getIPHostname", from_fn(get_ip_hostname))
    // .subcommand("getContainerIp", from_fn(get_container_ip))
    // .subcommand("getLocalHostname", from_fn(get_local_hostname))
    // .subcommand("getPrimaryUrl", from_fn(get_primary_url))
    // .subcommand("getServicePortForward", from_fn(get_service_port_forward))
    // .subcommand("getServiceTorHostname", from_fn(get_service_tor_hostname))
    // .subcommand("getSystemSmtp", from_fn(get_system_smtp))
    // .subcommand("reverseProxy", from_fn(reverse_proxy))
    // TODO Callbacks
}

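`service_effect_handler` wires each effect name (`echo`, `getStore`, `setHealth`, …) to its handler function through `ParentHandler`'s `subcommand` builder chain. A dependency-free sketch of that name-to-handler routing (this `Router` is illustrative only; rpc_toolkit's real builder also threads typed contexts, async handlers, and CLI integration):

```rust
use std::collections::BTreeMap;

// Illustrative router: maps method names to handlers, as ParentHandler's
// subcommand() chain does for the container's effect RPCs.
struct Router {
    handlers: BTreeMap<&'static str, Box<dyn Fn(&str) -> String>>,
}

impl Router {
    fn new() -> Self {
        Self {
            handlers: BTreeMap::new(),
        }
    }
    // Builder-style registration, consuming and returning self for chaining.
    fn subcommand(mut self, name: &'static str, f: impl Fn(&str) -> String + 'static) -> Self {
        self.handlers.insert(name, Box::new(f));
        self
    }
    // Dispatch by name; None for unregistered methods.
    fn call(&self, name: &str, params: &str) -> Option<String> {
        self.handlers.get(name).map(|h| h(params))
    }
}

fn main() {
    let router = Router::new()
        .subcommand("echo", |p| p.to_string())
        .subcommand("getConfigured", |_| "true".to_string());
    assert_eq!(router.call("echo", "hi").as_deref(), Some("hi"));
    assert_eq!(router.call("unknown", ""), None);
}
```

Because each registration is a value-returning builder call, the long chain above stays declarative: adding an effect is one `.subcommand(...)` line, and commented-out lines (the 4.0 TODOs) are effects not yet routed.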
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, Parser)]
#[serde(rename_all = "camelCase")]
struct ChrootParams {
    #[arg(short = 'e', long = "env")]
    env: Option<PathBuf>,
    #[arg(short = 'w', long = "workdir")]
    workdir: Option<PathBuf>,
    #[arg(short = 'u', long = "user")]
    user: Option<String>,
    path: PathBuf,
    command: OsString,
    args: Vec<OsString>,
}
fn chroot(
    _: AnyContext,
    ChrootParams {
        env,
        workdir,
        user,
        path,
        command,
        args,
    }: ChrootParams,
) -> Result<(), Error> {
    let mut cmd = std::process::Command::new(command);
    if let Some(env) = env {
        for (k, v) in std::fs::read_to_string(env)?
            .lines()
            .map(|l| l.trim())
            .filter_map(|l| l.split_once("="))
            .filter(|(k, _)| !SKIP_ENV.contains(&k))
        {
            cmd.env(k, v);
        }
    }
    std::os::unix::fs::chroot(path)?;
    if let Some(uid) = user.as_deref().and_then(|u| u.parse::<u32>().ok()) {
        cmd.uid(uid);
    } else if let Some(user) = user {
        let (uid, gid) = std::fs::read_to_string("/etc/passwd")?
            .lines()
            .find_map(|l| {
                let mut split = l.trim().split(":");
                if user != split.next()? {
                    return None;
                }
                split.next(); // throw away the `x` password placeholder
                Some((split.next()?.parse().ok()?, split.next()?.parse().ok()?)) // uid, gid
            })
            .or_not_found(lazy_format!("{user} in /etc/passwd"))?;
        cmd.uid(uid);
        cmd.gid(gid);
    };
    if let Some(workdir) = workdir {
        cmd.current_dir(workdir);
    }
    cmd.args(args);
    Err(cmd.exec().into())
}

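When `--user` is not a numeric uid, `chroot` resolves it by walking `/etc/passwd`'s colon-separated fields (`name:password:uid:gid:gecos:home:shell`) by hand. The same parse, isolated as a testable function over hypothetical passwd content:

```rust
// Extract (uid, gid) for a user from passwd-format lines:
// name:password:uid:gid:gecos:home:shell
fn lookup_uid_gid(passwd: &str, user: &str) -> Option<(u32, u32)> {
    passwd.lines().find_map(|l| {
        let mut split = l.trim().split(':');
        if user != split.next()? {
            return None;
        }
        split.next(); // skip the `x` password placeholder field
        Some((split.next()?.parse().ok()?, split.next()?.parse().ok()?))
    })
}

fn main() {
    // Hypothetical passwd content for illustration only.
    let passwd = "root:x:0:0:root:/root:/bin/bash\n\
                  start9:x:1000:1000::/home/start9:/bin/sh\n";
    assert_eq!(lookup_uid_gid(passwd, "start9"), Some((1000, 1000)));
    assert_eq!(lookup_uid_gid(passwd, "root"), Some((0, 0)));
    assert_eq!(lookup_uid_gid(passwd, "missing"), None);
}
```

Note that in the real function this lookup happens *after* `chroot(path)`, so it reads the target root's `/etc/passwd`, which is what you want when the target filesystem defines its own users.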
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetSslCertificateParams {
    package_id: Option<String>,
    algorithm: Option<String>, // "ecdsa" | "ed25519"
}

async fn get_ssl_certificate(
    context: EffectContext,
    GetSslCertificateParams {
        package_id,
        algorithm,
    }: GetSslCertificateParams,
) -> Result<Value, Error> {
    let fake = include_str!("./fake.cert.pem");
    Ok(json!([fake, fake, fake]))
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetSslKeyParams {
    package_id: Option<String>,
    algorithm: Option<String>, // "ecdsa" | "ed25519"
}

async fn get_ssl_key(
    context: EffectContext,
    GetSslKeyParams {
        package_id,
        algorithm,
    }: GetSslKeyParams,
) -> Result<Value, Error> {
    let fake = include_str!("./fake.cert.key");
    Ok(json!(fake))
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct GetStoreParams {
    package_id: Option<PackageId>,
    path: JsonPointer,
}

async fn get_store(
    context: EffectContext,
    GetStoreParams { package_id, path }: GetStoreParams,
) -> Result<Value, Error> {
    let context = context.deref()?;
    let peeked = context.ctx.db.peek().await;
    let package_id = package_id.unwrap_or(context.id.clone());
    let value = peeked
        .as_package_data()
        .as_idx(&package_id)
        .or_not_found(&package_id)?
        .as_installed()
        .or_not_found(&package_id)?
        .as_store()
        .de()?;

    Ok(path
        .get(&value)
        .ok_or_else(|| Error::new(eyre!("Did not find value at path"), ErrorKind::NotFound))?
        .clone())
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct SetStoreParams {
    value: Value,
    path: JsonPointer,
}

async fn set_store(
    context: EffectContext,
    SetStoreParams { value, path }: SetStoreParams,
) -> Result<(), Error> {
    let context = context.deref()?;
    let package_id = context.id.clone();
    context
        .ctx
        .db
        .mutate(|db| {
            let model = db
                .as_package_data_mut()
                .as_idx_mut(&package_id)
                .or_not_found(&package_id)?
                .as_installed_mut()
                .or_not_found(&package_id)?
                .as_store_mut();
            let mut model_value = model.de()?;
            path.set(&mut model_value, value, true)
                .with_kind(ErrorKind::ParseDbField)?;
            model.ser(&model_value)
        })
        .await?;
    Ok(())
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct ExposeForDependentsParams {
    paths: Vec<JsonPointer>,
}

async fn expose_for_dependents(
    context: EffectContext,
    ExposeForDependentsParams { paths }: ExposeForDependentsParams,
) -> Result<(), Error> {
    let context = context.deref()?;
    let package_id = context.id.clone();
    context
        .ctx
        .db
        .mutate(|db| {
            db.as_package_data_mut()
                .as_idx_mut(&package_id)
                .or_not_found(&package_id)?
                .as_installed_mut()
                .or_not_found(&package_id)?
                .as_store_exposed_dependents_mut()
                .ser(&paths)
        })
        .await?;
    Ok(())
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct ExposeUiParams {
    paths: Vec<ExposedUI>,
}

async fn expose_ui(
    context: EffectContext,
    ExposeUiParams { paths }: ExposeUiParams,
) -> Result<(), Error> {
    let context = context.deref()?;
    let package_id = context.id.clone();
    context
        .ctx
        .db
        .mutate(|db| {
            db.as_package_data_mut()
                .as_idx_mut(&package_id)
                .or_not_found(&package_id)?
                .as_installed_mut()
                .or_not_found(&package_id)?
                .as_store_exposed_ui_mut()
                .ser(&paths)
        })
        .await?;
    Ok(())
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
struct ParamsPackageId {
    package: PackageId,
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, Parser)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "camelCase")]
struct ParamsMaybePackageId {
    package_id: Option<PackageId>,
}

async fn exists(context: EffectContext, params: ParamsPackageId) -> Result<Value, Error> {
    let context = context.deref()?;
    let peeked = context.ctx.db.peek().await;
    let package = peeked.as_package_data().as_idx(&params.package).is_some();
    Ok(json!(package))
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct ExecuteAction {
    service_id: Option<PackageId>,
    action_id: ActionId,
    input: Value,
}
async fn execute_action(
    context: EffectContext,
    ExecuteAction {
        action_id,
        input,
        service_id,
    }: ExecuteAction,
) -> Result<Value, Error> {
    let context = context.deref()?;
    let package_id = service_id.clone().unwrap_or_else(|| context.id.clone());
    let service = context.ctx.services.get(&package_id).await;
    let service = service.as_ref().ok_or_else(|| {
        Error::new(
            eyre!("Could not find package {package_id}"),
            ErrorKind::Unknown,
        )
    })?;

    Ok(json!(service.action(action_id, input).await?))
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct FromService {}
async fn get_configured(context: EffectContext, _: Empty) -> Result<Value, Error> {
    let context = context.deref()?;
    let peeked = context.ctx.db.peek().await;
    let package_id = &context.id;
    let package = peeked
        .as_package_data()
        .as_idx(&package_id)
        .or_not_found(&package_id)?
        .as_installed()
        .or_not_found(&package_id)?
        .as_status()
        .as_configured()
        .de()?;
    Ok(json!(package))
}

async fn stopped(context: EffectContext, params: ParamsMaybePackageId) -> Result<Value, Error> {
    let context = context.deref()?;
    let peeked = context.ctx.db.peek().await;
    let package_id = params.package_id.unwrap_or_else(|| context.id.clone());
    let package = peeked
        .as_package_data()
        .as_idx(&package_id)
        .or_not_found(&package_id)?
        .as_installed()
        .or_not_found(&package_id)?
        .as_status()
        .as_main()
        .de()?;
    Ok(json!(matches!(package, MainStatus::Stopped)))
}

async fn running(context: EffectContext, params: ParamsMaybePackageId) -> Result<Value, Error> {
    let context = context.deref()?;
    let peeked = context.ctx.db.peek().await;
    let package_id = params.package_id.unwrap_or_else(|| context.id.clone());
    let package = peeked
        .as_package_data()
        .as_idx(&package_id)
        .or_not_found(&package_id)?
        .as_installed()
        .or_not_found(&package_id)?
        .as_status()
        .as_main()
        .de()?;
    Ok(json!(matches!(package, MainStatus::Running { .. })))
}

async fn restart(context: EffectContext, _: Empty) -> Result<Value, Error> {
    let context = context.deref()?;
    let service = context.ctx.services.get(&context.id).await;
    let service = service.as_ref().ok_or_else(|| {
        Error::new(
            eyre!("Could not find package {}", context.id),
            ErrorKind::Unknown,
        )
    })?;
    service.restart().await?;
    Ok(json!(()))
}

async fn shutdown(context: EffectContext, _: Empty) -> Result<Value, Error> {
    let context = context.deref()?;
    let service = context.ctx.services.get(&context.id).await;
    let service = service.as_ref().ok_or_else(|| {
        Error::new(
            eyre!("Could not find package {}", context.id),
            ErrorKind::Unknown,
        )
    })?;
    service.stop().await?;
    Ok(json!(()))
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, Parser)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "camelCase")]
struct SetConfigured {
    configured: bool,
}
async fn set_configured(context: EffectContext, params: SetConfigured) -> Result<Value, Error> {
    let context = context.deref()?;
    let package_id = &context.id;
    context
        .ctx
        .db
        .mutate(|db| {
            db.as_package_data_mut()
                .as_idx_mut(package_id)
                .or_not_found(package_id)?
                .as_installed_mut()
                .or_not_found(package_id)?
                .as_status_mut()
                .as_configured_mut()
                .ser(&params.configured)
        })
        .await?;
    Ok(json!(()))
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
enum Status {
    Running,
    Stopped,
}
impl FromStr for Status {
    type Err = color_eyre::eyre::Report;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "running" => Ok(Self::Running),
            "stopped" => Ok(Self::Stopped),
            _ => Err(eyre!("unknown status {s}")),
        }
    }
}
impl ValueParserFactory for Status {
    type Parser = FromStrParser<Self>;
    fn value_parser() -> Self::Parser {
        FromStrParser::new()
    }
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize, Parser)]
#[serde(rename_all = "camelCase")]
#[command(rename_all = "camelCase")]
struct SetMainStatus {
    status: Status,
}
async fn set_main_status(context: EffectContext, params: SetMainStatus) -> Result<Value, Error> {
    let context = context.deref()?;
    context
        .persistent_container
        .current_state
        .send_replace(match params.status {
            Status::Running => StartStop::Start,
            Status::Stopped => StartStop::Stop,
        });
    Ok(Value::Null)
}

#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct SetHealth {
    name: HealthCheckId,
    health_result: Option<HealthCheckResult>,
}

async fn set_health(context: EffectContext, params: SetHealth) -> Result<Value, Error> {
    let context = context.deref()?;
    // TODO DrBonez + BLU-J: change the type from
    // ```rs
    // #[serde(tag = "result")]
    // pub enum HealthCheckResult {
    //     Success,
    //     Disabled,
    //     Starting,
    //     Loading { message: String },
    //     Failure { error: String },
    // }
    // ```
    // to
    // ```ts
    // setHealth(o: {
    //     name: string
    //     status: HealthStatus
    //     message?: string
    // }): Promise<void>
    // ```

    let package_id = &context.id;
    context
        .ctx
        .db
        .mutate(move |db| {
            let mut main = db
                .as_package_data()
                .as_idx(package_id)
                .or_not_found(package_id)?
                .as_installed()
                .or_not_found(package_id)?
                .as_status()
                .as_main()
                .de()?;
            match &mut main {
                &mut MainStatus::Running { ref mut health, .. }
                | &mut MainStatus::BackingUp { ref mut health, .. } => {
                    health.remove(&params.name);
                    if let SetHealth {
                        name,
                        health_result: Some(health_result),
                    } = params
                    {
                        health.insert(name, health_result);
                    }
                }
                _ => return Ok(()),
            };
            db.as_package_data_mut()
                .as_idx_mut(package_id)
                .or_not_found(package_id)?
                .as_installed_mut()
                .or_not_found(package_id)?
                .as_status_mut()
                .as_main_mut()
                .ser(&main)
        })
        .await?;
    Ok(json!(()))
}

#[derive(serde::Deserialize, serde::Serialize, Parser)]
|
||||
#[serde(rename_all = "camelCase")]
|
||||
#[command(rename_all = "camelCase")]
|
||||
pub struct CreateOverlayedImageParams {
|
||||
image_id: ImageId,
|
||||
}
|
||||
|
||||
#[instrument(skip_all)]
|
||||
pub async fn create_overlayed_image(
|
||||
ctx: EffectContext,
|
||||
CreateOverlayedImageParams { image_id }: CreateOverlayedImageParams,
|
||||
) -> Result<PathBuf, Error> {
|
||||
let ctx = ctx.deref()?;
|
||||
let path = Path::new("images")
|
||||
.join(&*ARCH)
|
||||
.join(&image_id)
|
||||
.with_extension("squashfs");
|
||||
if let Some(image) = ctx
|
||||
.persistent_container
|
||||
.s9pk
|
||||
.as_archive()
|
||||
.contents()
|
||||
.get_path(dbg!(&path))
|
||||
.and_then(|e| e.as_file())
|
||||
{
|
||||
let guid = new_guid();
|
||||
let rootfs_dir = ctx
|
||||
.persistent_container
|
||||
.lxc_container
|
||||
.get()
|
||||
.ok_or_else(|| {
|
||||
Error::new(
|
||||
eyre!("PersistentContainer has been destroyed"),
|
||||
ErrorKind::Incoherent,
|
||||
)
|
||||
})?
|
||||
.rootfs_dir();
|
||||
let mountpoint = rootfs_dir.join("media/startos/overlays").join(&*guid);
|
||||
tokio::fs::create_dir_all(&mountpoint).await?;
|
||||
Command::new("chown")
|
||||
.arg("100000:100000")
|
||||
.arg(&mountpoint)
|
||||
.invoke(ErrorKind::Filesystem)
|
||||
.await?;
|
||||
let container_mountpoint = Path::new("/").join(
|
||||
mountpoint
|
||||
.strip_prefix(rootfs_dir)
|
||||
.with_kind(ErrorKind::Incoherent)?,
|
||||
);
|
||||
tracing::info!("Mounting overlay {guid} for {image_id}");
|
||||
let guard = OverlayGuard::mount(
|
||||
&IdMapped::new(LoopDev::from(&**image), 0, 100000, 65536),
|
||||
mountpoint,
|
||||
)
|
||||
.await?;
|
||||
tracing::info!("Mounted overlay {guid} for {image_id}");
|
||||
ctx.persistent_container
|
||||
.overlays
|
||||
.lock()
|
||||
.await
|
||||
.insert(guid.clone(), guard);
|
||||
Ok(container_mountpoint)
|
||||
} else {
|
||||
Err(Error::new(
|
||||
eyre!("image {image_id} not found in s9pk"),
|
||||
ErrorKind::NotFound,
|
||||
))
|
||||
}
|
||||
}
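The host-to-container path translation at the heart of `create_overlayed_image` can be isolated in a small standalone sketch: a mountpoint created under the container's rootfs on the host is re-expressed relative to `/` as the container will see it. The function name `container_path` is hypothetical; only `strip_prefix` and `join` come from the code above.

```rust
use std::path::{Path, PathBuf};

// Sketch of the path translation used in create_overlayed_image:
// strip the host-side rootfs prefix, then re-root the remainder at "/"
// so the path is valid from inside the container.
fn container_path(rootfs_dir: &Path, host_path: &Path) -> Option<PathBuf> {
    Some(Path::new("/").join(host_path.strip_prefix(rootfs_dir).ok()?))
}
```

In the real code the `strip_prefix` failure is mapped to `ErrorKind::Incoherent`, since a mountpoint created under the rootfs should always have it as a prefix.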
384 core/startos/src/service/service_map.rs Normal file
@@ -0,0 +1,384 @@
use std::sync::Arc;
use std::time::Duration;

use color_eyre::eyre::eyre;
use futures::future::BoxFuture;
use futures::{Future, FutureExt};
use helpers::NonDetachingJoinHandle;
use imbl::OrdMap;
use imbl_value::InternedString;
use tokio::sync::{Mutex, OwnedRwLockReadGuard, OwnedRwLockWriteGuard, RwLock};
use tracing::instrument;

use crate::context::RpcContext;
use crate::db::model::{
    InstalledPackageInfo, PackageDataEntry, PackageDataEntryInstalled, PackageDataEntryInstalling,
    PackageDataEntryRestoring, PackageDataEntryUpdating, StaticFiles,
};
use crate::disk::mount::guard::GenericMountGuard;
use crate::install::PKG_ARCHIVE_DIR;
use crate::notifications::NotificationLevel;
use crate::prelude::*;
use crate::progress::{
    FullProgressTracker, FullProgressTrackerHandle, PhaseProgressTrackerHandle,
    ProgressTrackerWriter,
};
use crate::s9pk::manifest::PackageId;
use crate::s9pk::merkle_archive::source::FileSource;
use crate::s9pk::S9pk;
use crate::service::{LoadDisposition, Service};

pub type DownloadInstallFuture = BoxFuture<'static, Result<InstallFuture, Error>>;
pub type InstallFuture = BoxFuture<'static, Result<(), Error>>;

pub(super) struct InstallProgressHandles {
    pub(super) finalization_progress: PhaseProgressTrackerHandle,
    pub(super) progress_handle: FullProgressTrackerHandle,
}

/// Container for all services, keyed by package id
#[derive(Default)]
pub struct ServiceMap(Mutex<OrdMap<PackageId, Arc<RwLock<Option<Service>>>>>);
impl ServiceMap {
    async fn entry(&self, id: &PackageId) -> Arc<RwLock<Option<Service>>> {
        self.0
            .lock()
            .await
            .entry(id.clone())
            .or_insert_with(|| Arc::new(RwLock::new(None)))
            .clone()
    }

    #[instrument(skip_all)]
    pub async fn get(&self, id: &PackageId) -> OwnedRwLockReadGuard<Option<Service>> {
        self.entry(id).await.read_owned().await
    }

    #[instrument(skip_all)]
    pub async fn get_mut(&self, id: &PackageId) -> OwnedRwLockWriteGuard<Option<Service>> {
        self.entry(id).await.write_owned().await
    }

    #[instrument(skip_all)]
    pub async fn init(&self, ctx: &RpcContext) -> Result<(), Error> {
        for id in ctx.db.peek().await.as_package_data().keys()? {
            if let Err(e) = self.load(ctx, &id, LoadDisposition::Retry).await {
                tracing::error!("Error loading installed package as service: {e}");
                tracing::debug!("{e:?}");
            }
        }
        Ok(())
    }

    #[instrument(skip_all)]
    pub async fn load(
        &self,
        ctx: &RpcContext,
        id: &PackageId,
        disposition: LoadDisposition,
    ) -> Result<(), Error> {
        let mut shutdown_err = Ok(());
        let mut service = self.get_mut(id).await;
        if let Some(service) = service.take() {
            shutdown_err = service.shutdown().await;
        }
        // TODO: retry on error?
        *service = Service::load(ctx, id, disposition).await?;
        shutdown_err?;
        Ok(())
    }

    #[instrument(skip_all)]
    pub async fn install<S: FileSource>(
        &self,
        ctx: RpcContext,
        mut s9pk: S9pk<S>,
        recovery_source: Option<impl GenericMountGuard>,
    ) -> Result<DownloadInstallFuture, Error> {
        let manifest = Arc::new(s9pk.as_manifest().clone());
        let id = manifest.id.clone();
        let icon = s9pk.icon_data_url().await?;
        let mut service = self.get_mut(&id).await;

        let op_name = if recovery_source.is_none() {
            if service.is_none() {
                "Install"
            } else {
                "Update"
            }
        } else {
            "Restore"
        };

        let size = s9pk.size();
        let mut progress = FullProgressTracker::new();
        let download_progress_contribution = size.unwrap_or(60);
        let progress_handle = progress.handle();
        let mut download_progress = progress_handle.add_phase(
            InternedString::intern("Download"),
            Some(download_progress_contribution),
        );
        if let Some(size) = size {
            download_progress.set_total(size);
        }
        let mut finalization_progress = progress_handle.add_phase(
            InternedString::intern(op_name),
            Some(download_progress_contribution / 2),
        );
        let restoring = recovery_source.is_some();

        let mut reload_guard = ServiceReloadGuard::new(ctx.clone(), id.clone(), op_name);

        reload_guard
            .handle(ctx.db.mutate({
                let manifest = manifest.clone();
                let id = id.clone();
                let install_progress = progress.snapshot();
                move |db| {
                    let pde = match db
                        .as_package_data()
                        .as_idx(&id)
                        .map(|x| x.de())
                        .transpose()?
                    {
                        Some(PackageDataEntry::Installed(PackageDataEntryInstalled {
                            installed,
                            static_files,
                            ..
                        })) => PackageDataEntry::Updating(PackageDataEntryUpdating {
                            install_progress,
                            installed,
                            manifest: (*manifest).clone(),
                            static_files,
                        }),
                        None if restoring => {
                            PackageDataEntry::Restoring(PackageDataEntryRestoring {
                                install_progress,
                                static_files: StaticFiles::local(
                                    &manifest.id,
                                    &manifest.version,
                                    icon,
                                ),
                                manifest: (*manifest).clone(),
                            })
                        }
                        None => PackageDataEntry::Installing(PackageDataEntryInstalling {
                            install_progress,
                            static_files: StaticFiles::local(&manifest.id, &manifest.version, icon),
                            manifest: (*manifest).clone(),
                        }),
                        _ => {
                            return Err(Error::new(
                                eyre!("Cannot install over a package in a transient state"),
                                crate::ErrorKind::InvalidRequest,
                            ))
                        }
                    };
                    db.as_package_data_mut().insert(&manifest.id, &pde)
                }
            }))
            .await?;

        Ok(async move {
            let (installed_path, sync_progress_task) = reload_guard
                .handle(async {
                    let download_path = ctx
                        .datadir
                        .join(PKG_ARCHIVE_DIR)
                        .join("downloading")
                        .join(&id)
                        .with_extension("s9pk");

                    let deref_id = id.clone();
                    let sync_progress_task =
                        NonDetachingJoinHandle::from(tokio::spawn(progress.sync_to_db(
                            ctx.db.clone(),
                            move |v| {
                                v.as_package_data_mut()
                                    .as_idx_mut(&deref_id)
                                    .and_then(|e| e.as_install_progress_mut())
                            },
                            Some(Duration::from_millis(100)),
                        )));

                    let mut progress_writer = ProgressTrackerWriter::new(
                        crate::util::io::create_file(&download_path).await?,
                        download_progress,
                    );
                    s9pk.serialize(&mut progress_writer, true).await?;
                    let (file, mut download_progress) = progress_writer.into_inner();
                    file.sync_all().await?;
                    download_progress.complete();

                    let installed_path = ctx
                        .datadir
                        .join(PKG_ARCHIVE_DIR)
                        .join("installed")
                        .join(&id)
                        .with_extension("s9pk");

                    crate::util::io::rename(&download_path, &installed_path).await?;

                    Ok::<_, Error>((installed_path, sync_progress_task))
                })
                .await?;
            Ok(reload_guard
                .handle_last(async move {
                    let s9pk = S9pk::open(&installed_path, Some(&id)).await?;
                    let prev = if let Some(service) = service.take() {
                        ensure_code!(
                            recovery_source.is_none(),
                            ErrorKind::InvalidRequest,
                            "cannot restore over existing package"
                        );
                        let version = service
                            .seed
                            .persistent_container
                            .s9pk
                            .as_manifest()
                            .version
                            .clone();
                        service
                            .uninstall(Some(s9pk.as_manifest().version.clone()))
                            .await?;
                        finalization_progress.complete();
                        progress_handle.complete();
                        Some(version)
                    } else {
                        None
                    };
                    if let Some(recovery_source) = recovery_source {
                        *service = Some(
                            Service::restore(
                                ctx,
                                s9pk,
                                recovery_source,
                                Some(InstallProgressHandles {
                                    finalization_progress,
                                    progress_handle,
                                }),
                            )
                            .await?,
                        );
                    } else {
                        *service = Some(
                            Service::install(
                                ctx,
                                s9pk,
                                prev,
                                Some(InstallProgressHandles {
                                    finalization_progress,
                                    progress_handle,
                                }),
                            )
                            .await?,
                        );
                    }
                    sync_progress_task.await.map_err(|_| {
                        Error::new(eyre!("progress sync task panicked"), ErrorKind::Unknown)
                    })??;
                    Ok(())
                })
                .boxed())
        }
        .boxed())
    }

    /// Run during cleanup, i.e. when the service is being uninstalled
    #[instrument(skip_all)]
    pub async fn uninstall(&self, ctx: &RpcContext, id: &PackageId) -> Result<(), Error> {
        if let Some(service) = self.get_mut(id).await.take() {
            ServiceReloadGuard::new(ctx.clone(), id.clone(), "Uninstall")
                .handle_last(service.uninstall(None))
                .await?;
        }
        Ok(())
    }

    pub async fn shutdown_all(&self) -> Result<(), Error> {
        let lock = self.0.lock().await;
        let mut futs = Vec::with_capacity(lock.len());
        for service in lock.values().cloned() {
            futs.push(async move {
                if let Some(service) = service.write_owned().await.take() {
                    service.shutdown().await?
                }
                Ok::<_, Error>(())
            });
        }
        drop(lock);
        let mut errors = ErrorCollection::new();
        for res in futures::future::join_all(futs).await {
            errors.handle(res);
        }
        errors.into_result()
    }
}

pub struct ServiceReloadGuard(Option<ServiceReloadInfo>);
impl Drop for ServiceReloadGuard {
    fn drop(&mut self) {
        if let Some(info) = self.0.take() {
            tokio::spawn(info.reload(None));
        }
    }
}
impl ServiceReloadGuard {
    pub fn new(ctx: RpcContext, id: PackageId, operation: &'static str) -> Self {
        Self(Some(ServiceReloadInfo { ctx, id, operation }))
    }

    pub async fn handle<T>(
        &mut self,
        operation: impl Future<Output = Result<T, Error>>,
    ) -> Result<T, Error> {
        let mut errors = ErrorCollection::new();
        match operation.await {
            Ok(a) => Ok(a),
            Err(e) => {
                if let Some(info) = self.0.take() {
                    errors.handle(info.reload(Some(e.clone_output())).await);
                }
                errors.handle::<(), _>(Err(e));
                errors.into_result().map(|_| unreachable!()) // TODO: there's gotta be a more elegant way?
            }
        }
    }
    pub async fn handle_last<T>(
        mut self,
        operation: impl Future<Output = Result<T, Error>>,
    ) -> Result<T, Error> {
        let res = self.handle(operation).await;
        self.0.take();
        res
    }
}

struct ServiceReloadInfo {
    ctx: RpcContext,
    id: PackageId,
    operation: &'static str,
}
impl ServiceReloadInfo {
    async fn reload(self, error: Option<Error>) -> Result<(), Error> {
        self.ctx
            .services
            .load(&self.ctx, &self.id, LoadDisposition::Undo)
            .await?;
        if let Some(error) = error {
            self.ctx
                .notification_manager
                .notify(
                    self.ctx.db.clone(),
                    Some(self.id.clone()),
                    NotificationLevel::Error,
                    format!("{} Failed", self.operation),
                    error.to_string(),
                    (),
                    None,
                )
                .await?;
        }
        Ok(())
    }
}
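The `ServiceReloadGuard` above is an RAII recovery pattern: if the guard is dropped (or an intermediate operation fails) before `handle_last` completes, a reload is triggered; a successful `handle_last` defuses it. A standalone, synchronous sketch of the same shape, with hypothetical names (`ReloadGuard`, `recover`) and a plain closure standing in for the async reload:

```rust
// RAII recovery sketch: `recover` fires on failure or on drop,
// unless the final operation (`handle_last`) completed and defused it.
struct ReloadGuard<F: FnOnce()> {
    recover: Option<F>,
}

impl<F: FnOnce()> ReloadGuard<F> {
    fn new(recover: F) -> Self {
        Self { recover: Some(recover) }
    }
    /// Run an intermediate operation; on failure, recovery fires immediately.
    fn handle<T, E>(&mut self, op: impl FnOnce() -> Result<T, E>) -> Result<T, E> {
        match op() {
            Ok(t) => Ok(t),
            Err(e) => {
                if let Some(recover) = self.recover.take() {
                    recover();
                }
                Err(e)
            }
        }
    }
    /// Final operation: on success the guard is defused, so drop is a no-op.
    fn handle_last<T, E>(mut self, op: impl FnOnce() -> Result<T, E>) -> Result<T, E> {
        let res = self.handle(op);
        self.recover.take();
        res
    }
}

impl<F: FnOnce()> Drop for ReloadGuard<F> {
    fn drop(&mut self) {
        // Dropped before handle_last completed: trigger recovery.
        if let Some(recover) = self.recover.take() {
            recover();
        }
    }
}
```

The real guard additionally spawns the reload on a tokio runtime from `Drop` and posts an error notification; the take-on-success/fire-on-drop logic is the same.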
32 core/startos/src/service/start_stop.rs Normal file
@@ -0,0 +1,32 @@
use crate::status::MainStatus;

#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum StartStop {
    Start,
    Stop,
}

impl StartStop {
    pub(crate) fn is_start(&self) -> bool {
        matches!(self, StartStop::Start)
    }
}
impl From<MainStatus> for StartStop {
    fn from(value: MainStatus) -> Self {
        match value {
            MainStatus::Stopped => StartStop::Stop,
            MainStatus::Restarting => StartStop::Start,
            MainStatus::Stopping { .. } => StartStop::Stop,
            MainStatus::Starting => StartStop::Start,
            MainStatus::Running {
                started: _,
                health: _,
            } => StartStop::Start,
            MainStatus::BackingUp { started, health: _ } if started.is_some() => StartStop::Start,
            MainStatus::BackingUp {
                started: _,
                health: _,
            } => StartStop::Stop,
        }
    }
}
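The `From<MainStatus>` impl collapses every status to a desired run state; the one subtle case is `BackingUp`, which counts as running only if a start time was recorded. A standalone mirror of that mapping, with simplified, hypothetical field types (`u64` standing in for the real timestamp and health types):

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum StartStop {
    Start,
    Stop,
}

// Simplified stand-in for the crate's MainStatus (hypothetical field types).
enum MainStatus {
    Stopped,
    Restarting,
    Stopping,
    Starting,
    Running { started: u64 },
    BackingUp { started: Option<u64> },
}

// Mirrors From<MainStatus> for StartStop: a backup of a service that was
// running when the backup began should resume running afterwards.
fn desired(value: MainStatus) -> StartStop {
    match value {
        MainStatus::Stopped | MainStatus::Stopping => StartStop::Stop,
        MainStatus::Restarting | MainStatus::Starting | MainStatus::Running { .. } => {
            StartStop::Start
        }
        MainStatus::BackingUp { started: Some(_) } => StartStop::Start,
        MainStatus::BackingUp { started: None } => StartStop::Stop,
    }
}
```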
1 core/startos/src/service/transition/backup.rs Normal file
@@ -0,0 +1 @@
74 core/startos/src/service/transition/mod.rs Normal file
@@ -0,0 +1,74 @@
use std::ops::Deref;
use std::sync::Arc;

use futures::{Future, FutureExt};
use tokio::sync::watch;

use crate::service::start_stop::StartStop;
use crate::util::actor::BackgroundJobs;
use crate::util::future::{CancellationHandle, RemoteCancellable};

pub mod backup;
pub mod restart;

#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum TransitionKind {
    BackingUp,
    Restarting,
}

/// Tracks the manager's state while it is in a transitional state.
/// Used only in the manager module.
pub struct TransitionState {
    cancel_handle: CancellationHandle,
    kind: TransitionKind,
}

impl TransitionState {
    pub fn kind(&self) -> TransitionKind {
        self.kind
    }
    pub async fn abort(mut self) {
        self.cancel_handle.cancel_and_wait().await
    }
    fn new(
        task: impl Future<Output = ()> + Send + 'static,
        kind: TransitionKind,
        jobs: &mut BackgroundJobs,
    ) -> Self {
        let task = RemoteCancellable::new(task);
        let cancel_handle = task.cancellation_handle();
        jobs.add_job(task.map(|_| ()));
        Self {
            cancel_handle,
            kind,
        }
    }
}
impl Drop for TransitionState {
    fn drop(&mut self) {
        self.cancel_handle.cancel();
    }
}

#[derive(Clone)]
pub struct TempDesiredState(pub(super) Arc<watch::Sender<Option<StartStop>>>);
impl TempDesiredState {
    pub fn stop(&self) {
        self.0.send_replace(Some(StartStop::Stop));
    }
    pub fn start(&self) {
        self.0.send_replace(Some(StartStop::Start));
    }
}
impl Drop for TempDesiredState {
    fn drop(&mut self) {
        self.0.send_replace(None);
    }
}
impl Deref for TempDesiredState {
    type Target = watch::Sender<Option<StartStop>>;
    fn deref(&self) -> &Self::Target {
        &*self.0
    }
}
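`TempDesiredState` is a temporary-override handle: while it holds `Some(state)` the service follows the override, and dropping the handle reverts to the persistent desired state by writing back `None`. A std-only sketch of the same idea, with hypothetical names (`TempDesired`, a `Mutex<Option<_>>` slot standing in for the tokio `watch` channel):

```rust
use std::sync::{Arc, Mutex};

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum StartStop {
    Start,
    Stop,
}

// Temporary-override handle: writes Some(override) while alive,
// clears the override (None) when dropped.
struct TempDesired(Arc<Mutex<Option<StartStop>>>);

impl TempDesired {
    fn new(slot: Arc<Mutex<Option<StartStop>>>) -> Self {
        Self(slot)
    }
    fn stop(&self) {
        *self.0.lock().unwrap() = Some(StartStop::Stop);
    }
    fn start(&self) {
        *self.0.lock().unwrap() = Some(StartStop::Start);
    }
}

impl Drop for TempDesired {
    fn drop(&mut self) {
        // Like TempDesiredState: dropping the override reverts control
        // to the service's persistent desired state.
        *self.0.lock().unwrap() = None;
    }
}
```

The real type uses a `watch::Sender` instead of a mutex so the manager can `subscribe()` and be woken whenever the effective desired state changes.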
39 core/startos/src/service/transition/restart.rs Normal file
@@ -0,0 +1,39 @@
use futures::FutureExt;

use crate::prelude::*;
use crate::service::start_stop::StartStop;
use crate::service::transition::{TransitionKind, TransitionState};
use crate::service::{Service, ServiceActor};
use crate::util::actor::{BackgroundJobs, Handler};
use crate::util::future::RemoteCancellable;

struct Restart;
#[async_trait::async_trait]
impl Handler<Restart> for ServiceActor {
    type Response = ();
    async fn handle(&mut self, _: Restart, jobs: &mut BackgroundJobs) -> Self::Response {
        let temp = self.0.temp_desired_state.clone();
        let mut current = self.0.persistent_container.current_state.subscribe();
        let transition = RemoteCancellable::new(async move {
            temp.stop();
            current.wait_for(|s| *s == StartStop::Stop).await;
            temp.start();
            current.wait_for(|s| *s == StartStop::Start).await;
        });
        let cancel_handle = transition.cancellation_handle();
        jobs.add_job(transition.map(|_| ()));
        let notified = self.0.synchronized.notified();
        if let Some(t) = self.0.transition_state.send_replace(Some(TransitionState {
            kind: TransitionKind::Restarting,
            cancel_handle,
        })) {
            t.abort().await;
        }
        notified.await
    }
}
impl Service {
    pub async fn restart(&self) -> Result<(), Error> {
        self.actor.send(Restart).await
    }
}
14 core/startos/src/service/util.rs Normal file
@@ -0,0 +1,14 @@
use futures::Future;
use tokio::sync::Notify;

use crate::prelude::*;

pub async fn cancellable<T>(
    cancel_transition: &Notify,
    transition: impl Future<Output = T>,
) -> Result<T, Error> {
    tokio::select! {
        a = transition => Ok(a),
        _ = cancel_transition.notified() => Err(Error::new(eyre!("transition was cancelled"), ErrorKind::Cancelled)),
    }
}