Feat/js long running (#1879)

* wip: combining the streams

* chore: Testing locally

* chore: Fix some lint

* Feat/long running (#1676)

* feat: Start the long running container

* feat: Long running docker, running, stopping, and uninstalling

* feat: Just make the folders that we would like to mount.

* fix: Uninstall not working

* chore: remove some logging

* feat: Smarter cleanup

* feat: Wait for start

* wip: Need to kill

* chore: Remove the bad tracing

* feat: Stopping the long running processes without killing the long
running

* Minor Feat: Change the Manifest to have a new type (#1736)

* Add build-essential to README.md (#1716)

Update README.md

* write image to sparse-aware archive format (#1709)
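The sparse-aware archive change (#1709) comes down to bookkeeping: split the image stream into data runs and zero-filled holes so the archive records holes without storing their bytes. A minimal in-memory sketch of that run-splitting, with an assumed minimum hole size (the real implementation works on file extents, not buffers):

```rust
/// Split a buffer into (offset, len, is_hole) runs, where a "hole" is a run of
/// zero bytes of at least `min_hole` length -- the information a sparse-aware
/// archive needs so it can skip writing the zeros.
fn sparse_runs(buf: &[u8], min_hole: usize) -> Vec<(usize, usize, bool)> {
    let mut runs = Vec::new();
    let mut i = 0;
    while i < buf.len() {
        // Measure the zero run starting here (possibly empty).
        let mut j = i;
        while j < buf.len() && buf[j] == 0 {
            j += 1;
        }
        if j - i >= min_hole {
            runs.push((i, j - i, true));
            i = j;
        } else {
            // Data run: extend until the next qualifying hole or end of buffer.
            let start = i;
            i = j.max(i + 1);
            while i < buf.len() {
                if buf[i] == 0 {
                    let mut k = i;
                    while k < buf.len() && buf[k] == 0 {
                        k += 1;
                    }
                    if k - i >= min_hole {
                        break; // a real hole starts here
                    }
                    i = k; // short zero run: still data
                } else {
                    i += 1;
                }
            }
            runs.push((start, i - start, false));
        }
    }
    runs
}

fn main() {
    let buf = [1u8, 0, 0, 0, 0, 2];
    assert_eq!(
        sparse_runs(&buf, 4),
        vec![(0, 1, false), (1, 4, true), (5, 1, false)]
    );
    assert_eq!(sparse_runs(&[0u8; 8], 4), vec![(0, 8, true)]);
    println!("ok");
}
```

Real sparse-aware formats (e.g. GNU tar's sparse members) store exactly this run map plus the data bytes.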

* fix: Add modification to the max_user_watches (#1695)

* fix: Add modification to the max_user_watches

* chore: Move to initialization
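PR #1695 raises the kernel's inotify `max_user_watches` limit during initialization. A sketch of that kind of startup check, assuming a hypothetical target value and that the process runs as root; the pure decision helper is separated out so it can be tested without touching `/proc`:

```rust
use std::fs;
use std::io;

// Hypothetical target; the value actually chosen by the PR is not shown here.
const DESIRED_WATCHES: u64 = 524_288;
const SYSCTL_PATH: &str = "/proc/sys/fs/inotify/max_user_watches";

/// Decide whether the sysctl needs raising. Never lowers an already-higher value.
fn needed_update(current: u64, desired: u64) -> Option<u64> {
    if current >= desired {
        None
    } else {
        Some(desired)
    }
}

/// Raise fs.inotify.max_user_watches at startup (requires root).
#[allow(dead_code)]
fn ensure_max_user_watches() -> io::Result<()> {
    let raw = fs::read_to_string(SYSCTL_PATH)?;
    let current: u64 = raw
        .trim()
        .parse()
        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, format!("{e}")))?;
    if let Some(new_value) = needed_update(current, DESIRED_WATCHES) {
        fs::write(SYSCTL_PATH, new_value.to_string())?;
    }
    Ok(())
}

fn main() {
    // Only exercise the pure helper here; the write itself needs root.
    assert_eq!(needed_update(8_192, 524_288), Some(524_288));
    assert_eq!(needed_update(1_048_576, 524_288), None);
    println!("ok");
}
```

Doing this once at initialization (rather than per-service) matches the "Move to initialization" follow-up above.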

* [Feat] follow logs (#1714)

* tail logs

* add cli

* add FE

* abstract http to shared

* batch new logs

* file download for logs

* fix modal error when no config
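The follow-logs work (#1714) tails a log source and batches the newly appended lines out to the CLI and frontend. A simplified sketch of the "return only what was appended since the last poll" idea, with names of my own choosing (the actual implementation streams over HTTP rather than polling a file):

```rust
use std::fs::File;
use std::io::{BufRead, BufReader, Seek, SeekFrom};

/// Remember the byte offset of the last complete line we handed out, and
/// return only the complete lines appended since then. A trailing partial
/// line is left for the next call.
fn read_new_lines(path: &str, offset: &mut u64) -> std::io::Result<Vec<String>> {
    let mut f = File::open(path)?;
    f.seek(SeekFrom::Start(*offset))?;
    let mut reader = BufReader::new(f);
    let mut batch = Vec::new();
    let mut line = String::new();
    loop {
        line.clear();
        let n = reader.read_line(&mut line)?;
        if n == 0 {
            break; // end of file
        }
        if !line.ends_with('\n') {
            break; // partial line: wait for the writer to finish it
        }
        *offset += n as u64;
        batch.push(line.trim_end().to_string());
    }
    Ok(batch)
}

fn main() -> std::io::Result<()> {
    use std::io::Write;
    let path = std::env::temp_dir().join("follow_demo.log");
    std::fs::write(&path, "first\nsecond\n")?;
    let p = path.to_str().unwrap();
    let mut offset = 0u64;
    assert_eq!(read_new_lines(p, &mut offset)?, vec!["first", "second"]);
    // Simulate the service appending another line.
    let mut f = std::fs::OpenOptions::new().append(true).open(&path)?;
    f.write_all(b"third\n")?;
    assert_eq!(read_new_lines(p, &mut offset)?, vec!["third"]);
    println!("ok");
    Ok(())
}
```

Batching complete lines per poll, instead of pushing byte-by-byte, is what keeps the UI update cheap.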

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: BluJ <mogulslayer@gmail.com>

* Update README.md (#1728)

* fix build for patch-db client for consistency (#1722)

* fix cli install (#1720)

* highlight instructions if not viewed (#1731)

* wip:

* [ ] Fix the build (dependencies:634 map for option)

* fix: Cargo build

* fix: Long running wasn't starting

* fix: uninstall works

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>

* chore: Fix a dbg!

* chore: Make the commands of the docker-inject do inject instead of exec
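Injecting instead of exec-ing means running commands inside the already-running long running container rather than spawning a fresh one per procedure. Conceptually this is `docker exec` vs `docker run`; a hedged sketch in terms of the docker CLI (the codebase itself drives Docker through an API client, and all names here are illustrative):

```rust
use std::process::{Command, Output};

/// Build the argument list for injecting `cmd` into a running container.
/// Pure, so it is testable without Docker installed.
fn inject_args(container: &str, cmd: &[&str]) -> Vec<String> {
    let mut args = vec!["exec".to_string(), container.to_string()];
    args.extend(cmd.iter().map(|s| s.to_string()));
    args
}

/// Run the injected command (requires Docker and a running container).
#[allow(dead_code)]
fn inject(container: &str, cmd: &[&str]) -> std::io::Result<Output> {
    Command::new("docker")
        .args(inject_args(container, cmd))
        .output()
}

fn main() {
    assert_eq!(
        inject_args("my-service", &["ls", "/data"]),
        vec!["exec", "my-service", "ls", "/data"]
    );
    println!("ok");
}
```

The payoff is startup latency: auxiliary procedures reuse the warm container instead of paying container-creation cost each time.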

* chore: Fix compile mistake

* chore: Change to use simpler

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>

* wip: making the manager create

* wip: Working on trying to make the long running docker container command

* feat: Use the long running feature in the manager

* remove recovered services and drop reordering feature (#1829)

* wip: Need to get the initial docker command running?

* chore: Add in the new procedure for the docker.

* feat: Get the system to finally run long

* wip: Added the command inserter to the docker persistence

* chore: Convert the migration to use receipt. (#1842)

* feat: remove ionic storage (#1839)

* feat: remove ionic storage

* grayscale when disconnected, rename local storage service for clarity

* remove storage from package lock

* update patchDB

Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>

* update patchDB

* feat: Move the run_command into the js

* update patchDB

* chore: Change the error catching for the long running to try all
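"Try all" error catching for the long running teardown means attempting every cleanup step and surfacing the collected failures afterwards, so one stuck step cannot leave the rest undone. A minimal sketch (the real code deals in async steps; here the step results are precomputed to keep it short):

```rust
/// Inspect the outcome of every teardown step, collecting failures instead of
/// short-circuiting on the first one.
fn try_all<E>(steps: Vec<Result<(), E>>) -> Result<(), Vec<E>> {
    let errs: Vec<E> = steps.into_iter().filter_map(|r| r.err()).collect();
    if errs.is_empty() {
        Ok(())
    } else {
        Err(errs)
    }
}

fn main() {
    assert!(try_all::<&str>(vec![Ok(()), Ok(())]).is_ok());
    let bad = try_all(vec![Ok(()), Err("stop failed"), Err("unmount failed")]);
    assert_eq!(bad.unwrap_err(), vec!["stop failed", "unmount failed"]);
    println!("ok");
}
```

In production code the steps would be closures or futures run one after another regardless of earlier failures; the collection-then-report shape is the same.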

* Feat/community marketplace (#1790)

* add community marketplace

* Update embassy-mock-api.service.ts

* expect ui/marketplace to be undefined

* possible undefined from getpackage

* fix marketplace pages

* rework marketplace infrastructure

* fix bugs

Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>

* WIP: Fix the build, needed to move around creation of exec

* wip: Working on solving why there is a missing end.

* fix: make `shared` module independent of `config.js` (#1870)

* feat: Add in the kill and timeout
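Kill-with-timeout for a long running process: give it a grace period to exit cleanly, then force-kill, so a hung main cannot block stopping or uninstalling a service. A synchronous sketch with assumed names and grace values (the real code is async on tokio):

```rust
use std::process::{Child, Command};
use std::thread;
use std::time::{Duration, Instant};

/// Wait up to `grace` for the child to exit on its own, then force-kill it.
/// Returns true if the child exited within the grace period.
fn stop_with_grace(child: &mut Child, grace: Duration) -> std::io::Result<bool> {
    let deadline = Instant::now() + grace;
    while Instant::now() < deadline {
        if child.try_wait()?.is_some() {
            return Ok(true); // exited on its own
        }
        thread::sleep(Duration::from_millis(20));
    }
    child.kill()?; // grace expired: SIGKILL
    child.wait()?; // reap the process
    Ok(false)
}

fn main() -> std::io::Result<()> {
    // A process that ignores us long enough to need killing.
    let mut slow = Command::new("sleep").arg("30").spawn()?;
    assert!(!stop_with_grace(&mut slow, Duration::from_millis(200))?);

    // A process that exits inside the grace period.
    let mut quick = Command::new("sh").args(["-c", "exit 0"]).spawn()?;
    assert!(stop_with_grace(&mut quick, Duration::from_secs(5))?);
    println!("ok");
    Ok(())
}
```

An async version would use `tokio::time::timeout` around `child.wait()` instead of the polling loop.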

* feat: Get the run to actually work.

* chore: Add when/ why/ where comments

* feat: Convert inject main to use exec main.

* Fix: Ability to stop services

* wip: long running js main

* feat: Kill for the main

* Fix

* fix: Fix the build for x86

* wip: Working on changes

* wip: Working on trying to kill js

* fix: Testing for slow

* feat: Test that the new manifest works

* chore: Try and fix build?

* chore: Fix? the build

* chore: Fix the long running input dying and never restarting

* build improvements

* no workdir

* fix: Architecture for long running

* chore: Fix and remove the docker inject

* chore: Undo the changes to the kiosk mode

* fix: Remove the it from the prod build

* fix: Start issue

* fix: The compat build

* chore: Add in the conditional compilation again for the missing impl

* chore: Change to aux

* chore: Remove the aux for now

* chore: Add some documentation to docker container
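Much of the diff that follows is the rename of the manifest's `container` field to `containers` (`DockerContainer` → `DockerContainers`). A hedged guess at the shape this implies, given the "aux" bullets above: one long running main container with room for named auxiliaries. The field names here are illustrative stand-ins, not the crate's actual definition:

```rust
use std::collections::BTreeMap;

/// Illustrative stand-in; the real type lives in the `procedure::docker`
/// module and carries many more fields (image, mounts, args, ...).
#[derive(Debug, PartialEq)]
struct DockerContainer {
    image: String,
}

/// The plural type the PR migrates to: a long running main container,
/// plus optional named auxiliary containers alongside it.
#[derive(Debug, PartialEq)]
struct DockerContainers {
    main: DockerContainer,
    aux: BTreeMap<String, DockerContainer>,
}

fn main() {
    let containers = DockerContainers {
        main: DockerContainer { image: "main".into() },
        aux: BTreeMap::new(),
    };
    assert_eq!(containers.main.image, "main");
    println!("ok");
}
```

The rename also explains why so many `execute`/`get`/`set` signatures below drop their `container: &Option<DockerContainer>` parameter: the container is now owned by the manager rather than threaded through every procedure call.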

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
This commit is contained in:
J M
2022-10-25 17:18:49 -06:00
committed by Aiden McClelland
parent 26d2152a36
commit 2642ec85e5
46 changed files with 2466 additions and 1050 deletions

View File

@@ -11,7 +11,7 @@ use tracing::instrument;
use crate::config::{Config, ConfigSpec};
use crate::context::RpcContext;
use crate::id::ImageId;
-use crate::procedure::docker::DockerContainer;
+use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
@@ -59,7 +59,7 @@ impl Action {
#[instrument]
pub fn validate(
&self,
-container: &Option<DockerContainer>,
+container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
@@ -78,7 +78,6 @@ impl Action {
pub async fn execute(
&self,
ctx: &RpcContext,
-container: &Option<DockerContainer>,
pkg_id: &PackageId,
pkg_version: &Version,
action_id: &ActionId,
@@ -93,7 +92,6 @@ impl Action {
self.implementation
.execute(
ctx,
-container,
pkg_id,
pkg_version,
ProcedureName::Action(action_id.clone()),
@@ -145,23 +143,10 @@ pub async fn action(
.await?
.to_owned();
-let container = crate::db::DatabaseModel::new()
-.package_data()
-.idx_model(&pkg_id)
-.and_then(|p| p.installed())
-.expect(&mut db)
-.await
-.with_kind(crate::ErrorKind::NotFound)?
-.manifest()
-.container()
-.get(&mut db, false)
-.await?
-.to_owned();
if let Some(action) = manifest.actions.0.get(&action_id) {
action
.execute(
&ctx,
-&container,
&manifest.id,
&manifest.version,
&action_id,

View File

@@ -19,7 +19,7 @@ use crate::dependencies::reconfigure_dependents_with_live_pointers;
use crate::id::ImageId;
use crate::install::PKG_ARCHIVE_DIR;
use crate::net::interface::{InterfaceId, Interfaces};
-use crate::procedure::docker::DockerContainer;
+use crate::procedure::docker::DockerContainers;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::IoFormat;
@@ -74,7 +74,7 @@ pub struct BackupActions {
impl BackupActions {
pub fn validate(
&self,
-container: &Option<DockerContainer>,
+container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
@@ -102,25 +102,12 @@ impl BackupActions {
let mut volumes = volumes.to_readonly();
volumes.insert(VolumeId::Backup, Volume::Backup { readonly: false });
let backup_dir = backup_dir(pkg_id);
-let container = crate::db::DatabaseModel::new()
-.package_data()
-.idx_model(&pkg_id)
-.and_then(|p| p.installed())
-.expect(db)
-.await
-.with_kind(crate::ErrorKind::NotFound)?
-.manifest()
-.container()
-.get(db, false)
-.await?
-.to_owned();
if tokio::fs::metadata(&backup_dir).await.is_err() {
tokio::fs::create_dir_all(&backup_dir).await?
}
self.create
.execute::<(), NoOutput>(
ctx,
-&container,
pkg_id,
pkg_version,
ProcedureName::CreateBackup,
@@ -200,7 +187,6 @@ impl BackupActions {
#[instrument(skip(ctx, db, secrets))]
pub async fn restore<Ex, Db: DbHandle>(
&self,
-container: &Option<DockerContainer>,
ctx: &RpcContext,
db: &mut Db,
secrets: &mut Ex,
@@ -217,7 +203,6 @@ impl BackupActions {
self.restore
.execute::<(), NoOutput>(
ctx,
-container,
pkg_id,
pkg_version,
ProcedureName::RestoreBackup,

View File

@@ -10,7 +10,7 @@ use super::{Config, ConfigSpec};
use crate::context::RpcContext;
use crate::dependencies::Dependencies;
use crate::id::ImageId;
-use crate::procedure::docker::DockerContainer;
+use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::status::health_check::HealthCheckId;
@@ -34,7 +34,7 @@ impl ConfigActions {
#[instrument]
pub fn validate(
&self,
-container: &Option<DockerContainer>,
+container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
@@ -51,7 +51,6 @@ impl ConfigActions {
pub async fn get(
&self,
ctx: &RpcContext,
-container: &Option<DockerContainer>,
pkg_id: &PackageId,
pkg_version: &Version,
volumes: &Volumes,
@@ -59,7 +58,6 @@ impl ConfigActions {
self.get
.execute(
ctx,
-container,
pkg_id,
pkg_version,
ProcedureName::GetConfig,
@@ -77,7 +75,6 @@ impl ConfigActions {
pub async fn set(
&self,
ctx: &RpcContext,
-container: &Option<DockerContainer>,
pkg_id: &PackageId,
pkg_version: &Version,
dependencies: &Dependencies,
@@ -88,7 +85,6 @@ impl ConfigActions {
.set
.execute(
ctx,
-container,
pkg_id,
pkg_version,
ProcedureName::SetConfig,

View File

@@ -21,7 +21,7 @@ use crate::dependencies::{
DependencyErrors, DependencyReceipt, TaggedDependencyError, TryHealReceipts,
};
use crate::install::cleanup::{remove_from_current_dependents_lists, UpdateDependencyReceipts};
-use crate::procedure::docker::DockerContainer;
+use crate::procedure::docker::DockerContainers;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::util::display_none;
use crate::util::serde::{display_serializable, parse_stdin_deserializable, IoFormat};
@@ -168,7 +168,6 @@ pub struct ConfigGetReceipts {
manifest_volumes: LockReceipt<crate::volume::Volumes, ()>,
manifest_version: LockReceipt<crate::util::Version, ()>,
manifest_config: LockReceipt<Option<ConfigActions>, ()>,
-docker_container: LockReceipt<DockerContainer, String>,
}
impl ConfigGetReceipts {
@@ -204,19 +203,11 @@ impl ConfigGetReceipts {
.map(|x| x.manifest().config())
.make_locker(LockType::Write)
.add_to_keys(locks);
-let docker_container = crate::db::DatabaseModel::new()
-.package_data()
-.star()
-.installed()
-.and_then(|x| x.manifest().container())
-.make_locker(LockType::Write)
-.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
manifest_volumes: manifest_volumes.verify(skeleton_key)?,
manifest_version: manifest_version.verify(skeleton_key)?,
manifest_config: manifest_config.verify(skeleton_key)?,
-docker_container: docker_container.verify(skeleton_key)?,
})
}
}
@@ -239,11 +230,9 @@ pub async fn get(
.await?
.ok_or_else(|| Error::new(eyre!("{} has no config", id), crate::ErrorKind::NotFound))?;
-let container = receipts.docker_container.get(&mut db, &id).await?;
let volumes = receipts.manifest_volumes.get(&mut db).await?;
let version = receipts.manifest_version.get(&mut db).await?;
-action.get(&ctx, &container, &id, &version, &volumes).await
+action.get(&ctx, &id, &version, &volumes).await
}
#[command(
@@ -286,7 +275,7 @@ pub struct ConfigReceipts {
pub current_dependencies: LockReceipt<CurrentDependencies, String>,
dependency_errors: LockReceipt<DependencyErrors, String>,
manifest_dependencies_config: LockReceipt<DependencyConfig, (String, String)>,
-docker_container: LockReceipt<DockerContainer, String>,
+docker_containers: LockReceipt<DockerContainers, String>,
}
impl ConfigReceipts {
@@ -391,11 +380,11 @@ impl ConfigReceipts {
.and_then(|x| x.manifest().dependencies().star().config())
.make_locker(LockType::Write)
.add_to_keys(locks);
-let docker_container = crate::db::DatabaseModel::new()
+let docker_containers = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
-.and_then(|x| x.manifest().container())
+.and_then(|x| x.manifest().containers())
.make_locker(LockType::Write)
.add_to_keys(locks);
@@ -417,7 +406,7 @@ impl ConfigReceipts {
current_dependencies: current_dependencies.verify(skeleton_key)?,
dependency_errors: dependency_errors.verify(skeleton_key)?,
manifest_dependencies_config: manifest_dependencies_config.verify(skeleton_key)?,
docker_container: docker_container.verify(skeleton_key)?,
docker_containers: docker_containers.verify(skeleton_key)?,
})
}
}
@@ -509,8 +498,6 @@ pub fn configure_rec<'a, Db: DbHandle>(
receipts: &'a ConfigReceipts,
) -> BoxFuture<'a, Result<(), Error>> {
async move {
-let container = receipts.docker_container.get(db, &id).await?;
-let container = &container;
// fetch data from db
let action = receipts
.config_actions
@@ -534,7 +521,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
let ConfigRes {
config: old_config,
spec,
-} = action.get(ctx, container, id, &version, &volumes).await?;
+} = action.get(ctx, id, &version, &volumes).await?;
// determine new config to use
let mut config = if let Some(config) = config.or_else(|| old_config.clone()) {
@@ -602,15 +589,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
let signal = if !dry_run {
// run config action
let res = action
-.set(
-ctx,
-container,
-id,
-&version,
-&dependencies,
-&volumes,
-&config,
-)
+.set(ctx, id, &version, &dependencies, &volumes, &config)
.await?;
// track dependencies with no pointers
@@ -702,7 +681,7 @@ pub fn configure_rec<'a, Db: DbHandle>(
.unwrap_or_default();
let next = Value::Object(config.clone());
for (dependent, dep_info) in dependents.0.iter().filter(|(dep_id, _)| dep_id != &id) {
-let dependent_container = receipts.docker_container.get(db, &dependent).await?;
+let dependent_container = receipts.docker_containers.get(db, &dependent).await?;
let dependent_container = &dependent_container;
// check if config passes dependent check
if let Some(cfg) = receipts

View File

@@ -25,7 +25,6 @@ use super::{Config, MatchError, NoMatchWithPath, TimeoutError, TypeOf};
use crate::config::ConfigurationError;
use crate::context::RpcContext;
use crate::net::interface::InterfaceId;
-use crate::procedure::docker::DockerContainer;
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::Error;
@@ -1883,7 +1882,6 @@ pub struct ConfigPointerReceipts {
manifest_volumes: LockReceipt<crate::volume::Volumes, String>,
manifest_version: LockReceipt<crate::util::Version, String>,
config_actions: LockReceipt<super::action::ConfigActions, String>,
-docker_container: LockReceipt<DockerContainer, String>,
}
impl ConfigPointerReceipts {
@@ -1920,20 +1918,12 @@ impl ConfigPointerReceipts {
.and_then(|x| x.manifest().config())
.make_locker(LockType::Read)
.add_to_keys(locks);
-let docker_container = crate::db::DatabaseModel::new()
-.package_data()
-.star()
-.installed()
-.and_then(|x| x.manifest().container())
-.make_locker(LockType::Write)
-.add_to_keys(locks);
move |skeleton_key| {
Ok(Self {
interface_addresses_receipt: interface_addresses_receipt(skeleton_key)?,
manifest_volumes: manifest_volumes.verify(skeleton_key)?,
config_actions: config_actions.verify(skeleton_key)?,
manifest_version: manifest_version.verify(skeleton_key)?,
-docker_container: docker_container.verify(skeleton_key)?,
})
}
}
@@ -1963,12 +1953,11 @@ impl ConfigPointer {
let version = receipts.manifest_version.get(db, id).await.ok().flatten();
let cfg_actions = receipts.config_actions.get(db, id).await.ok().flatten();
let volumes = receipts.manifest_volumes.get(db, id).await.ok().flatten();
-let container = receipts.docker_container.get(db, id).await.ok().flatten();
if let (Some(version), Some(cfg_actions), Some(volumes)) =
(&version, &cfg_actions, &volumes)
{
let cfg_res = cfg_actions
-.get(ctx, &container, &self.package_id, version, volumes)
+.get(ctx, &self.package_id, version, volumes)
.await
.map_err(|e| ConfigurationError::SystemError(e))?;
if let Some(cfg) = cfg_res.config {

View File

@@ -19,7 +19,7 @@ use crate::config::spec::PackagePointerSpec;
use crate::config::{not_found, Config, ConfigReceipts, ConfigSpec};
use crate::context::RpcContext;
use crate::db::model::{CurrentDependencies, CurrentDependents, InstalledPackageDataEntry};
-use crate::procedure::docker::DockerContainer;
+use crate::procedure::docker::DockerContainers;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::status::health_check::{HealthCheckId, HealthCheckResult};
@@ -64,7 +64,7 @@ pub struct TryHealReceipts {
manifest_version: LockReceipt<Version, String>,
current_dependencies: LockReceipt<CurrentDependencies, String>,
dependency_errors: LockReceipt<DependencyErrors, String>,
-docker_container: LockReceipt<DockerContainer, String>,
+docker_containers: LockReceipt<DockerContainers, String>,
}
impl TryHealReceipts {
@@ -112,11 +112,11 @@ impl TryHealReceipts {
.map(|x| x.status().dependency_errors())
.make_locker(LockType::Write)
.add_to_keys(locks);
-let docker_container = crate::db::DatabaseModel::new()
+let docker_containers = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
-.and_then(|x| x.manifest().container())
+.and_then(|x| x.manifest().containers())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
@@ -126,7 +126,7 @@ impl TryHealReceipts {
current_dependencies: current_dependencies.verify(skeleton_key)?,
manifest: manifest.verify(skeleton_key)?,
dependency_errors: dependency_errors.verify(skeleton_key)?,
-docker_container: docker_container.verify(skeleton_key)?,
+docker_containers: docker_containers.verify(skeleton_key)?,
})
}
}
@@ -203,7 +203,7 @@ impl DependencyError {
receipts: &'a TryHealReceipts,
) -> BoxFuture<'a, Result<Option<Self>, Error>> {
async move {
-let container = receipts.docker_container.get(db, id).await?;
+let container = receipts.docker_containers.get(db, id).await?;
Ok(match self {
DependencyError::NotInstalled => {
if receipts.status.get(db, dependency).await?.is_some() {
@@ -251,7 +251,6 @@ impl DependencyError {
cfg_info
.get(
ctx,
-&container,
dependency,
&dependency_manifest.version,
&dependency_manifest.volumes,
@@ -507,7 +506,7 @@ impl DependencyConfig {
pub async fn check(
&self,
ctx: &RpcContext,
-container: &Option<DockerContainer>,
+container: &Option<DockerContainers>,
dependent_id: &PackageId,
dependent_version: &Version,
dependent_volumes: &Volumes,
@@ -532,7 +531,7 @@ impl DependencyConfig {
pub async fn auto_configure(
&self,
ctx: &RpcContext,
-container: &Option<DockerContainer>,
+container: &Option<DockerContainers>,
dependent_id: &PackageId,
dependent_version: &Version,
dependent_volumes: &Volumes,
@@ -562,7 +561,7 @@ pub struct DependencyConfigReceipts {
dependency_config_action: LockReceipt<ConfigActions, ()>,
package_volumes: LockReceipt<Volumes, ()>,
package_version: LockReceipt<Version, ()>,
-docker_container: LockReceipt<DockerContainer, String>,
+docker_containers: LockReceipt<DockerContainers, String>,
}
impl DependencyConfigReceipts {
@@ -625,11 +624,11 @@ impl DependencyConfigReceipts {
.map(|x| x.manifest().version())
.make_locker(LockType::Write)
.add_to_keys(locks);
-let docker_container = crate::db::DatabaseModel::new()
+let docker_containers = crate::db::DatabaseModel::new()
.package_data()
.star()
.installed()
-.and_then(|x| x.manifest().container())
+.and_then(|x| x.manifest().containers())
.make_locker(LockType::Write)
.add_to_keys(locks);
move |skeleton_key| {
@@ -641,7 +640,7 @@ impl DependencyConfigReceipts {
dependency_config_action: dependency_config_action.verify(&skeleton_key)?,
package_volumes: package_volumes.verify(&skeleton_key)?,
package_version: package_version.verify(&skeleton_key)?,
-docker_container: docker_container.verify(&skeleton_key)?,
+docker_containers: docker_containers.verify(&skeleton_key)?,
})
}
}
@@ -716,8 +715,7 @@ pub async fn configure_logic(
let dependency_version = receipts.dependency_version.get(db).await?;
let dependency_volumes = receipts.dependency_volumes.get(db).await?;
let dependencies = receipts.dependencies.get(db).await?;
-let dependency_docker_container = receipts.docker_container.get(db, &*dependency_id).await?;
-let pkg_docker_container = receipts.docker_container.get(db, &*pkg_id).await?;
+let pkg_docker_container = receipts.docker_containers.get(db, &*pkg_id).await?;
let dependency = dependencies
.0
@@ -750,7 +748,6 @@ pub async fn configure_logic(
} = dependency_config_action
.get(
&ctx,
-&dependency_docker_container,
&dependency_id,
&dependency_version,
&dependency_volumes,

View File

@@ -1,10 +1,6 @@
-use std::collections::HashMap;
-use std::path::{Path, PathBuf};
-use std::sync::Arc;
+use std::{collections::HashMap, path::PathBuf, sync::Arc};
use bollard::image::ListImagesOptions;
use color_eyre::Report;
use futures::FutureExt;
use patch_db::{DbHandle, LockReceipt, LockTargetId, LockType, PatchDbHandle, Verifier};
use sqlx::{Executor, Postgres};
use tracing::instrument;
@@ -422,7 +418,7 @@ pub fn cleanup_folder(
Box::pin(async move {
let meta_data = match tokio::fs::metadata(&path).await {
Ok(a) => a,
-Err(e) => {
+Err(_e) => {
return;
}
};
@@ -441,7 +437,7 @@ pub fn cleanup_folder(
}
let mut read_dir = match tokio::fs::read_dir(&path).await {
Ok(a) => a,
-Err(e) => {
+Err(_e) => {
return;
}
};

View File

@@ -1318,7 +1318,6 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
.manifest
.migrations
.to(
-&prev.manifest.container,
ctx,
version,
pkg_id,
@@ -1329,7 +1328,7 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
let migration = manifest
.migrations
.from(
-&manifest.container,
+&manifest.containers,
ctx,
&prev.manifest.version,
pkg_id,
@@ -1413,7 +1412,6 @@ pub async fn install_s9pk<R: AsyncRead + AsyncSeek + Unpin + Send + Sync>(
manifest
.backup
.restore(
-&manifest.container,
ctx,
&mut tx,
&mut sql_tx,
@@ -1522,7 +1520,7 @@ async fn handle_recovered_package(
receipts: &ConfigReceipts,
) -> Result<(), Error> {
let configured = if let Some(migration) = manifest.migrations.from(
-&manifest.container,
+&manifest.containers,
ctx,
&recovered.version,
pkg_id,

View File

@@ -115,7 +115,7 @@ pub async fn check<Db: DbHandle>(
.health_checks
.check_all(
ctx,
-&manifest.container,
+&manifest.containers,
started,
id,
&manifest.version,

View File

@@ -10,15 +10,17 @@ use std::time::Duration;
use bollard::container::{KillContainerOptions, StopContainerOptions};
use chrono::Utc;
use color_eyre::eyre::eyre;
-use embassy_container_init::{InputJsonRpc, RpcId};
+use models::{ExecCommand, TermCommand};
use nix::sys::signal::Signal;
use num_enum::TryFromPrimitive;
use patch_db::DbHandle;
use sqlx::{Executor, Postgres};
use tokio::io::BufReader;
use tokio::sync::watch::error::RecvError;
use tokio::sync::watch::{channel, Receiver, Sender};
use tokio::sync::{Mutex, Notify, RwLock};
-use tokio::task::JoinHandle;
+use tokio::{sync::mpsc::UnboundedSender, task::JoinHandle};
+use tokio_stream::wrappers::UnboundedReceiverStream;
use torut::onion::TorSecretKeyV3;
use tracing::instrument;
@@ -27,7 +29,9 @@ use crate::manager::sync::synchronizer;
use crate::net::interface::InterfaceId;
use crate::net::GeneratedCertificateMountPoint;
use crate::notifications::NotificationLevel;
-use crate::procedure::docker::{DockerContainer, DockerInject, DockerProcedure};
+use crate::procedure::docker::{DockerContainer, DockerProcedure, LongRunning};
+#[cfg(feature = "js_engine")]
+use crate::procedure::js_scripts::JsProcedure;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::{Manifest, PackageId};
use crate::status::MainStatus;
@@ -191,14 +195,15 @@ async fn run_main(
let generated_certificate = generate_certificate(state, &interfaces).await?;
persistant.wait_for_persistant().await;
-let is_injectable_main = check_is_injectable_main(&state);
-let mut runtime = match is_injectable_main {
-true => {
-tokio::spawn(
-async move { start_up_inject_image(rt_state, generated_certificate).await },
-)
+let is_injectable_main = check_is_injectable_main(state);
+let mut runtime = match injectable_main(state) {
+InjectableMain::None => {
+tokio::spawn(async move { start_up_image(rt_state, generated_certificate).await })
+}
+#[cfg(feature = "js_engine")]
+InjectableMain::Script(_) => {
+tokio::spawn(async move { start_up_image(rt_state, generated_certificate).await })
+}
-false => tokio::spawn(async move { start_up_image(rt_state, generated_certificate).await }),
};
let ip = match is_injectable_main {
false => Some(match get_running_ip(state, &mut runtime).await {
@@ -219,6 +224,7 @@ async fn run_main(
let res = tokio::select! {
a = runtime => a.map_err(|_| Error::new(eyre!("Manager runtime panicked!"), crate::ErrorKind::Docker)).and_then(|a| a),
_ = health => Err(Error::new(eyre!("Health check daemon exited!"), crate::ErrorKind::Unknown)),
};
if let Some(ip) = ip {
remove_network_for_main(state, ip).await?;
@@ -237,29 +243,6 @@ async fn start_up_image(
.main
.execute::<(), NoOutput>(
&rt_state.ctx,
&rt_state.manifest.container,
&rt_state.manifest.id,
&rt_state.manifest.version,
ProcedureName::Main,
&rt_state.manifest.volumes,
None,
None,
)
.await
}
/// Start up the main procedure, but only once the certificates have been generated.
/// Note on _generated_certificate: taking it as a parameter guarantees the certificate exists before we start.
async fn start_up_inject_image(
rt_state: Arc<ManagerSharedState>,
_generated_certificate: GeneratedCertificateMountPoint,
) -> Result<Result<NoOutput, (i32, String)>, Error> {
rt_state
.manifest
.main
.inject::<(), NoOutput>(
&rt_state.ctx,
&rt_state.manifest.container,
&rt_state.manifest.id,
&rt_state.manifest.version,
ProcedureName::Main,
@@ -295,8 +278,8 @@ impl Manager {
let managers_persistant = persistant_container.clone();
let thread = tokio::spawn(async move {
tokio::select! {
_ = manager_thread_loop(recv, &thread_shared, managers_persistant) => (),
_ = synchronizer(&*thread_shared) => (),
_ = manager_thread_loop(recv, &thread_shared, managers_persistant.clone()) => (),
_ = synchronizer(&*thread_shared, managers_persistant) => (),
}
});
Ok(Manager {
@@ -348,16 +331,21 @@ impl Manager {
.commit_health_check_results
.store(false, Ordering::SeqCst);
let _ = self.shared.on_stop.send(OnStop::Exit);
let sigterm_timeout: Option<crate::util::serde::Duration> = match &self.shared.manifest.main
let sigterm_timeout: Option<crate::util::serde::Duration> = match self
.shared
.manifest
.containers
.as_ref()
.map(|x| x.main.sigterm_timeout)
{
PackageProcedure::Docker(DockerProcedure {
sigterm_timeout, ..
})
| PackageProcedure::DockerInject(DockerInject {
sigterm_timeout, ..
}) => sigterm_timeout.clone(),
#[cfg(feature = "js_engine")]
PackageProcedure::Script(_) => return Ok(()),
Some(a) => a,
None => match &self.shared.manifest.main {
PackageProcedure::Docker(DockerProcedure {
sigterm_timeout, ..
}) => *sigterm_timeout,
#[cfg(feature = "js_engine")]
PackageProcedure::Script(_) => return Ok(()),
},
};
self.persistant_container.stop().await;
@@ -392,7 +380,11 @@ impl Manager {
a => a?,
};
} else {
stop_non_first(&*self.shared.container_name).await;
stop_long_running_processes(
&*self.shared.container_name,
self.persistant_container.command_inserter.clone(),
)
.await;
}
self.shared.status.store(
@@ -414,6 +406,13 @@ impl Manager {
self.shared.synchronize_now.notify_waiters();
self.shared.synchronized.notified().await
}
pub fn exec_command(&self) -> ExecCommand {
self.persistant_container.exec_command()
}
pub fn term_command(&self) -> TermCommand {
self.persistant_container.term_command()
}
}
async fn manager_thread_loop(
@@ -460,7 +459,7 @@ async fn manager_thread_loop(
);
}
}
match run_main(&thread_shared, persistant_container.clone()).await {
match run_main(thread_shared, persistant_container.clone()).await {
Ok(Ok(NoOutput)) => (), // restart
Ok(Err(e)) => {
let mut db = thread_shared.ctx.db.handle();
@@ -489,12 +488,9 @@ async fn manager_thread_loop(
Some(3600) // 1 hour
)
.await;
match res {
Err(e) => {
tracing::error!("Failed to issue notification: {}", e);
tracing::debug!("{:?}", e);
}
Ok(()) => {}
if let Err(e) = res {
tracing::error!("Failed to issue notification: {}", e);
tracing::debug!("{:?}", e);
}
}
_ => tracing::error!("service just started. not issuing crash notification"),
@@ -510,12 +506,121 @@ async fn manager_thread_loop(
}
}
struct PersistantContainer {
struct LongRunningHandle(NonDetachingJoinHandle<()>);
pub struct CommandInserter {
command_counter: AtomicUsize,
input: UnboundedSender<InputJsonRpc>,
outputs: Arc<Mutex<BTreeMap<RpcId, UnboundedSender<embassy_container_init::Output>>>>,
}
impl Drop for CommandInserter {
fn drop(&mut self) {
use embassy_container_init::{Input, JsonRpc};
let CommandInserter {
command_counter,
input,
outputs: _,
} = self;
let upper: usize = command_counter.load(Ordering::Relaxed);
for i in 0..upper {
let _ignored_result = input.send(JsonRpc::new(RpcId::UInt(i as u32), Input::Term()));
}
}
}
impl CommandInserter {
fn new(
long_running: LongRunning,
input: UnboundedSender<InputJsonRpc>,
) -> (Self, LongRunningHandle) {
let LongRunning {
mut output,
running_output,
} = long_running;
let command_counter = AtomicUsize::new(0);
let outputs: Arc<Mutex<BTreeMap<RpcId, UnboundedSender<embassy_container_init::Output>>>> =
Default::default();
let handle = LongRunningHandle(running_output);
tokio::spawn({
let outputs = outputs.clone();
async move {
while let Some(output) = output.recv().await {
let (id, output) = output.into_pair();
let mut outputs = outputs.lock().await;
let output_sender = outputs.get_mut(&id);
if let Some(output_sender) = output_sender {
if let Err(err) = output_sender.send(output) {
tracing::warn!("Could no longer send an output");
tracing::debug!("{err:?}");
outputs.remove(&id);
}
}
}
}
});
(
Self {
command_counter,
input,
outputs,
},
handle,
)
}
pub async fn exec_command(
&self,
command: String,
args: Vec<String>,
sender: UnboundedSender<embassy_container_init::Output>,
timeout: Option<Duration>,
) -> Option<RpcId> {
use embassy_container_init::{Input, JsonRpc};
let mut outputs = self.outputs.lock().await;
let command_counter = self.command_counter.fetch_add(1, Ordering::SeqCst) as u32;
let command_id = RpcId::UInt(command_counter);
outputs.insert(command_id.clone(), sender);
if let Some(timeout) = timeout {
tokio::spawn({
let input = self.input.clone();
let command_id = command_id.clone();
async move {
tokio::time::sleep(timeout).await;
let _ignored_output = input.send(JsonRpc::new(command_id, Input::Kill()));
}
});
}
if let Err(err) = self.input.send(JsonRpc::new(
command_id.clone(),
Input::Command { command, args },
)) {
tracing::warn!("Failed to send command to input channel");
tracing::debug!("{err:?}");
return None;
}
Some(command_id)
}
pub async fn term(&self, id: RpcId) {
use embassy_container_init::{Input, JsonRpc};
self.outputs.lock().await.remove(&id);
let _ignored_term = self.input.send(JsonRpc::new(id, Input::Term()));
}
pub async fn term_all(&self) {
for i in 0..self.command_counter.load(Ordering::Relaxed) {
self.term(RpcId::UInt(i as u32)).await;
}
}
}
type RunningDocker =
Arc<Mutex<Option<NonDetachingJoinHandle<Result<Result<NoOutput, (i32, String)>, Error>>>>>;
pub struct PersistantContainer {
container_name: String,
running_docker:
Arc<Mutex<Option<NonDetachingJoinHandle<Result<Result<NoOutput, (i32, String)>, Error>>>>>,
running_docker: RunningDocker,
should_stop_running: Arc<std::sync::atomic::AtomicBool>,
wait_for_start: (Sender<bool>, Receiver<bool>),
command_inserter: Arc<Mutex<Option<CommandInserter>>>,
}
impl PersistantContainer {
@@ -526,7 +631,8 @@ impl PersistantContainer {
container_name: thread_shared.container_name.clone(),
running_docker: Arc::new(Mutex::new(None)),
should_stop_running: Arc::new(AtomicBool::new(false)),
wait_for_start: wait_for_start,
wait_for_start,
command_inserter: Default::default(),
});
tokio::spawn(persistant_container(
thread_shared.clone(),
@@ -542,12 +648,7 @@ impl PersistantContainer {
*running_docker = None;
use tokio::process::Command;
if let Err(_err) = Command::new("docker")
.args(["stop", "-t", "0", &*container_name])
.output()
.await
{}
if let Err(_err) = Command::new("docker")
.args(["kill", &*container_name])
.args(["stop", "-t", "30", container_name])
.output()
.await
{}
@@ -569,22 +670,83 @@ impl PersistantContainer {
async fn done_waiting(&self) {
self.wait_for_start.0.send(false).unwrap();
}
fn term_command(&self) -> TermCommand {
let cloned = self.command_inserter.clone();
Arc::new(move |id| {
let cloned = cloned.clone();
Box::pin(async move {
let lock = cloned.lock().await;
let _id = match &*lock {
Some(command_inserter) => command_inserter.term(id).await,
None => {
return Err("Couldn't get a command inserter for the current service".to_string())
}
};
Ok::<(), String>(())
})
})
}
fn exec_command(&self) -> ExecCommand {
let cloned = self.command_inserter.clone();
/// A handle that, on drop, cleans up all the ids that were inserted in this fn.
struct Cleaner {
command_inserter: Arc<Mutex<Option<CommandInserter>>>,
ids: ::std::collections::BTreeSet<RpcId>,
}
impl Drop for Cleaner {
fn drop(&mut self) {
let command_inserter = self.command_inserter.clone();
let ids = ::std::mem::take(&mut self.ids);
tokio::spawn(async move {
let command_inserter_lock = command_inserter.lock().await;
let command_inserter = match &*command_inserter_lock {
Some(a) => a,
None => {
return;
}
};
for id in ids {
command_inserter.term(id).await;
}
});
}
}
let cleaner = Arc::new(Mutex::new(Cleaner {
command_inserter: cloned.clone(),
ids: Default::default(),
}));
Arc::new(move |command, args, sender, timeout| {
let cloned = cloned.clone();
let cleaner = cleaner.clone();
Box::pin(async move {
let lock = cloned.lock().await;
let id = match &*lock {
Some(command_inserter) => {
if let Some(id) = command_inserter
.exec_command(command.clone(), args.clone(), sender, timeout)
.await
{
let mut cleaner = cleaner.lock().await;
cleaner.ids.insert(id.clone());
id
} else {
return Err("Couldn't get command started".to_string());
}
}
None => {
return Err("Couldn't get a command inserter for the current service".to_string())
}
};
Ok::<RpcId, String>(id)
})
})
}
}
impl Drop for PersistantContainer {
fn drop(&mut self) {
self.should_stop_running.store(true, Ordering::SeqCst);
let container_name = self.container_name.clone();
let running_docker = self.running_docker.clone();
tokio::spawn(async move {
let mut running_docker = running_docker.lock().await;
*running_docker = None;
use std::process::Command;
if let Err(_err) = Command::new("docker")
.args(["kill", &*container_name])
.output()
{}
});
}
}
@@ -594,12 +756,15 @@ async fn persistant_container(
) {
let main_docker_procedure_for_long = injectable_main(&thread_shared);
match main_docker_procedure_for_long {
Some(main) => loop {
InjectableMain::None => futures::future::pending().await,
#[cfg(feature = "js_engine")]
InjectableMain::Script((container_inject, procedure)) => loop {
let main = DockerProcedure::main_docker_procedure_js(container_inject, procedure);
if container.should_stop_running.load(Ordering::SeqCst) {
return;
}
container.start_wait().await;
match run_persistant_container(&thread_shared, container.clone(), main.clone()).await {
match run_persistant_container(&thread_shared, container.clone(), main).await {
Ok(_) => (),
Err(e) => {
tracing::error!("failed to start persistant container: {}", e);
@@ -607,60 +772,50 @@ async fn persistant_container(
}
}
},
None => futures::future::pending().await,
}
}
fn injectable_main(thread_shared: &Arc<ManagerSharedState>) -> Option<Arc<DockerProcedure>> {
if let (
PackageProcedure::DockerInject(DockerInject {
system,
entrypoint,
args,
io_format,
sigterm_timeout,
}),
Some(DockerContainer {
image,
mounts,
shm_size_mb,
}),
) = (
#[cfg(not(feature = "js_engine"))]
enum InjectableMain {
None,
}
#[cfg(feature = "js_engine")]
enum InjectableMain<'a> {
None,
Script((&'a DockerContainer, &'a JsProcedure)),
}
fn injectable_main(thread_shared: &Arc<ManagerSharedState>) -> InjectableMain {
match (
&thread_shared.manifest.main,
&thread_shared.manifest.container,
&thread_shared.manifest.containers.as_ref().map(|x| &x.main),
) {
Some(Arc::new(DockerProcedure {
image: image.clone(),
mounts: mounts.clone(),
io_format: *io_format,
shm_size_mb: *shm_size_mb,
sigterm_timeout: *sigterm_timeout,
system: *system,
entrypoint: "sleep".to_string(),
args: vec!["infinity".to_string()],
}))
} else {
None
#[cfg(feature = "js_engine")]
(PackageProcedure::Script(inject), Some(container)) => {
InjectableMain::Script((container, inject))
}
_ => InjectableMain::None,
}
}
fn check_is_injectable_main(thread_shared: &ManagerSharedState) -> bool {
match &thread_shared.manifest.main {
PackageProcedure::Docker(_a) => false,
PackageProcedure::DockerInject(a) => true,
#[cfg(feature = "js_engine")]
PackageProcedure::Script(_) => false,
PackageProcedure::Script(_) => true,
}
}
async fn run_persistant_container(
state: &Arc<ManagerSharedState>,
persistant: Arc<PersistantContainer>,
docker_procedure: Arc<DockerProcedure>,
docker_procedure: DockerProcedure,
) -> Result<(), Error> {
let interfaces = states_main_interfaces(state)?;
let generated_certificate = generate_certificate(state, &interfaces).await?;
let mut runtime = tokio::spawn(long_running_docker(state.clone(), docker_procedure));
let mut runtime =
long_running_docker(state.clone(), docker_procedure, persistant.clone()).await?;
let ip = match get_running_ip(state, &mut runtime).await {
let ip = match get_long_running_ip(state, &mut runtime).await {
GetRunninIp::Ip(x) => x,
GetRunninIp::Error(e) => return Err(e),
GetRunninIp::EarlyExit(e) => {
@@ -674,7 +829,7 @@ async fn run_persistant_container(
fetch_starting_to_running(state);
let res = tokio::select! {
a = runtime => a.map_err(|_| Error::new(eyre!("Manager runtime panicked!"), crate::ErrorKind::Docker)).map(|_| ()),
a = runtime.0 => a.map_err(|_| Error::new(eyre!("Manager runtime panicked!"), crate::ErrorKind::Docker)).map(|_| ()),
};
remove_network_for_main(state, ip).await?;
res
@@ -682,19 +837,23 @@ async fn run_persistant_container(
async fn long_running_docker(
rt_state: Arc<ManagerSharedState>,
main_status: Arc<DockerProcedure>,
) -> Result<Result<NoOutput, (i32, String)>, Error> {
main_status
.execute::<(), NoOutput>(
main_status: DockerProcedure,
container: Arc<PersistantContainer>,
) -> Result<LongRunningHandle, Error> {
let (sender, receiver) = tokio::sync::mpsc::unbounded_channel();
let long_running = main_status
.long_running_execute(
&rt_state.ctx,
&rt_state.manifest.id,
&rt_state.manifest.version,
ProcedureName::LongRunning,
&rt_state.manifest.volumes,
None,
None,
UnboundedReceiverStream::new(receiver),
)
.await
.await?;
let (command_inserter, long_running_handle) = CommandInserter::new(long_running, sender);
*container.command_inserter.lock().await = Some(command_inserter);
Ok(long_running_handle)
}
async fn remove_network_for_main(
@@ -778,9 +937,11 @@ enum GetRunninIp {
EarlyExit(Result<NoOutput, (i32, String)>),
}
type RuntimeOfCommand = JoinHandle<Result<Result<NoOutput, (i32, String)>, Error>>;
async fn get_running_ip(
state: &Arc<ManagerSharedState>,
mut runtime: &mut tokio::task::JoinHandle<Result<Result<NoOutput, (i32, String)>, Error>>,
mut runtime: &mut RuntimeOfCommand,
) -> GetRunninIp {
loop {
match container_inspect(state).await {
@@ -805,8 +966,8 @@ async fn get_running_ip(
}) => (),
Err(e) => return GetRunninIp::Error(e.into()),
}
match futures::poll!(&mut runtime) {
Poll::Ready(res) => match res {
if let Poll::Ready(res) = futures::poll!(&mut runtime) {
match res {
Ok(Ok(response)) => return GetRunninIp::EarlyExit(response),
Err(_) | Ok(Err(_)) => {
return GetRunninIp::Error(Error::new(
@@ -814,8 +975,48 @@ async fn get_running_ip(
crate::ErrorKind::Docker,
))
}
},
_ => (),
}
}
}
}
async fn get_long_running_ip(
state: &Arc<ManagerSharedState>,
runtime: &mut LongRunningHandle,
) -> GetRunninIp {
loop {
match container_inspect(state).await {
Ok(res) => {
match res
.network_settings
.and_then(|ns| ns.networks)
.and_then(|mut n| n.remove("start9"))
.and_then(|es| es.ip_address)
.filter(|ip| !ip.is_empty())
.map(|ip| ip.parse())
.transpose()
{
Ok(Some(ip_addr)) => return GetRunninIp::Ip(ip_addr),
Ok(None) => (),
Err(e) => return GetRunninIp::Error(e.into()),
}
}
Err(bollard::errors::Error::DockerResponseServerError {
status_code: 404, // NOT FOUND
..
}) => (),
Err(e) => return GetRunninIp::Error(e.into()),
}
if let Poll::Ready(res) = futures::poll!(&mut runtime.0) {
match res {
Ok(_) => return GetRunninIp::EarlyExit(Ok(NoOutput)),
Err(_e) => {
return GetRunninIp::Error(Error::new(
eyre!("Manager runtime panicked!"),
crate::ErrorKind::Docker,
))
}
}
}
}
}
@@ -838,11 +1039,11 @@ async fn generate_certificate(
TorSecretKeyV3,
)>,
) -> Result<GeneratedCertificateMountPoint, Error> {
Ok(state
state
.ctx
.net_controller
.generate_certificate_mountpoint(&state.manifest.id, interfaces)
.await?)
.await
}
fn states_main_interfaces(
@@ -855,7 +1056,7 @@ fn states_main_interfaces(
)>,
Error,
> {
Ok(state
state
.manifest
.interfaces
.0
@@ -873,11 +1074,14 @@ fn states_main_interfaces(
.clone(),
))
})
.collect::<Result<Vec<_>, Error>>()?)
.collect::<Result<Vec<_>, Error>>()
}
#[instrument(skip(shared))]
async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
#[instrument(skip(shared, persistant_container))]
async fn stop(
shared: &ManagerSharedState,
persistant_container: Arc<PersistantContainer>,
) -> Result<(), Error> {
shared
.commit_health_check_results
.store(false, Ordering::SeqCst);
@@ -896,9 +1100,6 @@ async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
match &shared.manifest.main {
PackageProcedure::Docker(DockerProcedure {
sigterm_timeout, ..
})
| PackageProcedure::DockerInject(DockerInject {
sigterm_timeout, ..
}) => {
if !check_is_injectable_main(shared) {
match shared
@@ -930,11 +1131,23 @@ async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
a => a?,
};
} else {
stop_non_first(&shared.container_name).await;
stop_long_running_processes(
&shared.container_name,
persistant_container.command_inserter.clone(),
)
.await;
}
}
#[cfg(feature = "js_engine")]
PackageProcedure::Script(_) => return Ok(()),
PackageProcedure::Script(_) => {
if check_is_injectable_main(shared) {
stop_long_running_processes(
&shared.container_name,
persistant_container.command_inserter.clone(),
)
.await;
}
}
};
tracing::debug!("Stopping a docker");
shared.status.store(
@@ -945,11 +1158,13 @@ async fn stop(shared: &ManagerSharedState) -> Result<(), Error> {
}
/// The long-running `sleep infinity` is pid 1, so we kill all the other processes.
async fn stop_non_first(container_name: &str) {
// tracing::error!("BLUJ TODO: sudo docker exec {} sh -c \"ps ax | awk '\\$1 ~ /^[:0-9:]/ && \\$1 > 1 {{print \\$1}}' | xargs kill\"", container_name);
// (sleep infinity) & export RUNNING=$! && echo $! && (wait $RUNNING && echo "DONE FOR $RUNNING") &
// (RUNNING=$(sleep infinity & echo $!); echo "running $RUNNING"; wait $RUNNING; echo "DONE FOR ?") &
async fn stop_long_running_processes(
container_name: &str,
command_inserter: Arc<Mutex<Option<CommandInserter>>>,
) {
if let Some(command_inserter) = &*command_inserter.lock().await {
command_inserter.term_all().await;
}
let _ = tokio::process::Command::new("docker")
.args([
@@ -964,24 +1179,6 @@ async fn stop_non_first(container_name: &str) {
.await;
}
// #[test]
// fn test_stop_non_first() {
// assert_eq!(
// &format!(
// "{}",
// tokio::process::Command::new("docker").args([
// "container",
// "exec",
// "container_name",
// "sh",
// "-c",
// "ps ax | awk \"\\$1 ~ /^[:0-9:]/ && \\$1 > 1 {print \\$1}\"| xargs kill",
// ])
// ),
// ""
// );
// }
#[instrument(skip(shared))]
async fn start(shared: &ManagerSharedState) -> Result<(), Error> {
shared.on_stop.send(OnStop::Restart).map_err(|_| {
@@ -1002,8 +1199,11 @@ async fn start(shared: &ManagerSharedState) -> Result<(), Error> {
Ok(())
}
#[instrument(skip(shared))]
async fn pause(shared: &ManagerSharedState) -> Result<(), Error> {
#[instrument(skip(shared, persistant_container))]
async fn pause(
shared: &ManagerSharedState,
persistant_container: Arc<PersistantContainer>,
) -> Result<(), Error> {
if let Err(e) = shared
.ctx
.docker
@@ -1012,7 +1212,7 @@ async fn pause(shared: &ManagerSharedState) -> Result<(), Error> {
{
tracing::error!("failed to pause container. stopping instead. {}", e);
tracing::debug!("{:?}", e);
return stop(shared).await;
return stop(shared, persistant_container).await;
}
shared
.status

@@ -1,16 +1,19 @@
use std::collections::BTreeMap;
use std::convert::TryInto;
use std::sync::atomic::Ordering;
use std::time::Duration;
use std::{collections::BTreeMap, sync::Arc};
use chrono::Utc;
use super::{pause, resume, start, stop, ManagerSharedState, Status};
use super::{pause, resume, start, stop, ManagerSharedState, PersistantContainer, Status};
use crate::status::MainStatus;
use crate::Error;
/// Allocates a db handle. DO NOT CALL with a db handle already in scope
async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error> {
async fn synchronize_once(
shared: &ManagerSharedState,
persistant_container: Arc<PersistantContainer>,
) -> Result<Status, Error> {
let mut db = shared.ctx.db.handle();
let mut status = crate::db::DatabaseModel::new()
.package_data()
@@ -45,16 +48,16 @@ async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error>
},
Status::Starting => match *status {
MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
stop(shared).await?;
stop(shared, persistant_container).await?;
}
MainStatus::Starting { .. } | MainStatus::Running { .. } => (),
MainStatus::BackingUp { .. } => {
pause(shared).await?;
pause(shared, persistant_container).await?;
}
},
Status::Running => match *status {
MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
stop(shared).await?;
stop(shared, persistant_container).await?;
}
MainStatus::Starting { .. } => {
*status = MainStatus::Running {
@@ -64,12 +67,12 @@ async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error>
}
MainStatus::Running { .. } => (),
MainStatus::BackingUp { .. } => {
pause(shared).await?;
pause(shared, persistant_container).await?;
}
},
Status::Paused => match *status {
MainStatus::Stopped | MainStatus::Stopping | MainStatus::Restarting => {
stop(shared).await?;
stop(shared, persistant_container).await?;
}
MainStatus::Starting { .. } | MainStatus::Running { .. } => {
resume(shared).await?;
@@ -82,13 +85,16 @@ async fn synchronize_once(shared: &ManagerSharedState) -> Result<Status, Error>
Ok(manager_status)
}
pub async fn synchronizer(shared: &ManagerSharedState) {
pub async fn synchronizer(
shared: &ManagerSharedState,
persistant_container: Arc<PersistantContainer>,
) {
loop {
tokio::select! {
_ = tokio::time::sleep(Duration::from_secs(5)) => (),
_ = shared.synchronize_now.notified() => (),
}
let status = match synchronize_once(shared).await {
let status = match synchronize_once(shared, persistant_container.clone()).await {
Err(e) => {
tracing::error!(
"Synchronizer for {}@{} failed: {}",

@@ -1,11 +1,7 @@
use std::sync::Arc;
use aes::cipher::{CipherKey, NewCipher, Nonce, StreamCipher};
use aes::Aes256Ctr;
use futures::Stream;
use hmac::Hmac;
use josekit::jwk::Jwk;
use rpc_toolkit::hyper::{self, Body};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use tracing::instrument;
@@ -113,6 +109,6 @@ fn test_gen_awk() {
}"#).unwrap();
assert_eq!(
"testing12345",
&encrypted.decrypt(Arc::new(private_key)).unwrap()
&encrypted.decrypt(std::sync::Arc::new(private_key)).unwrap()
);
}

@@ -10,7 +10,7 @@ use tracing::instrument;
use crate::context::RpcContext;
use crate::id::ImageId;
use crate::procedure::docker::DockerContainer;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::Version;
@@ -27,7 +27,7 @@ impl Migrations {
#[instrument]
pub fn validate(
&self,
container: &Option<DockerContainer>,
container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
@@ -58,7 +58,7 @@ impl Migrations {
#[instrument(skip(ctx))]
pub fn from<'a>(
&'a self,
container: &'a Option<DockerContainer>,
container: &'a Option<DockerContainers>,
ctx: &'a RpcContext,
version: &'a Version,
pkg_id: &'a PackageId,
@@ -70,11 +70,10 @@ impl Migrations {
.iter()
.find(|(range, _)| version.satisfies(*range))
{
Some(
Some(async move {
migration
.execute(
ctx,
container,
pkg_id,
pkg_version,
ProcedureName::Migration, // Migrations cannot be executed concurrently
@@ -88,8 +87,9 @@ impl Migrations {
Error::new(eyre!("{}", e.1), crate::ErrorKind::MigrationFailed)
})
})
}),
)
})
.await
})
} else {
None
}
@@ -98,7 +98,6 @@ impl Migrations {
#[instrument(skip(ctx))]
pub fn to<'a>(
&'a self,
container: &'a Option<DockerContainer>,
ctx: &'a RpcContext,
version: &'a Version,
pkg_id: &'a PackageId,
@@ -106,11 +105,10 @@ impl Migrations {
volumes: &'a Volumes,
) -> Option<impl Future<Output = Result<MigrationRes, Error>> + 'a> {
if let Some((_, migration)) = self.to.iter().find(|(range, _)| version.satisfies(*range)) {
Some(
Some(async move {
migration
.execute(
ctx,
container,
pkg_id,
pkg_version,
ProcedureName::Migration,
@@ -124,8 +122,9 @@ impl Migrations {
Error::new(eyre!("{}", e.1), crate::ErrorKind::MigrationFailed)
})
})
}),
)
})
.await
})
} else {
None
}

@@ -9,14 +9,19 @@ use async_stream::stream;
use bollard::container::RemoveContainerOptions;
use color_eyre::eyre::eyre;
use color_eyre::Report;
use embassy_container_init::{InputJsonRpc, OutputJsonRpc};
use futures::future::Either as EitherFuture;
use futures::TryStreamExt;
use futures::{Stream, StreamExt, TryFutureExt, TryStreamExt};
use helpers::NonDetachingJoinHandle;
use nix::sys::signal;
use nix::unistd::Pid;
use serde::{Deserialize, Serialize};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use serde_json::Value;
use tokio::io::{AsyncBufRead, AsyncBufReadExt, BufReader};
use tokio::{
io::{AsyncBufRead, AsyncBufReadExt, BufReader},
process::Child,
sync::mpsc::UnboundedReceiver,
};
use tracing::instrument;
use super::ProcedureName;
@@ -41,6 +46,17 @@ lazy_static::lazy_static! {
};
}
#[derive(Clone, Debug, Deserialize, Serialize, patch_db::HasModel)]
#[serde(rename_all = "kebab-case")]
pub struct DockerContainers {
pub main: DockerContainer,
// #[serde(default)]
// pub aux: BTreeMap<String, DockerContainer>,
}
/// Like the docker procedures of past designs, but this time the entrypoints
/// and args are deliberately not part of this struct. Used when we are
/// creating our own entrypoints.
#[derive(Clone, Debug, Deserialize, Serialize, patch_db::HasModel)]
#[serde(rename_all = "kebab-case")]
pub struct DockerContainer {
@@ -49,6 +65,10 @@ pub struct DockerContainer {
pub mounts: BTreeMap<VolumeId, PathBuf>,
#[serde(default)]
pub shm_size_mb: Option<usize>, // TODO: use postfix sizing? like 1k vs 1m vs 1g
#[serde(default)]
pub sigterm_timeout: Option<SerdeDuration>,
#[serde(default)]
pub system: bool,
}
#[derive(Clone, Debug, Deserialize, Serialize)]
@@ -70,7 +90,7 @@ pub struct DockerProcedure {
pub shm_size_mb: Option<usize>, // TODO: use postfix sizing? like 1k vs 1m vs 1g
}
#[derive(Clone, Debug, Deserialize, Serialize)]
#[derive(Clone, Debug, Deserialize, Serialize, Default)]
#[serde(rename_all = "kebab-case")]
pub struct DockerInject {
#[serde(default)]
@@ -83,26 +103,42 @@ pub struct DockerInject {
#[serde(default)]
pub sigterm_timeout: Option<SerdeDuration>,
}
impl From<(&DockerContainer, &DockerInject)> for DockerProcedure {
fn from((container, injectable): (&DockerContainer, &DockerInject)) -> Self {
impl DockerProcedure {
pub fn main_docker_procedure(
container: &DockerContainer,
injectable: &DockerInject,
) -> DockerProcedure {
DockerProcedure {
image: container.image.clone(),
system: injectable.system.clone(),
system: injectable.system,
entrypoint: injectable.entrypoint.clone(),
args: injectable.args.clone(),
mounts: container.mounts.clone(),
io_format: injectable.io_format.clone(),
sigterm_timeout: injectable.sigterm_timeout.clone(),
shm_size_mb: container.shm_size_mb.clone(),
io_format: injectable.io_format,
sigterm_timeout: injectable.sigterm_timeout,
shm_size_mb: container.shm_size_mb,
}
}
#[cfg(feature = "js_engine")]
pub fn main_docker_procedure_js(
container: &DockerContainer,
_procedure: &super::js_scripts::JsProcedure,
) -> DockerProcedure {
DockerProcedure {
image: container.image.clone(),
system: container.system,
entrypoint: "sleep".to_string(),
args: Vec::new(),
mounts: container.mounts.clone(),
io_format: None,
sigterm_timeout: container.sigterm_timeout,
shm_size_mb: container.shm_size_mb,
}
}
}
impl DockerProcedure {
pub fn validate(
&self,
eos_version: &Version,
_eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
expected_io: bool,
@@ -116,10 +152,8 @@ impl DockerProcedure {
if !SYSTEM_IMAGES.contains(&self.image) {
color_eyre::eyre::bail!("unknown system image: {}", self.image);
}
} else {
if !image_ids.contains(&self.image) {
color_eyre::eyre::bail!("image for {} not contained in package", self.image);
}
} else if !image_ids.contains(&self.image) {
color_eyre::eyre::bail!("image for {} not contained in package", self.image);
}
if expected_io && self.io_format.is_none() {
color_eyre::eyre::bail!("expected io-format");
@@ -128,7 +162,7 @@ impl DockerProcedure {
}
#[instrument(skip(ctx, input))]
pub async fn execute<I: Serialize, O: for<'de> Deserialize<'de>>(
pub async fn execute<I: Serialize, O: DeserializeOwned>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
@@ -217,7 +251,7 @@ impl DockerProcedure {
handle
.stdout
.take()
.ok_or_else(|| eyre!("Can't takeout stout"))
.ok_or_else(|| eyre!("Can't takeout stdout in execute"))
.with_kind(crate::ErrorKind::Docker)?,
);
let output = NonDetachingJoinHandle::from(tokio::spawn(async move {
@@ -307,25 +341,83 @@ impl DockerProcedure {
)
}
/// Creates a new exec runner to which we pass the commands to run.
/// The idea is that we send it a command and get the outputs filtered back through the manager.
/// This lets us run commands without the cost of spawning `docker exec`, which is known to have
/// a delay of > 200ms, which is not acceptable.
#[instrument(skip(ctx, input))]
pub async fn inject<I: Serialize, O: for<'de> Deserialize<'de>>(
pub async fn long_running_execute<S>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
name: ProcedureName,
volumes: &Volumes,
input: S,
) -> Result<LongRunning, Error>
where
S: Stream<Item = InputJsonRpc> + Send + 'static,
{
let name = name.docker_name();
let name: Option<&str> = name.as_deref();
let container_name = Self::container_name(pkg_id, name);
let mut cmd = LongRunning::setup_long_running_docker_cmd(
self,
ctx,
&container_name,
volumes,
pkg_id,
pkg_version,
)
.await?;
let mut handle = cmd.spawn().with_kind(crate::ErrorKind::Docker)?;
let input_handle = LongRunning::spawn_input_handle(&mut handle, input)?
.map_err(|e| eyre!("Input Handle Error: {e:?}"));
let (output, output_handle) = LongRunning::spawn_output_handle(&mut handle)?;
let output_handle = output_handle.map_err(|e| eyre!("Output Handle Error: {e:?}"));
let err_handle = LongRunning::spawn_error_handle(&mut handle)?
.map_err(|e| eyre!("Err Handle Error: {e:?}"));
let running_output = NonDetachingJoinHandle::from(tokio::spawn(async move {
if let Err(err) = tokio::select!(
x = handle.wait().map_err(|e| eyre!("Runtime error: {e:?}")) => x.map(|_| ()),
x = err_handle => x.map(|_| ()),
x = output_handle => x.map(|_| ()),
x = input_handle => x.map(|_| ())
) {
tracing::debug!("{:?}", err);
tracing::error!("Join error");
}
}));
Ok(LongRunning {
output,
running_output,
})
}
#[instrument(skip(_ctx, input))]
pub async fn inject<I: Serialize, O: DeserializeOwned>(
&self,
_ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
name: ProcedureName,
volumes: &Volumes,
input: Option<I>,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error> {
let name = name.docker_name();
let name: Option<&str> = name.as_ref().map(|x| &**x);
let name: Option<&str> = name.as_deref();
let mut cmd = tokio::process::Command::new("docker");
tracing::debug!("{:?} is exec", name);
cmd.arg("exec");
cmd.args(self.docker_args_inject(ctx, pkg_id, pkg_version).await?);
cmd.args(self.docker_args_inject(pkg_id).await?);
let input_buf = if let (Some(input), Some(format)) = (&input, &self.io_format) {
cmd.stdin(std::process::Stdio::piped());
Some(format.to_vec(input)?)
@@ -372,7 +464,7 @@ impl DockerProcedure {
handle
.stdout
.take()
.ok_or_else(|| eyre!("Can't takeout stout"))
.ok_or_else(|| eyre!("Can't takeout stdout in inject"))
.with_kind(crate::ErrorKind::Docker)?,
);
let output = NonDetachingJoinHandle::from(tokio::spawn(async move {
@@ -463,7 +555,7 @@ impl DockerProcedure {
}
#[instrument(skip(ctx, input))]
pub async fn sandboxed<I: Serialize, O: for<'de> Deserialize<'de>>(
pub async fn sandboxed<I: Serialize, O: DeserializeOwned>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
@@ -513,7 +605,7 @@ impl DockerProcedure {
handle
.stdout
.take()
.ok_or_else(|| eyre!("Can't takeout stout"))
.ok_or_else(|| eyre!("Can't takeout stdout in sandboxed"))
.with_kind(crate::ErrorKind::Docker)?,
);
let output = NonDetachingJoinHandle::from(tokio::spawn(async move {
@@ -607,7 +699,7 @@ impl DockerProcedure {
continue;
};
let src = volume.path_for(&ctx.datadir, pkg_id, pkg_version, volume_id);
if let Err(e) = tokio::fs::metadata(&src).await {
if let Err(_e) = tokio::fs::metadata(&src).await {
tokio::fs::create_dir_all(&src).await?;
}
res.push(OsStr::new("--mount").into());
@@ -626,7 +718,6 @@ impl DockerProcedure {
res.push(OsString::from(format!("{}m", shm_size_mb)).into());
}
res.push(OsStr::new("--interactive").into());
res.push(OsStr::new("--log-driver=journald").into());
res.push(OsStr::new("--entrypoint").into());
res.push(OsStr::new(&self.entrypoint).into());
@@ -649,12 +740,7 @@ impl DockerProcedure {
+ self.args.len(), // [ARG...]
)
}
async fn docker_args_inject(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
) -> Result<Vec<Cow<'_, OsStr>>, Error> {
async fn docker_args_inject(&self, pkg_id: &PackageId) -> Result<Vec<Cow<'_, OsStr>>, Error> {
let mut res = self.new_docker_args();
if let Some(shm_size_mb) = self.shm_size_mb {
res.push(OsStr::new("--shm-size").into());
@@ -693,6 +779,215 @@ impl<T> RingVec<T> {
}
}
/// This is created when we want a long running docker executor that we can send commands to and get responses back.
/// We want it long running because we want the equivalent of docker exec without the heavy cost of a 400+ ms time lag.
/// The long running container also lets us start/stop the services quicker.
pub struct LongRunning {
pub output: UnboundedReceiver<OutputJsonRpc>,
pub running_output: NonDetachingJoinHandle<()>,
}
impl LongRunning {
async fn setup_long_running_docker_cmd(
docker: &DockerProcedure,
ctx: &RpcContext,
container_name: &str,
volumes: &Volumes,
pkg_id: &PackageId,
pkg_version: &Version,
) -> Result<tokio::process::Command, Error> {
const INIT_EXEC: &str = "/start9/embassy_container_init";
const BIND_LOCATION: &str = "/usr/lib/embassy/container";
tracing::trace!("setup_long_running_docker_cmd");
LongRunning::cleanup_previous_container(ctx, container_name).await?;
let image_architecture = {
let mut cmd = tokio::process::Command::new("docker");
cmd.arg("image")
.arg("inspect")
.arg("--format")
.arg("'{{.Architecture}}'");
if docker.system {
cmd.arg(docker.image.for_package(SYSTEM_PACKAGE_ID, None));
} else {
cmd.arg(docker.image.for_package(pkg_id, Some(pkg_version)));
}
let arch = String::from_utf8(cmd.output().await?.stdout)?;
arch.replace('\'', "").trim().to_string()
};
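Because the `--format '{{.Architecture}}'` template is passed with literal single quotes, docker echoes those quotes (plus a trailing newline) back on stdout; the normalization above strips both. A small sketch of that cleanup:

```rust
// Mirrors the post-processing of `docker image inspect --format` output:
// drop the stray single quotes and surrounding whitespace.
fn normalize_inspect_output(raw: &str) -> String {
    raw.replace('\'', "").trim().to_string()
}
```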
let mut cmd = tokio::process::Command::new("docker");
cmd.arg("run")
.arg("--network=start9")
.arg(format!("--add-host=embassy:{}", Ipv4Addr::from(HOST_IP)))
.arg("--mount")
.arg(format!("type=bind,src={BIND_LOCATION},dst=/start9"))
.arg("--name")
.arg(&container_name)
.arg(format!("--hostname={}", &container_name))
.arg("--entrypoint")
.arg(format!("{INIT_EXEC}.{image_architecture}"))
.arg("-i")
.arg("--rm");
for (volume_id, dst) in &docker.mounts {
let volume = if let Some(v) = volumes.get(volume_id) {
v
} else {
continue;
};
let src = volume.path_for(&ctx.datadir, pkg_id, pkg_version, volume_id);
if let Err(_e) = tokio::fs::metadata(&src).await {
tokio::fs::create_dir_all(&src).await?;
}
cmd.arg("--mount").arg(format!(
"type=bind,src={},dst={}{}",
src.display(),
dst.display(),
if volume.readonly() { ",readonly" } else { "" }
));
}
if let Some(shm_size_mb) = docker.shm_size_mb {
cmd.arg("--shm-size").arg(format!("{}m", shm_size_mb));
}
cmd.arg("--log-driver=journald");
if docker.system {
cmd.arg(docker.image.for_package(SYSTEM_PACKAGE_ID, None));
} else {
cmd.arg(docker.image.for_package(pkg_id, Some(pkg_version)));
}
cmd.stdout(std::process::Stdio::piped());
cmd.stderr(std::process::Stdio::piped());
cmd.stdin(std::process::Stdio::piped());
Ok(cmd)
}
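The volume loop above builds one `--mount` argument per volume, appending `,readonly` as a third comma-separated field only for read-only volumes. The string format can be sketched in isolation:

```rust
// Sketch of the bind-mount argument format used when assembling the
// `docker run` command line; `readonly` is an optional trailing field.
fn mount_arg(src: &str, dst: &str, readonly: bool) -> String {
    format!(
        "type=bind,src={},dst={}{}",
        src,
        dst,
        if readonly { ",readonly" } else { "" }
    )
}
```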
async fn cleanup_previous_container(
ctx: &RpcContext,
container_name: &str,
) -> Result<(), Error> {
match ctx
.docker
.remove_container(
container_name,
Some(RemoveContainerOptions {
v: false,
force: true,
link: false,
}),
)
.await
{
Ok(())
| Err(bollard::errors::Error::DockerResponseServerError {
status_code: 404, // NOT FOUND
..
}) => Ok(()),
Err(e) => Err(e)?,
}
}
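`cleanup_previous_container` is deliberately idempotent: a 404 from the docker daemon means there was nothing to remove, so it is folded into the success path. A sketch of that error-mapping (hypothetical error enum standing in for bollard's):

```rust
// "Not found" is as good as "removed": startup should never fail just
// because there was no stale container to clean up.
#[derive(Debug, PartialEq)]
enum RemoveError {
    NotFound,    // server answered 404
    Server(u16), // any other HTTP status
}

fn interpret_cleanup(result: Result<(), RemoveError>) -> Result<(), RemoveError> {
    match result {
        Ok(()) | Err(RemoveError::NotFound) => Ok(()),
        Err(e) => Err(e),
    }
}
```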
fn spawn_input_handle<S>(
handle: &mut Child,
input: S,
) -> Result<NonDetachingJoinHandle<()>, Error>
where
S: Stream<Item = InputJsonRpc> + Send + 'static,
{
use tokio::io::AsyncWriteExt;
let mut stdin = handle
.stdin
.take()
.ok_or_else(|| eyre!("Can't takeout stdin"))
.with_kind(crate::ErrorKind::Docker)?;
let handle = NonDetachingJoinHandle::from(tokio::spawn(async move {
let input = input;
tokio::pin!(input);
while let Some(input) = input.next().await {
let input = match serde_json::to_string(&input) {
Ok(a) => a,
Err(e) => {
tracing::debug!("{:?}", e);
tracing::error!("Docker Input Serialization issue");
continue;
}
};
if let Err(e) = stdin.write_all(format!("{input}\n").as_bytes()).await {
tracing::debug!("{:?}", e);
tracing::error!("Docker Input issue");
return;
}
}
}));
Ok(handle)
}
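The input handle frames each message as one newline-terminated line on the child's stdin, so the container-init binary can split frames on `\n`. The framing itself (ignoring the JSON serialization) can be sketched as:

```rust
// Newline-delimited framing: each message becomes one line, matching the
// `write_all(format!("{input}\n"))` pattern in spawn_input_handle.
fn frame_messages(msgs: &[&str]) -> Vec<u8> {
    let mut out = Vec::new();
    for m in msgs {
        out.extend_from_slice(m.as_bytes());
        out.push(b'\n');
    }
    out
}
```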
fn spawn_error_handle(handle: &mut Child) -> Result<NonDetachingJoinHandle<()>, Error> {
let id = handle.id();
let mut output = tokio::io::BufReader::new(
handle
.stderr
.take()
.ok_or_else(|| eyre!("Can't takeout stderr"))
.with_kind(crate::ErrorKind::Docker)?,
)
.lines();
Ok(NonDetachingJoinHandle::from(tokio::spawn(async move {
while let Ok(Some(line)) = output.next_line().await {
tracing::debug!("{:?}", id);
tracing::error!("Error from long running container");
tracing::error!("{}", line);
}
})))
}
fn spawn_output_handle(
handle: &mut Child,
) -> Result<(UnboundedReceiver<OutputJsonRpc>, NonDetachingJoinHandle<()>), Error> {
let mut output = tokio::io::BufReader::new(
handle
.stdout
.take()
.ok_or_else(|| eyre!("Can't takeout stdout for long running"))
.with_kind(crate::ErrorKind::Docker)?,
)
.lines();
let (sender, receiver) = tokio::sync::mpsc::unbounded_channel::<OutputJsonRpc>();
Ok((
receiver,
NonDetachingJoinHandle::from(tokio::spawn(async move {
loop {
let next = output.next_line().await;
let next = match next {
Ok(Some(a)) => a,
Ok(None) => {
tracing::error!("The docker stdout pipe has closed");
break;
}
Err(e) => {
tracing::debug!("{:?}", e);
tracing::error!("Error reading output from docker, stopping");
break;
}
};
let next = match serde_json::from_str(&next) {
Ok(a) => a,
Err(_e) => {
tracing::trace!("Could not decode output from long running binary");
continue;
}
};
if let Err(e) = sender.send(next) {
tracing::debug!("{:?}", e);
tracing::error!("Could no longer send output");
break;
}
}
})),
))
}
}
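The output pump in `spawn_output_handle` reads the child's stdout line by line, skips lines that fail to decode, and forwards the rest over an unbounded channel. A std-only sketch of that loop, using an in-memory reader and numeric parsing as a stand-in for JSON-RPC decoding:

```rust
use std::io::{BufRead, BufReader, Cursor};
use std::sync::mpsc;

// Read line-delimited messages, skip undecodable lines, forward the rest.
fn pump_lines(reader: impl std::io::Read) -> Vec<u64> {
    let (tx, rx) = mpsc::channel();
    for line in BufReader::new(reader).lines() {
        let line = match line {
            Ok(l) => l,
            Err(_) => break, // read error: stop pumping
        };
        match line.trim().parse::<u64>() {
            Ok(msg) => {
                if tx.send(msg).is_err() {
                    break; // receiver hung up
                }
            }
            Err(_) => continue, // undecodable line: log-and-skip in the real code
        }
    }
    drop(tx);
    rx.into_iter().collect()
}
```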
async fn buf_reader_to_lines(
reader: impl AsyncBufRead + Unpin,
limit: impl Into<Option<usize>>,
@@ -756,6 +1051,7 @@ async fn max_by_lines(
}
MaxByLines::Done(answer)
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -1,10 +1,12 @@
use std::path::{Path, PathBuf};
use std::sync::Arc;
use std::time::Duration;
pub use js_engine::JsError;
use js_engine::{JsExecutionEnvironment, PathForVolumeId};
use models::VolumeId;
use serde::{Deserialize, Serialize};
use models::{ExecCommand, TermCommand};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use tracing::instrument;
use super::ProcedureName;
@@ -52,8 +54,8 @@ impl JsProcedure {
Ok(())
}
#[instrument(skip(directory, input))]
pub async fn execute<I: Serialize, O: for<'de> Deserialize<'de>>(
#[instrument(skip(directory, input, exec_command, term_command))]
pub async fn execute<I: Serialize, O: DeserializeOwned>(
&self,
directory: &PathBuf,
pkg_id: &PackageId,
@@ -62,6 +64,8 @@ impl JsProcedure {
volumes: &Volumes,
input: Option<I>,
timeout: Option<Duration>,
exec_command: ExecCommand,
term_command: TermCommand,
) -> Result<Result<O, (i32, String)>, Error> {
Ok(async move {
let running_action = JsExecutionEnvironment::load_from_package(
@@ -69,6 +73,8 @@ impl JsProcedure {
pkg_id,
pkg_version,
Box::new(volumes.clone()),
exec_command,
term_command,
)
.await?
.run_action(name, input, self.args.clone());
@@ -86,7 +92,7 @@ impl JsProcedure {
}
#[instrument(skip(ctx, input))]
pub async fn sandboxed<I: Serialize, O: for<'de> Deserialize<'de>>(
pub async fn sandboxed<I: Serialize, O: DeserializeOwned>(
&self,
ctx: &RpcContext,
pkg_id: &PackageId,
@@ -102,6 +108,12 @@ impl JsProcedure {
pkg_id,
pkg_version,
Box::new(volumes.clone()),
Arc::new(|_, _, _, _| {
Box::pin(async { Err("Can't run commands in sandbox mode".to_string()) })
}),
Arc::new(|_| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
)
.await?
.read_only_effects()
@@ -120,7 +132,7 @@ impl JsProcedure {
}
}
fn unwrap_known_error<O: for<'de> Deserialize<'de>>(
fn unwrap_known_error<O: DeserializeOwned>(
error_value: ErrorValue,
) -> Result<O, (JsError, String)> {
match error_value {
@@ -181,6 +193,10 @@ async fn js_action_execute() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap()
@@ -236,6 +252,10 @@ async fn js_action_execute_error() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap();
@@ -280,11 +300,70 @@ async fn js_action_fetch() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap()
.unwrap();
}
#[tokio::test]
async fn js_test_slow() {
let js_action = JsProcedure { args: vec![] };
let path: PathBuf = "test/js_action_execute/"
.parse::<PathBuf>()
.unwrap()
.canonicalize()
.unwrap();
let package_id = "test-package".parse().unwrap();
let package_version: Version = "0.3.0.3".parse().unwrap();
let name = ProcedureName::Action("slow".parse().unwrap());
let volumes: Volumes = serde_json::from_value(serde_json::json!({
"main": {
"type": "data"
},
"compat": {
"type": "assets"
},
"filebrowser" :{
"package-id": "filebrowser",
"path": "data",
"readonly": true,
"type": "pointer",
"volume-id": "main",
}
}))
.unwrap();
let input: Option<serde_json::Value> = None;
let timeout = Some(Duration::from_secs(10));
tracing::debug!("testing start");
tokio::select! {
a = js_action
.execute::<serde_json::Value, serde_json::Value>(
&path,
&package_id,
&package_version,
name,
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
=> {a
.unwrap()
.unwrap();},
_ = tokio::time::sleep(Duration::from_secs(1)) => ()
}
tracing::debug!("testing end");
tokio::time::sleep(Duration::from_secs(2)).await;
tracing::debug!("Done");
}
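`js_test_slow` races the slow action against a one-second sleep so the test bounds how long it waits. The same deadline pattern can be sketched with std threads and `recv_timeout` (assumption: a plain thread stands in for the async task):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `work` on another thread and wait at most `deadline` for its result,
// analogous to tokio::select! between the action and a sleep.
fn run_with_deadline<T: Send + 'static>(
    work: impl FnOnce() -> T + Send + 'static,
    deadline: Duration,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(work());
    });
    rx.recv_timeout(deadline).ok()
}
```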
#[tokio::test]
async fn js_action_var_arg() {
let js_action = JsProcedure {
@@ -325,6 +404,10 @@ async fn js_action_var_arg() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap()
@@ -369,6 +452,10 @@ async fn js_action_test_rename() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap()
@@ -413,6 +500,10 @@ async fn js_action_test_deep_dir() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap()
@@ -456,6 +547,10 @@ async fn js_action_test_deep_dir_escape() {
&volumes,
input,
timeout,
Arc::new(|_, _, _, _| {
Box::pin(async move { Err("Can't run commands in test".to_string()) })
}),
Arc::new(|_| Box::pin(async move { Err("Can't run commands in test".to_string()) })),
)
.await
.unwrap()

View File

@@ -1,18 +1,18 @@
use std::collections::BTreeSet;
use std::time::Duration;
use color_eyre::eyre::{bail, eyre};
use color_eyre::eyre::eyre;
use patch_db::HasModel;
use serde::{Deserialize, Serialize};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use tracing::instrument;
use self::docker::{DockerContainer, DockerInject, DockerProcedure};
use self::docker::{DockerContainers, DockerProcedure};
use crate::context::RpcContext;
use crate::id::ImageId;
use crate::s9pk::manifest::PackageId;
use crate::util::Version;
use crate::volume::Volumes;
use crate::Error;
use crate::{Error, ErrorKind};
pub mod docker;
#[cfg(feature = "js_engine")]
@@ -26,7 +26,6 @@ pub use models::ProcedureName;
#[serde(tag = "type")]
pub enum PackageProcedure {
Docker(DockerProcedure),
DockerInject(DockerInject),
#[cfg(feature = "js_engine")]
Script(js_scripts::JsProcedure),
@@ -43,7 +42,7 @@ impl PackageProcedure {
#[instrument]
pub fn validate(
&self,
container: &Option<DockerContainer>,
container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
@@ -53,25 +52,15 @@ impl PackageProcedure {
PackageProcedure::Docker(action) => {
action.validate(eos_version, volumes, image_ids, expected_io)
}
PackageProcedure::DockerInject(injectable) => {
let container = match container {
None => bail!("For the docker injectable procedure, a container must exist in the config"),
Some(container) => container,
} ;
let docker_procedure: DockerProcedure = (container, injectable).into();
docker_procedure.validate(eos_version, volumes, image_ids, expected_io)
}
#[cfg(feature = "js_engine")]
PackageProcedure::Script(action) => action.validate(volumes),
}
}
#[instrument(skip(ctx, input, container))]
pub async fn execute<I: Serialize, O: for<'de> Deserialize<'de>>(
#[instrument(skip(ctx, input))]
pub async fn execute<I: Serialize, O: DeserializeOwned + 'static>(
&self,
ctx: &RpcContext,
container: &Option<DockerContainer>,
pkg_id: &PackageId,
pkg_version: &Version,
name: ProcedureName,
@@ -86,18 +75,36 @@ impl PackageProcedure {
.execute(ctx, pkg_id, pkg_version, name, volumes, input, timeout)
.await
}
PackageProcedure::DockerInject(injectable) => {
let container = match container {
None => return Err(Error::new(eyre!("For the docker injectable procedure, a container must exist in the config"), crate::ErrorKind::Action)),
Some(container) => container,
} ;
let docker_procedure: DockerProcedure = (container, injectable).into();
docker_procedure
.inject(ctx, pkg_id, pkg_version, name, volumes, input, timeout)
.await
}
#[cfg(feature = "js_engine")]
PackageProcedure::Script(procedure) => {
let exec_command = match ctx
.managers
.get(&(pkg_id.clone(), pkg_version.clone()))
.await
{
None => {
return Err(Error::new(
eyre!("No manager found for {}", pkg_id),
ErrorKind::NotFound,
))
}
Some(x) => x,
}
.exec_command();
let term_command = match ctx
.managers
.get(&(pkg_id.clone(), pkg_version.clone()))
.await
{
None => {
return Err(Error::new(
eyre!("No manager found for {}", pkg_id),
ErrorKind::NotFound,
))
}
Some(x) => x,
}
.term_command();
procedure
.execute(
&ctx.datadir,
@@ -107,17 +114,18 @@ impl PackageProcedure {
volumes,
input,
timeout,
exec_command,
term_command,
)
.await
}
}
}
#[instrument(skip(ctx, input, container))]
pub async fn inject<I: Serialize, O: for<'de> Deserialize<'de>>(
#[instrument(skip(ctx, input))]
pub async fn inject<I: Serialize, O: DeserializeOwned + 'static>(
&self,
ctx: &RpcContext,
container: &Option<DockerContainer>,
pkg_id: &PackageId,
pkg_version: &Version,
name: ProcedureName,
@@ -125,25 +133,42 @@ impl PackageProcedure {
input: Option<I>,
timeout: Option<Duration>,
) -> Result<Result<O, (i32, String)>, Error> {
tracing::trace!("Procedure inject {} {} - {:?}", self, pkg_id, name);
match self {
PackageProcedure::Docker(procedure) => {
procedure
.inject(ctx, pkg_id, pkg_version, name, volumes, input, timeout)
.await
}
PackageProcedure::DockerInject(injectable) => {
let container = match container {
None => return Err(Error::new(eyre!("For the docker injectable procedure, a container must exist in the config"), crate::ErrorKind::Action)),
Some(container) => container,
} ;
let docker_procedure: DockerProcedure = (container, injectable).into();
docker_procedure
.inject(ctx, pkg_id, pkg_version, name, volumes, input, timeout)
.await
}
#[cfg(feature = "js_engine")]
PackageProcedure::Script(procedure) => {
let exec_command = match ctx
.managers
.get(&(pkg_id.clone(), pkg_version.clone()))
.await
{
None => {
return Err(Error::new(
eyre!("No manager found for {}", pkg_id),
ErrorKind::NotFound,
))
}
Some(x) => x,
}
.exec_command();
let term_command = match ctx
.managers
.get(&(pkg_id.clone(), pkg_version.clone()))
.await
{
None => {
return Err(Error::new(
eyre!("No manager found for {}", pkg_id),
ErrorKind::NotFound,
))
}
Some(x) => x,
}
.term_command();
procedure
.execute(
&ctx.datadir,
@@ -153,15 +178,17 @@ impl PackageProcedure {
volumes,
input,
timeout,
exec_command,
term_command,
)
.await
}
}
}
#[instrument(skip(ctx, input))]
pub async fn sandboxed<I: Serialize, O: for<'de> Deserialize<'de>>(
pub async fn sandboxed<I: Serialize, O: DeserializeOwned>(
&self,
container: &Option<DockerContainer>,
container: &Option<DockerContainers>,
ctx: &RpcContext,
pkg_id: &PackageId,
pkg_version: &Version,
@@ -177,16 +204,6 @@ impl PackageProcedure {
.sandboxed(ctx, pkg_id, pkg_version, volumes, input, timeout)
.await
}
PackageProcedure::DockerInject(injectable) => {
let container = match container {
None => return Err(Error::new(eyre!("For the docker injectable procedure, a container must exist in the config"), crate::ErrorKind::Action)),
Some(container) => container,
} ;
let docker_procedure: DockerProcedure = (container, injectable).into();
docker_procedure
.sandboxed(ctx, pkg_id, pkg_version, volumes, input, timeout)
.await
}
#[cfg(feature = "js_engine")]
PackageProcedure::Script(procedure) => {
procedure
@@ -200,7 +217,6 @@ impl PackageProcedure {
impl std::fmt::Display for PackageProcedure {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
PackageProcedure::DockerInject(_) => write!(f, "Docker Injectable")?,
PackageProcedure::Docker(_) => write!(f, "Docker")?,
#[cfg(feature = "js_engine")]
PackageProcedure::Script(_) => write!(f, "JS")?,
@@ -208,6 +224,7 @@ impl std::fmt::Display for PackageProcedure {
Ok(())
}
}
#[derive(Debug)]
pub struct NoOutput;
impl<'de> Deserialize<'de> for NoOutput {

View File

@@ -21,6 +21,7 @@ pub async fn properties(#[context] ctx: RpcContext, #[arg] id: PackageId) -> Res
#[instrument(skip(ctx))]
pub async fn fetch_properties(ctx: RpcContext, id: PackageId) -> Result<Value, Error> {
let mut db = ctx.db.handle();
let manifest: Manifest = crate::db::DatabaseModel::new()
.package_data()
.idx_model(&id)
@@ -34,7 +35,6 @@ pub async fn fetch_properties(ctx: RpcContext, id: PackageId) -> Result<Value, E
props
.execute::<(), Value>(
&ctx,
&manifest.container,
&manifest.id,
&manifest.version,
ProcedureName::Properties,

View File

@@ -12,7 +12,7 @@ use crate::config::action::ConfigActions;
use crate::dependencies::Dependencies;
use crate::migration::Migrations;
use crate::net::interface::Interfaces;
use crate::procedure::docker::DockerContainer;
use crate::procedure::docker::DockerContainers;
use crate::procedure::PackageProcedure;
use crate::status::health_check::HealthChecks;
use crate::util::Version;
@@ -72,7 +72,7 @@ pub struct Manifest {
#[model]
pub dependencies: Dependencies,
#[model]
pub container: Option<DockerContainer>,
pub containers: Option<DockerContainers>,
}
impl Manifest {

View File

@@ -176,7 +176,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
}
let image_tags = self.image_tags().await?;
let man = self.manifest().await?;
let container = &man.container;
let containers = &man.containers;
let validated_image_ids = image_tags
.into_iter()
.map(|i| i.validate(&man.id, &man.version).map(|_| i.image_id))
@@ -187,7 +187,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
.iter()
.map(|(_, action)| {
action.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
@@ -195,21 +195,21 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
})
.collect::<Result<(), Error>>()?;
man.backup.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
)?;
if let Some(cfg) = &man.config {
cfg.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
)?;
}
man.health_checks.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
@@ -217,7 +217,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
man.interfaces.validate()?;
man.main
.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
@@ -225,7 +225,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
)
.with_ctx(|_| (crate::ErrorKind::ValidateS9pk, "Main"))?;
man.migrations.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,
@@ -233,7 +233,7 @@ impl<R: AsyncRead + AsyncSeek + Unpin + Send + Sync> S9pkReader<R> {
if let Some(props) = &man.properties {
props
.validate(
container,
containers,
&man.eos_version,
&man.volumes,
&validated_image_ids,

View File

@@ -7,7 +7,7 @@ use tracing::instrument;
use crate::context::RpcContext;
use crate::id::ImageId;
use crate::procedure::docker::DockerContainer;
use crate::procedure::docker::DockerContainers;
use crate::procedure::{NoOutput, PackageProcedure, ProcedureName};
use crate::s9pk::manifest::PackageId;
use crate::util::serde::Duration;
@@ -21,7 +21,7 @@ impl HealthChecks {
#[instrument]
pub fn validate(
&self,
container: &Option<DockerContainer>,
container: &Option<DockerContainers>,
eos_version: &Version,
volumes: &Volumes,
image_ids: &BTreeSet<ImageId>,
@@ -42,7 +42,7 @@ impl HealthChecks {
pub async fn check_all(
&self,
ctx: &RpcContext,
container: &Option<DockerContainer>,
container: &Option<DockerContainers>,
started: DateTime<Utc>,
pkg_id: &PackageId,
pkg_version: &Version,
@@ -75,7 +75,7 @@ impl HealthCheck {
pub async fn check(
&self,
ctx: &RpcContext,
container: &Option<DockerContainer>,
container: &Option<DockerContainers>,
id: &HealthCheckId,
started: DateTime<Utc>,
pkg_id: &PackageId,
@@ -86,7 +86,6 @@ impl HealthCheck {
.implementation
.execute(
ctx,
container,
pkg_id,
pkg_version,
ProcedureName::Health(id.clone()),

View File

@@ -7,7 +7,7 @@ use std::pin::Pin;
use std::task::{Context, Poll};
use color_eyre::eyre::eyre;
use futures::{FutureExt, Stream};
use futures::Stream;
use http::header::{ACCEPT_RANGES, CONTENT_LENGTH, RANGE};
use hyper::body::Bytes;
use pin_project::pin_project;
@@ -26,29 +26,29 @@ pub struct HttpReader {
read_in_progress: ReadInProgress,
}
type InProgress = Pin<
Box<
dyn Future<
Output = Result<
Pin<
Box<
dyn Stream<Item = Result<Bytes, reqwest::Error>>
+ Send
+ Sync
+ 'static,
>,
>,
Error,
>,
> + Send
+ Sync
+ 'static,
>,
>;
enum ReadInProgress {
None,
InProgress(
Pin<
Box<
dyn Future<
Output = Result<
Pin<
Box<
dyn Stream<Item = Result<Bytes, reqwest::Error>>
+ Send
+ Sync
+ 'static,
>,
>,
Error,
>,
> + Send
+ Sync
+ 'static,
>,
>,
),
InProgress(InProgress),
Complete(Pin<Box<dyn Stream<Item = Result<Bytes, reqwest::Error>> + Send + Sync + 'static>>),
}
impl ReadInProgress {

View File

@@ -1,11 +1,6 @@
use std::path::Path;
use emver::VersionRange;
use tokio::process::Command;
use super::*;
use crate::disk::BOOT_RW_PATH;
use crate::util::Invoke;
const V0_3_0_1: emver::Version = emver::Version::new(0, 3, 0, 1);

View File

@@ -17,7 +17,7 @@ impl VersionT for Version {
fn compat(&self) -> &'static emver::VersionRange {
&*V0_3_0_COMPAT
}
async fn up<Db: DbHandle>(&self, db: &mut Db) -> Result<(), Error> {
async fn up<Db: DbHandle>(&self, _db: &mut Db) -> Result<(), Error> {
Ok(())
}
async fn down<Db: DbHandle>(&self, _db: &mut Db) -> Result<(), Error> {