mirror of
https://github.com/Start9Labs/start-os.git
synced 2026-03-26 02:11:53 +00:00
Feat/js long running (#1879)
* wip: combining the streams
* chore: Testing locally
* chore: Fix some lint
* Feat/long running (#1676)
  * feat: Start the long running container
  * feat: Long running docker: running, stopping, and uninstalling
  * feat: Just make the folders that we would like to mount
  * fix: Uninstall not working
  * chore: remove some logging
  * feat: Smarter cleanup
  * feat: Wait for start
  * wip: Need to kill
  * chore: Remove the bad tracing
  * feat: Stopping the long running processes without killing the long running
* Minor Feat: Change the Manifest to have a new type (#1736)
* Add build-essential to README.md (#1716)
* write image to sparse-aware archive format (#1709)
* fix: Add modification to the max_user_watches (#1695); chore: Move to initialization
* [Feat] follow logs (#1714): tail logs; add cli; add FE; abstract http to shared; batch new logs; file download for logs; fix modal error when no config
* Update README.md (#1728)
* fix build for patch-db client for consistency (#1722)
* fix cli install (#1720)
* highlight instructions if not viewed (#1731)
* wip: [ ] Fix the build (dependencies:634 map for option)
* fix: Cargo build
* fix: Long running wasn't starting
* fix: uninstall works
* chore: Fix a dbg!
* chore: Make the commands of the docker-inject do inject instead of exec
* chore: Fix compile mistake
* chore: Change to use simpler
* wip: making the manager create
* wip: Working on trying to make the long running docker container command
* feat: Use the long running feature in the manager
* remove recovered services and drop reordering feature (#1829)
* wip: Need to get the initial docker command running?
* chore: Add in the new procedure for the docker
* feat: Get the system to finally run long
* wip: Added the command inserter to the docker persistence
* chore: Convert the migration to use receipt (#1842)
* feat: remove ionic storage (#1839): grayscale when disconnected; rename local storage service for clarity; remove storage from package lock; update patchDB
* feat: Move the run_command into the js
* chore: Change the error catching for the long running to try all
* Feat/community marketplace (#1790): add community marketplace; expect ui/marketplace to be undefined; possible undefined from getPackage; fix marketplace pages; rework marketplace infrastructure; fix bugs
* WIP: Fix the build, needed to move around creation of exec
* wip: Working on solving why there is a missing end
* fix: make `shared` module independent of `config.js` (#1870)
* feat: Add in the kill and timeout
* feat: Get the run to actually work
* chore: Add when/why/where comments
* feat: Convert inject main to use exec main
* fix: Ability to stop services
* wip: long running js main
* feat: Kill for the main
* fix: Fix the build for x86
* wip: Working on changes
* wip: Working on trying to kill js
* fix: Testing for slow
* feat: Test that the new manifest works
* chore: Try and fix the build
* chore: Fix the long input dies and never restarts
* build improvements; no workdir
* fix: Architecture for long running
* chore: Fix and remove the docker inject
* chore: Undo the changes to the kiosk mode
* fix: Remove the it from the prod build
* fix: Start issue
* fix: The compat build
* chore: Add in the conditional compilation again for the missing impl
* chore: Change to aux; chore: Remove the aux for now
* chore: Add some documentation to docker container

Co-authored-by: Chris Guida <chrisguida@users.noreply.github.com>
Co-authored-by: Aiden McClelland <3732071+dr-bonez@users.noreply.github.com>
Co-authored-by: Aiden McClelland <me@drbonez.dev>
Co-authored-by: Matt Hill <matthewonthemoon@gmail.com>
Co-authored-by: BluJ <mogulslayer@gmail.com>
Co-authored-by: Lucy C <12953208+elvece@users.noreply.github.com>
Co-authored-by: Matt Hill <MattDHill@users.noreply.github.com>
Co-authored-by: Alex Inkin <alexander@inkin.ru>
This commit is contained in:
108 libs/Cargo.lock (generated)
@@ -47,6 +47,15 @@ dependencies = [
 "memchr",
]

[[package]]
name = "ansi_term"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d52a9bb7ec0cf484c551830a7ce27bd20d67eac647e1befb56b0be4ee39a55d2"
dependencies = [
 "winapi",
]

[[package]]
name = "anyhow"
version = "1.0.58"
@@ -73,6 +82,27 @@ dependencies = [
 "syn",
]

[[package]]
name = "async-stream"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dad5c83079eae9969be7fadefe640a1c566901f05ff91ab221de4b6f68d9507e"
dependencies = [
 "async-stream-impl",
 "futures-core",
]

[[package]]
name = "async-stream-impl"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "10f203db73a71dfa2fb6dd22763990fa26f3d2625a6da2da900d23b87d26be27"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "async-trait"
version = "0.1.56"
@@ -414,6 +444,23 @@ version = "1.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f107b87b6afc2a64fd13cac55fe06d6c8859f12d4b14cbcdd2c67d0976781be"

[[package]]
name = "embassy_container_init"
version = "0.1.0"
dependencies = [
 "async-stream",
 "color-eyre",
 "futures",
 "serde",
 "serde_json",
 "tokio",
 "tokio-stream",
 "tracing",
 "tracing-error 0.2.0",
 "tracing-futures",
 "tracing-subscriber 0.3.11",
]

[[package]]
name = "emver"
version = "0.1.6"
@@ -899,10 +946,12 @@ dependencies = [
name = "js_engine"
version = "0.1.0"
dependencies = [
 "async-trait",
 "dashmap",
 "deno_ast",
 "deno_core",
 "dprint-swc-ext",
 "embassy_container_init",
 "helpers",
 "models",
 "reqwest",
@@ -1071,6 +1120,15 @@ dependencies = [
 "cfg-if",
]

[[package]]
name = "matchers"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558"
dependencies = [
 "regex-automata",
]

[[package]]
name = "matches"
version = "0.1.9"
@@ -1123,11 +1181,13 @@ dependencies = [
name = "models"
version = "0.1.0"
dependencies = [
 "embassy_container_init",
 "emver",
 "patch-db",
 "rand",
 "serde",
 "thiserror",
 "tokio",
]

[[package]]
@@ -1573,6 +1633,15 @@ dependencies = [
 "regex-syntax",
]

[[package]]
name = "regex-automata"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132"
dependencies = [
 "regex-syntax",
]

[[package]]
name = "regex-syntax"
version = "0.6.27"
@@ -2421,6 +2490,17 @@ dependencies = [
 "tokio",
]

[[package]]
name = "tokio-stream"
version = "0.1.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d660770404473ccd7bc9f8b28494a811bc18542b915c0855c51e8f419d5223ce"
dependencies = [
 "futures-core",
 "pin-project-lite",
 "tokio",
]

[[package]]
name = "tokio-util"
version = "0.7.3"
@@ -2503,6 +2583,27 @@ dependencies = [
 "tracing-subscriber 0.3.11",
]

[[package]]
name = "tracing-futures"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97d095ae15e245a057c8e8451bab9b3ee1e1f68e9ba2b4fbc18d0ac5237835f2"
dependencies = [
 "pin-project",
 "tracing",
]

[[package]]
name = "tracing-log"
version = "0.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ddad33d2d10b1ed7eb9d1f518a5674713876e97e5bb9b7345a7984fbb4f922"
dependencies = [
 "lazy_static",
 "log",
 "tracing-core",
]

[[package]]
name = "tracing-subscriber"
version = "0.2.25"
@@ -2520,9 +2621,16 @@ version = "0.3.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4bc28f93baff38037f64e6f43d34cfa1605f27a49c34e8a04c5e78b0babf2596"
dependencies = [
 "ansi_term",
 "lazy_static",
 "matchers",
 "regex",
 "sharded-slab",
 "smallvec",
 "thread_local",
 "tracing",
 "tracing-core",
 "tracing-log",
]

[[package]]

@@ -4,5 +4,6 @@ members = [
     "snapshot-creator",
     "models",
     "js_engine",
-    "helpers"
+    "helpers",
+    "embassy-container-init",
]

@@ -19,7 +19,7 @@ docker run --rm $USE_TTY -v "$HOME/.cargo/registry":/root/.cargo/registry -v "$(
cd -

echo "Creating Arm v8 Snapshot"
-docker run --platform linux/arm64/v8 --mount type=bind,src=$(pwd),dst=/mnt arm64v8/ubuntu:20.04 /bin/sh -c "cd /mnt && /mnt/target/aarch64-unknown-linux-gnu/release/snapshot-creator"
+docker run $USE_TTY --platform linux/arm64/v8 --mount type=bind,src=$(pwd),dst=/mnt arm64v8/ubuntu:20.04 /bin/sh -c "cd /mnt && /mnt/target/aarch64-unknown-linux-gnu/release/snapshot-creator"
sudo chown -R $USER target
sudo chown -R $USER ~/.cargo
sudo chown $USER JS_SNAPSHOT.bin
30 libs/embassy-container-init/Cargo.toml (new file)
@@ -0,0 +1,30 @@
[package]
name = "embassy_container_init"
version = "0.1.0"
edition = "2021"

[features]
dev = []
metal = []
sound = []
unstable = []

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
async-stream = "0.3.*"
color-eyre = "0.6.*"
futures = "0.3.*"
serde = { version = "1.*", features = ["derive", "rc"] }
serde_json = "1.*"
tokio = { version = "1.*", features = ["full"] }
tokio-stream = { version = "0.1.11" }
tracing = "0.1.*"
tracing-error = "0.2.*"
tracing-futures = "0.2.*"
tracing-subscriber = { version = "0.3.*", features = ["env-filter"] }

[profile.test]
opt-level = 3

[profile.dev.package.backtrace]
opt-level = 3
117 libs/embassy-container-init/src/lib.rs (new file)
@@ -0,0 +1,117 @@
use serde::{Deserialize, Serialize};
use tracing::instrument;

/// The inputs that the executable is expecting
pub type InputJsonRpc = JsonRpc<Input>;
/// The outputs that the executable is expected to emit
pub type OutputJsonRpc = JsonRpc<Output>;

/// Based on the jsonrpc spec, but we are limiting the rpc to a subset
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq, PartialOrd, Ord)]
#[serde(untagged)]
pub enum RpcId {
    UInt(u32),
}

/// We use JSON-RPC as the format shared between the executable's stdin and stdout.
/// Note: the id is required; it ensures every pair of messages is tracked.
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, Eq)]
pub struct JsonRpc<T> {
    id: RpcId,
    #[serde(flatten)]
    pub version_rpc: VersionRpc<T>,
}

#[derive(Debug, Clone, Deserialize, Serialize, PartialEq, Eq)]
#[serde(tag = "jsonrpc", rename_all = "camelCase")]
pub enum VersionRpc<T> {
    #[serde(rename = "2.0")]
    Two(T),
}

impl<T> JsonRpc<T>
where
    T: Serialize + for<'de> serde::Deserialize<'de> + std::fmt::Debug,
{
    /// Simplifies creating this nested struct; mostly used for building input for the executable's stdin
    pub fn new(id: RpcId, body: T) -> Self {
        JsonRpc {
            id,
            version_rpc: VersionRpc::Two(body),
        }
    }
    /// Use this to get the data out of the destructured output
    pub fn into_pair(self) -> (RpcId, T) {
        let Self { id, version_rpc } = self;
        let VersionRpc::Two(body) = version_rpc;
        (id, body)
    }
    /// Used during execution.
    #[instrument]
    pub fn maybe_serialize(&self) -> Option<String> {
        match serde_json::to_string(self) {
            Ok(x) => Some(x),
            Err(e) => {
                tracing::warn!("Could not stringify and skipping");
                tracing::debug!("{:?}", e);
                None
            }
        }
    }
    /// Used during execution
    #[instrument]
    pub fn maybe_parse(s: &str) -> Option<Self> {
        match serde_json::from_str::<Self>(s) {
            Ok(a) => Some(a),
            Err(e) => {
                tracing::warn!("Could not parse and skipping: {}", s);
                tracing::debug!("{:?}", e);
                None
            }
        }
    }
}

/// Outputs embedded in the JSON-RPC output of the executable.
#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
pub enum Output {
    /// The line-buffered output of the command
    Line(String),
    /// Some kind of error with the program
    Error(String),
    /// Indication that the command is done
    Done(Option<i32>),
}

#[derive(Debug, Serialize, Deserialize, Clone, PartialEq, Eq)]
#[serde(tag = "method", content = "params", rename_all = "camelCase")]
pub enum Input {
    /// Create a new command, with the args
    Command { command: String, args: Vec<String> },
    /// Send SIGKILL to the process
    Kill(),
    /// Send SIGTERM to the process
    Term(),
}

#[test]
fn example_echo_line() {
    let input = r#"{"id":0,"jsonrpc":"2.0","method":"command","params":{"command":"echo","args":["world I am here"]}}"#;
    let new_input = JsonRpc::<Input>::maybe_parse(input);
    assert!(new_input.is_some());
    assert_eq!(input, &serde_json::to_string(&new_input.unwrap()).unwrap());
}

#[test]
fn example_input_line() {
    let output = JsonRpc::new(RpcId::UInt(0), Output::Line("world I am here".to_string()));
    let output_str = output.maybe_serialize();
    assert!(output_str.is_some());
    let output_str = output_str.unwrap();
    assert_eq!(
        &output_str,
        r#"{"id":0,"jsonrpc":"2.0","method":"line","params":"world I am here"}"#
    );
    assert_eq!(output, serde_json::from_str(&output_str).unwrap());
}
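The wire format defined in lib.rs above is line-delimited JSON-RPC 2.0 over stdin/stdout, and the two tests pin down the exact strings. As a minimal host-side sketch (plain Rust, no serde; `request_line` is a hypothetical helper, not part of the crate, and it skips JSON string escaping), building the same request the `example_echo_line` test uses:

```rust
/// Hypothetical host-side helper (not in the crate): build one
/// line-delimited JSON-RPC 2.0 request in the shape `Input::Command` expects.
/// Sketch only: args are not JSON-escaped.
fn request_line(id: u32, command: &str, args: &[&str]) -> String {
    let args_json = args
        .iter()
        .map(|a| format!("\"{a}\""))
        .collect::<Vec<_>>()
        .join(",");
    format!(
        "{{\"id\":{id},\"jsonrpc\":\"2.0\",\"method\":\"command\",\"params\":{{\"command\":\"{command}\",\"args\":[{args_json}]}}}}"
    )
}

fn main() {
    // Matches the string asserted in `example_echo_line` above.
    let request = request_line(0, "echo", &["world I am here"]);
    assert_eq!(
        request,
        r#"{"id":0,"jsonrpc":"2.0","method":"command","params":{"command":"echo","args":["world I am here"]}}"#
    );
    println!("{request}");
}
```

The executable would answer with one JSON object per line (`method: "line"` for each stdout line, then `method: "done"` with the exit code), keyed by the same `id`.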
296 libs/embassy-container-init/src/main.rs (new file)
@@ -0,0 +1,296 @@
|
||||
use std::{collections::BTreeMap, process::Stdio, sync::Arc};
|
||||
|
||||
use async_stream::stream;
|
||||
use futures::{pin_mut, Stream, StreamExt};
|
||||
use tokio::{
|
||||
io::AsyncBufReadExt,
|
||||
process::Child,
|
||||
select,
|
||||
sync::{oneshot, Mutex},
|
||||
};
|
||||
use tokio::{io::BufReader, process::Command};
|
||||
use tracing::instrument;
|
||||
|
||||
use embassy_container_init::{Input, InputJsonRpc, JsonRpc, Output, OutputJsonRpc, RpcId};
|
||||
|
||||
const MAX_COMMANDS: usize = 10;
|
||||
|
||||
enum DoneProgramStatus {
|
||||
Wait(Result<std::process::ExitStatus, std::io::Error>),
|
||||
Killed,
|
||||
}
|
||||
/// Created from the child and rpc, to prove that the cmd was the one who died
|
||||
struct DoneProgram {
|
||||
id: RpcId,
|
||||
status: DoneProgramStatus,
|
||||
}
|
||||
|
||||
/// Used to attach the running command with the rpc
|
||||
struct ChildAndRpc {
|
||||
id: RpcId,
|
||||
child: Child,
|
||||
}
|
||||
|
||||
impl ChildAndRpc {
|
||||
fn new(id: RpcId, mut command: tokio::process::Command) -> ::std::io::Result<Self> {
|
||||
Ok(Self {
|
||||
id,
|
||||
child: command.spawn()?,
|
||||
})
|
||||
}
|
||||
async fn wait(&mut self) -> DoneProgram {
|
||||
let status = DoneProgramStatus::Wait(self.child.wait().await);
|
||||
DoneProgram {
|
||||
id: self.id.clone(),
|
||||
status,
|
||||
}
|
||||
}
|
||||
async fn kill(mut self) -> DoneProgram {
|
||||
if let Err(err) = self.child.kill().await {
|
||||
let id = &self.id;
|
||||
tracing::error!("Error while trying to kill a process {id:?}");
|
||||
tracing::debug!("{err:?}");
|
||||
}
|
||||
DoneProgram {
|
||||
id: self.id.clone(),
|
||||
status: DoneProgramStatus::Killed,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Controlls the tracing + other io events
|
||||
/// Can get the inputs from stdin
|
||||
/// Can start a command from an intputrpc returning stream of outputs
|
||||
/// Can output to stdout
|
||||
#[derive(Debug, Clone)]
|
||||
struct Io {
|
||||
commands: Arc<Mutex<BTreeMap<RpcId, oneshot::Sender<()>>>>,
|
||||
ids: Arc<Mutex<BTreeMap<RpcId, u32>>>,
|
||||
}
|
||||
|
||||
impl Io {
|
||||
fn start() -> Self {
|
||||
use tracing_error::ErrorLayer;
|
||||
use tracing_subscriber::prelude::*;
|
||||
use tracing_subscriber::{fmt, EnvFilter};
|
||||
|
||||
let filter_layer = EnvFilter::new("embassy_container_init=trace");
|
||||
let fmt_layer = fmt::layer().with_target(true);
|
||||
|
||||
tracing_subscriber::registry()
|
||||
.with(filter_layer)
|
||||
.with(fmt_layer)
|
||||
.with(ErrorLayer::default())
|
||||
.init();
|
||||
color_eyre::install().unwrap();
|
||||
Self {
|
||||
commands: Default::default(),
|
||||
ids: Default::default(),
|
||||
}
|
||||
}
|
||||
|
||||
#[instrument]
|
||||
fn command(&self, input: InputJsonRpc) -> impl Stream<Item = OutputJsonRpc> {
|
||||
let io = self.clone();
|
||||
stream! {
|
||||
let (id, command) = input.into_pair();
|
||||
match command {
|
||||
Input::Command {
|
||||
ref command,
|
||||
ref args,
|
||||
} => {
|
||||
let mut cmd = Command::new(command);
|
||||
cmd.args(args);
|
||||
|
||||
cmd.stdout(Stdio::piped());
|
||||
cmd.stderr(Stdio::piped());
|
||||
let mut child_and_rpc = match ChildAndRpc::new(id.clone(), cmd) {
|
||||
Err(_e) => return,
|
||||
Ok(a) => a,
|
||||
};
|
||||
|
||||
if let Some(child_id) = child_and_rpc.child.id() {
|
||||
io.ids.lock().await.insert(id.clone(), child_id);
|
||||
}
|
||||
|
||||
let stdout = child_and_rpc.child
|
||||
.stdout
|
||||
.take()
|
||||
.expect("child did not have a handle to stdout");
|
||||
let stderr = child_and_rpc.child
|
||||
.stderr
|
||||
.take()
|
||||
.expect("child did not have a handle to stderr");
|
||||
|
||||
let mut buff_out = BufReader::new(stdout).lines();
|
||||
let mut buff_err = BufReader::new(stderr).lines();
|
||||
|
||||
let spawned = tokio::spawn({
|
||||
let id = id.clone();
|
||||
async move {
|
||||
let end_command_receiver = io.create_end_command(id.clone()).await;
|
||||
tokio::select!{
|
||||
waited = child_and_rpc
|
||||
.wait() => {
|
||||
io.clean_id(&waited).await;
|
||||
match &waited.status {
|
||||
DoneProgramStatus::Wait(Ok(st)) => return st.code(),
|
||||
DoneProgramStatus::Wait(Err(err)) => tracing::debug!("Child {id:?} got error: {err:?}"),
|
||||
DoneProgramStatus::Killed => tracing::debug!("Child {id:?} already killed?"),
|
||||
}
|
||||
|
||||
},
|
||||
_ = end_command_receiver => {
|
||||
let status = child_and_rpc.kill().await;
|
||||
io.clean_id(&status).await;
|
||||
},
|
||||
}
|
||||
None
|
||||
}
|
||||
|
||||
});
|
||||
while let Ok(Some(line)) = buff_out.next_line().await {
|
||||
let output = Output::Line(line);
|
||||
let output = JsonRpc::new(id.clone(), output);
|
||||
tracing::trace!("OutputJsonRpc {{ id, output_rpc }} = {:?}", output);
|
||||
yield output;
|
||||
}
|
||||
while let Ok(Some(line)) = buff_err.next_line().await {
|
||||
yield JsonRpc::new(id.clone(), Output::Error(line));
|
||||
}
|
||||
let code = spawned.await.ok().flatten();
|
||||
yield JsonRpc::new(id, Output::Done(code));
|
||||
},
|
||||
Input::Kill() => {
|
||||
io.trigger_end_command(id).await;
|
||||
}
|
||||
Input::Term() => {
|
||||
io.term_by_rpc(&id).await;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
/// Used to get the string lines from the stdin
|
||||
fn inputs(&self) -> impl Stream<Item = String> {
|
||||
use std::io::BufRead;
|
||||
let (sender, receiver) = tokio::sync::mpsc::channel(100);
|
||||
tokio::task::spawn_blocking(move || {
|
||||
let stdin = std::io::stdin();
|
||||
for line in stdin.lock().lines().flatten() {
|
||||
tracing::trace!("Line = {}", line);
|
||||
sender.blocking_send(line).unwrap();
|
||||
}
|
||||
});
|
||||
        tokio_stream::wrappers::ReceiverStream::new(receiver)
    }

    /// Convert a stream of strings to stdout
    async fn output(&self, outputs: impl Stream<Item = String>) {
        pin_mut!(outputs);
        while let Some(output) = outputs.next().await {
            tracing::info!("{}", output);
            println!("{}", output);
        }
    }

    /// Helper for the command fn
    /// Part of a pair for the signal map, indicating that we should kill the command
    async fn trigger_end_command(&self, id: RpcId) {
        if let Some(command) = self.commands.lock().await.remove(&id) {
            if command.send(()).is_err() {
                tracing::trace!("Command {id:?} could not be ended, possible error or was done");
            }
        }
    }

    /// Helper for the command fn
    /// Part of a pair for the signal map, indicating that we should kill the command
    async fn create_end_command(&self, id: RpcId) -> oneshot::Receiver<()> {
        let (send, receiver) = oneshot::channel();
        if let Some(other_command) = self.commands.lock().await.insert(id.clone(), send) {
            if other_command.send(()).is_err() {
                tracing::trace!(
                    "Found other command {id:?} could not be ended, possible error or was done"
                );
            }
        }
        receiver
    }

    /// Used during cleanup of a process
    async fn clean_id(
        &self,
        done_program: &DoneProgram,
    ) -> (Option<u32>, Option<oneshot::Sender<()>>) {
        (
            self.ids.lock().await.remove(&done_program.id),
            self.commands.lock().await.remove(&done_program.id),
        )
    }

    /// Given the RpcId, tries to terminate the running command
    async fn term_by_rpc(&self, rpc: &RpcId) {
        let output = match self.remove_cmd_id(rpc).await {
            Some(id) => {
                let mut cmd = tokio::process::Command::new("kill");
                cmd.arg(format!("{id}"));
                cmd.output().await
            }
            None => return,
        };
        match output {
            Ok(_) => (),
            Err(err) => {
                tracing::error!("Could not kill rpc {rpc:?}");
                tracing::debug!("{err}");
            }
        }
    }

    /// Used as a cleanup
    async fn term_all(self) {
        let ids: Vec<_> = self.ids.lock().await.keys().cloned().collect();
        for id in ids {
            self.term_by_rpc(&id).await;
        }
    }

    async fn remove_cmd_id(&self, rpc: &RpcId) -> Option<u32> {
        self.ids.lock().await.remove(rpc)
    }
}
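The `create_end_command`/`trigger_end_command` pair maintains a map of per-command kill channels: registering a new channel under an existing id fires the old one, and triggering removes the entry so it fires at most once. A minimal synchronous sketch of that pattern, assuming a std `HashMap` and `mpsc` channels in place of the async `Mutex` and `oneshot` used here:

```rust
use std::collections::HashMap;
use std::sync::mpsc;

/// A synchronous sketch of the signal map kept by `Io`: one
/// `mpsc::Sender<()>` per command id (std channels standing in for the
/// tokio oneshot channels used in the real code).
struct EndSignals {
    commands: HashMap<u32, mpsc::Sender<()>>,
}

impl EndSignals {
    fn new() -> Self {
        EndSignals { commands: HashMap::new() }
    }

    /// Like `create_end_command`: register a kill channel for `id`,
    /// firing the channel of any command previously registered there.
    fn create(&mut self, id: u32) -> mpsc::Receiver<()> {
        let (send, recv) = mpsc::channel();
        if let Some(old) = self.commands.insert(id, send) {
            // The old command may already be done; ignore send errors.
            let _ = old.send(());
        }
        recv
    }

    /// Like `trigger_end_command`: fire and remove the kill channel.
    fn trigger(&mut self, id: u32) -> bool {
        match self.commands.remove(&id) {
            Some(send) => send.send(()).is_ok(),
            None => false,
        }
    }
}

fn main() {
    let mut sigs = EndSignals::new();
    let recv = sigs.create(1);
    assert!(sigs.trigger(1));
    assert!(recv.recv().is_ok());
    // Triggering again is a no-op: the entry was removed.
    assert!(!sigs.trigger(1));
}
```

The remove-on-trigger design is what makes the kill signal idempotent: a second trigger for the same id simply finds nothing to fire.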
#[tokio::main]
async fn main() {
    use futures::StreamExt;
    use tokio::signal::unix::{signal, SignalKind};
    let mut sigint = signal(SignalKind::interrupt()).unwrap();
    let mut sigterm = signal(SignalKind::terminate()).unwrap();
    let mut sigquit = signal(SignalKind::quit()).unwrap();
    let mut sighangup = signal(SignalKind::hangup()).unwrap();
    let io = Io::start();
    let outputs = io
        .inputs()
        .filter_map(|x| async move { InputJsonRpc::maybe_parse(&x) })
        .flat_map_unordered(MAX_COMMANDS, |x| io.command(x).boxed())
        .filter_map(|x| async move { x.maybe_serialize() });

    select! {
        _ = io.output(outputs) => {
            tracing::debug!("Done with inputs/outputs")
        },
        _ = sigint.recv() => {
            tracing::debug!("Sigint")
        },
        _ = sigterm.recv() => {
            tracing::debug!("Sig Term")
        },
        _ = sigquit.recv() => {
            tracing::debug!("Sigquit")
        },
        _ = sighangup.recv() => {
            tracing::debug!("Sighangup")
        }
    }
    io.term_all().await;
    ::std::process::exit(0);
}
@@ -1,9 +1,10 @@
use std::future::Future;
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::time::Duration;

use color_eyre::eyre::{eyre, Context, Error};
use futures::future::BoxFuture;
use futures::future::{pending, BoxFuture};
use futures::FutureExt;
use tokio::fs::File;
use tokio::sync::oneshot;
@@ -208,3 +209,80 @@ impl<T: 'static + Send> TimedResource<T> {
        self.ready.is_closed()
    }
}

type SingThreadTask<T> = futures::future::Select<
    futures::future::Then<
        oneshot::Receiver<T>,
        futures::future::Either<futures::future::Ready<T>, futures::future::Pending<T>>,
        fn(
            Result<T, oneshot::error::RecvError>,
        )
            -> futures::future::Either<futures::future::Ready<T>, futures::future::Pending<T>>,
    >,
    futures::future::Then<
        JoinHandle<()>,
        futures::future::Pending<T>,
        fn(Result<(), JoinError>) -> futures::future::Pending<T>,
    >,
>;

#[pin_project::pin_project(PinnedDrop)]
pub struct SingleThreadJoinHandle<T> {
    abort: Option<oneshot::Sender<()>>,
    #[pin]
    task: SingThreadTask<T>,
}
impl<T: Send + 'static> SingleThreadJoinHandle<T> {
    pub fn new<Fut: Future<Output = T>>(fut: impl FnOnce() -> Fut + Send + 'static) -> Self {
        let (abort, abort_recv) = oneshot::channel();
        let (return_val_send, return_val) = oneshot::channel();
        fn unwrap_recv_or_pending<T>(
            res: Result<T, oneshot::error::RecvError>,
        ) -> futures::future::Either<futures::future::Ready<T>, futures::future::Pending<T>>
        {
            match res {
                Ok(a) => futures::future::Either::Left(futures::future::ready(a)),
                _ => futures::future::Either::Right(pending()),
            }
        }
        fn make_pending<T>(_: Result<(), JoinError>) -> futures::future::Pending<T> {
            pending()
        }
        Self {
            abort: Some(abort),
            task: futures::future::select(
                return_val.then(unwrap_recv_or_pending),
                tokio::task::spawn_blocking(move || {
                    tokio::runtime::Handle::current().block_on(async move {
                        tokio::select! {
                            _ = abort_recv.fuse() => (),
                            res = fut().fuse() => { let _error = return_val_send.send(res); },
                        }
                    })
                })
                .then(make_pending),
            ),
        }
    }
}

impl<T: Send> Future for SingleThreadJoinHandle<T> {
    type Output = T;
    fn poll(
        self: std::pin::Pin<&mut Self>,
        cx: &mut std::task::Context<'_>,
    ) -> std::task::Poll<Self::Output> {
        let this = self.project();
        this.task.poll(cx).map(|t| t.factor_first().0)
    }
}

#[pin_project::pinned_drop]
impl<T> PinnedDrop for SingleThreadJoinHandle<T> {
    fn drop(self: Pin<&mut Self>) {
        let this = self.project();
        if let Some(abort) = this.abort.take() {
            let _error = abort.send(());
        }
    }
}
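The `PinnedDrop` impl is the key safety property of `SingleThreadJoinHandle`: dropping the handle fires the abort channel so the background task stops instead of leaking. A minimal sketch of that abort-on-drop idea, assuming a std thread and channel in place of the tokio blocking task used here:

```rust
use std::sync::mpsc;
use std::thread;

/// A sketch of the abort-on-drop idea behind `SingleThreadJoinHandle`,
/// using a std thread and channel instead of a tokio blocking task.
struct AbortOnDrop {
    abort: Option<mpsc::Sender<()>>,
    handle: Option<thread::JoinHandle<bool>>,
}

impl AbortOnDrop {
    fn spawn() -> Self {
        let (abort, abort_recv) = mpsc::channel::<()>();
        let handle = thread::spawn(move || {
            // Block until signaled (Ok) or the sender side goes away
            // without signaling (Err); report whether we were signaled.
            abort_recv.recv().is_ok()
        });
        AbortOnDrop { abort: Some(abort), handle: Some(handle) }
    }
}

impl Drop for AbortOnDrop {
    fn drop(&mut self) {
        // Like the pinned_drop impl: fire the abort channel, ignoring
        // errors if the task already finished.
        if let Some(abort) = self.abort.take() {
            let _ = abort.send(());
        }
        if let Some(handle) = self.handle.take() {
            let _ = handle.join();
        }
    }
}

fn main() {
    let task = AbortOnDrop::spawn();
    drop(task); // must not hang: drop signals the thread and joins it
}
```

The real handle additionally races the abort signal against the task's own completion, which is what the `futures::future::select` over the two `Then` futures expresses.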
@@ -6,10 +6,12 @@ edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
async-trait = "0.1.56"
dashmap = "5.3.4"
deno_core = "=0.136.0"
deno_ast = { version = "=0.15.0", features = ["transpiling"] }
dprint-swc-ext = "=0.1.1"
embassy_container_init = { path = "../embassy-container-init" }
reqwest = { version = "0.11.11" }
swc_atoms = "=0.2.11"
swc_common = "=0.18.7"
@@ -37,10 +37,37 @@ const writeFile = (
const readFile = (
  { volumeId = requireParam("volumeId"), path = requireParam("path") } = requireParam("options"),
) => Deno.core.opAsync("read_file", volumeId, path);

const runDaemon = (
  { command = requireParam("command"), args = [] } = requireParam("options"),
) => {
  let id = Deno.core.opAsync("start_command", command, args);
  let waitPromise = null;
  return {
    async wait() {
      waitPromise = waitPromise || Deno.core.opAsync("wait_command", await id);
      return waitPromise;
    },
    async term() {
      return Deno.core.opAsync("term_command", await id);
    },
  };
};
const runCommand = async (
  { command = requireParam("command"), args = [], timeoutMillis = 30000 } = requireParam("options"),
) => {
  let id = Deno.core.opAsync("start_command", command, args, timeoutMillis);
  return Deno.core.opAsync("wait_command", await id);
};
const sleep = (timeMs = requireParam("timeMs")) => Deno.core.opAsync("sleep", timeMs);

const rename = (
  {
    srcVolume = requireParam("srcVolume"),
    dstVolume = requireParam("dstVolume"),
    srcPath = requireParam("srcPath"),
    dstPath = requireParam("dstPath"),
  } = requireParam("options"),
@@ -122,6 +149,9 @@ const effects = {
  removeDir,
  metadata,
  rename,
  runCommand,
  sleep,
  runDaemon,
};

const runFunction = jsonPointerValue(mainModule, currentFunction);
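The `runDaemon` handle memoizes `waitPromise` so that calling `wait()` twice returns the same result rather than waiting on an id that has already been consumed. A minimal synchronous sketch of that memoization, assuming std threads and channels stand in for the daemon process:

```rust
use std::sync::mpsc;
use std::thread;

/// A synchronous sketch of the `runDaemon` handle: `wait()` caches its
/// result so repeated calls return the same status, mirroring the
/// memoized `waitPromise` in the JS effects.
struct DaemonHandle {
    exit: Option<mpsc::Receiver<i32>>,
    cached: Option<i32>,
}

impl DaemonHandle {
    fn spawn() -> Self {
        let (send, recv) = mpsc::channel();
        thread::spawn(move || {
            // Stand-in for the daemon's work; report an exit status.
            let _ = send.send(0);
        });
        DaemonHandle { exit: Some(recv), cached: None }
    }

    fn wait(&mut self) -> i32 {
        if let Some(recv) = self.exit.take() {
            // First call: actually wait; later calls reuse the result.
            self.cached = Some(recv.recv().unwrap_or(-1));
        }
        self.cached.expect("set on first wait")
    }
}

fn main() {
    let mut daemon = DaemonHandle::spawn();
    assert_eq!(daemon.wait(), 0);
    assert_eq!(daemon.wait(), 0); // cached, does not block again
}
```

Without the cache, a second `wait()` would fail the same way a second `wait_command` for a consumed id does on the Rust side.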
@@ -1,3 +1,5 @@
use std::collections::BTreeMap;
use std::future::Future;
use std::path::{Path, PathBuf};
use std::pin::Pin;
use std::sync::Arc;
@@ -9,11 +11,12 @@ use deno_core::{
    resolve_import, Extension, JsRuntime, ModuleLoader, ModuleSource, ModuleSourceFuture,
    ModuleSpecifier, ModuleType, OpDecl, RuntimeOptions, Snapshot,
};
use helpers::{script_dir, NonDetachingJoinHandle};
use models::{PackageId, ProcedureName, Version, VolumeId};
use helpers::{script_dir, SingleThreadJoinHandle};
use models::{ExecCommand, PackageId, ProcedureName, TermCommand, Version, VolumeId};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio::io::AsyncReadExt;
use tokio::sync::Mutex;

pub trait PathForVolumeId: Send + Sync {
    fn path_for(
@@ -80,6 +83,8 @@ const SNAPSHOT_BYTES: &[u8] = include_bytes!("./artifacts/JS_SNAPSHOT.bin");

#[cfg(target_arch = "aarch64")]
const SNAPSHOT_BYTES: &[u8] = include_bytes!("./artifacts/ARM_JS_SNAPSHOT.bin");
type WaitFns = Arc<Mutex<BTreeMap<u32, Pin<Box<dyn Future<Output = ResultType>>>>>>;

#[derive(Clone)]
struct JsContext {
    sandboxed: bool,
@@ -90,8 +95,17 @@ struct JsContext {
    volumes: Arc<dyn PathForVolumeId>,
    input: Value,
    variable_args: Vec<serde_json::Value>,
    command_inserter: ExecCommand,
    term_command: TermCommand,
    wait_fns: WaitFns,
}
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
#[serde(rename_all = "kebab-case")]
enum ResultType {
    Error(String),
    ErrorCode(i32, String),
    Result(serde_json::Value),
}
#[derive(Clone, Default)]
struct AnswerState(std::sync::Arc<deno_core::parking_lot::Mutex<Value>>);

@@ -162,6 +176,7 @@ impl ModuleLoader for ModsLoader {
        })
    }
}

pub struct JsExecutionEnvironment {
    sandboxed: bool,
    base_directory: PathBuf,
@@ -169,6 +184,8 @@ pub struct JsExecutionEnvironment {
    package_id: PackageId,
    version: Version,
    volumes: Arc<dyn PathForVolumeId>,
    command_inserter: ExecCommand,
    term_command: TermCommand,
}

impl JsExecutionEnvironment {
@@ -177,7 +194,9 @@ impl JsExecutionEnvironment {
        package_id: &PackageId,
        version: &Version,
        volumes: Box<dyn PathForVolumeId>,
    ) -> Result<Self, (JsError, String)> {
        command_inserter: ExecCommand,
        term_command: TermCommand,
    ) -> Result<JsExecutionEnvironment, (JsError, String)> {
        let data_dir = data_directory.as_ref();
        let base_directory = data_dir;
        let js_code = JsCode({
@@ -203,13 +222,15 @@ impl JsExecutionEnvironment {
            };
            buffer
        });
        Ok(Self {
        Ok(JsExecutionEnvironment {
            base_directory: base_directory.to_owned(),
            module_loader: ModsLoader { code: js_code },
            package_id: package_id.clone(),
            version: version.clone(),
            volumes: volumes.into(),
            sandboxed: false,
            command_inserter,
            term_command,
        })
    }
    pub fn read_only_effects(mut self) -> Self {
@@ -234,12 +255,9 @@ impl JsExecutionEnvironment {
            ));
        }
        };
        let safer_handle: NonDetachingJoinHandle<_> =
            tokio::task::spawn_blocking(move || self.execute(procedure_name, input, variable_args))
                .into();
        let output = safer_handle
            .await
            .map_err(|err| (JsError::Tokio, format!("Tokio gave us the error: {}", err)))??;
        let safer_handle =
            SingleThreadJoinHandle::new(move || self.execute(procedure_name, input, variable_args));
        let output = safer_handle.await?;
        match serde_json::from_value(output.clone()) {
            Ok(x) => Ok(x),
            Err(err) => {
@@ -275,11 +293,15 @@ impl JsExecutionEnvironment {
            fns::get_variable_args::decl(),
            fns::set_value::decl(),
            fns::is_sandboxed::decl(),
            fns::start_command::decl(),
            fns::wait_command::decl(),
            fns::sleep::decl(),
            fns::term_command::decl(),
        ]
    }

    fn execute(
        &self,
    async fn execute(
        self,
        procedure_name: ProcedureName,
        input: Value,
        variable_args: Vec<serde_json::Value>,
@@ -304,6 +326,9 @@ impl JsExecutionEnvironment {
            sandboxed: self.sandboxed,
            input,
            variable_args,
            command_inserter: self.command_inserter.clone(),
            term_command: self.term_command.clone(),
            wait_fns: Default::default(),
        };
        let ext = Extension::builder()
            .ops(Self::declarations())
@@ -321,25 +346,25 @@ impl JsExecutionEnvironment {
            startup_snapshot: Some(Snapshot::Static(SNAPSHOT_BYTES)),
            ..Default::default()
        };
        let mut runtime = JsRuntime::new(runtime_options);
        let runtime = Arc::new(Mutex::new(JsRuntime::new(runtime_options)));

        let future = async move {
            let mod_id = runtime
                .lock()
                .await
                .load_main_module(&"file:///loadModule.js".parse().unwrap(), None)
                .await?;
            let evaluated = runtime.mod_evaluate(mod_id);
            let res = runtime.run_event_loop(false).await;
            let evaluated = runtime.lock().await.mod_evaluate(mod_id);
            let res = runtime.lock().await.run_event_loop(false).await;
            res?;
            evaluated.await??;
            Ok::<_, AnyError>(())
        };

        tokio::runtime::Handle::current()
            .block_on(future)
            .map_err(|e| {
                tracing::debug!("{:?}", e);
                (JsError::Javascript, format!("{}", e))
            })?;
        future.await.map_err(|e| {
            tracing::debug!("{:?}", e);
            (JsError::Javascript, format!("{}", e))
        })?;

        let answer = answer_state.0.lock().clone();
        Ok(answer)
@@ -348,23 +373,24 @@ impl JsExecutionEnvironment {

/// Note: all of these methods must be callable at any time; any call restrictions should live in the Rust side
mod fns {
    use std::cell::RefCell;
    use std::collections::BTreeMap;
    use std::convert::TryFrom;
    use std::os::unix::fs::MetadataExt;
    use std::path::{Path, PathBuf};
    use std::rc::Rc;
    use std::{cell::RefCell, time::Duration};

    use deno_core::anyhow::{anyhow, bail};
    use deno_core::error::AnyError;
    use deno_core::*;
    use embassy_container_init::RpcId;
    use helpers::{to_tmp_path, AtomicFile};
    use models::VolumeId;
    use models::{TermCommand, VolumeId};
    use serde_json::Value;
    use tokio::io::AsyncWriteExt;

    use super::{AnswerState, JsContext};
    use crate::{system_time_as_unix_ms, MetadataJs};
    use crate::{system_time_as_unix_ms, MetadataJs, ResultType};

    #[derive(serde::Serialize, serde::Deserialize, Debug, Clone, Default)]
    struct FetchOptions {
@@ -386,10 +412,13 @@ mod fns {
        url: url::Url,
        options: Option<FetchOptions>,
    ) -> Result<FetchResponse, AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let sandboxed = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            ctx.sandboxed
        };

        if ctx.sandboxed {
        if sandboxed {
            bail!("Will not run fetch in sandboxed mode");
        }

@@ -432,7 +461,7 @@ mod fns {
            body: response.text().await.ok(),
        };

        return Ok(fetch_response);
        Ok(fetch_response)
    }

    #[op]
@@ -441,12 +470,13 @@ mod fns {
        volume_id: VolumeId,
        path_in: PathBuf,
    ) -> Result<String, AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
            .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
        let volume_path = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            ctx.volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
                .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?
        };
        //get_path_for in volume.rs
        let new_file = volume_path.join(path_in);
        if !is_subset(&volume_path, &new_file).await? {
@@ -465,12 +495,13 @@ mod fns {
        volume_id: VolumeId,
        path_in: PathBuf,
    ) -> Result<MetadataJs, AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
            .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
        let volume_path = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            ctx.volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
                .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?
        };
        //get_path_for in volume.rs
        let new_file = volume_path.join(path_in);
        if !is_subset(&volume_path, &new_file).await? {
@@ -517,13 +548,16 @@ mod fns {
        path_in: PathBuf,
        write: String,
    ) -> Result<(), AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
            .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
        if ctx.volumes.readonly(&volume_id) {
        let (volumes, volume_path) = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            let volume_path = ctx
                .volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
                .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
            (ctx.volumes.clone(), volume_path)
        };
        if volumes.readonly(&volume_id) {
            bail!("Volume {} is readonly", volume_id);
        }

@@ -566,17 +600,20 @@ mod fns {
        dst_volume: VolumeId,
        dst_path: PathBuf,
    ) -> Result<(), AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &src_volume)
            .ok_or_else(|| anyhow!("There is no {} in volumes", src_volume))?;
        let volume_path_out = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &dst_volume)
            .ok_or_else(|| anyhow!("There is no {} in volumes", dst_volume))?;
        if ctx.volumes.readonly(&dst_volume) {
        let (volumes, volume_path, volume_path_out) = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            let volume_path = ctx
                .volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &src_volume)
                .ok_or_else(|| anyhow!("There is no {} in volumes", src_volume))?;
            let volume_path_out = ctx
                .volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &dst_volume)
                .ok_or_else(|| anyhow!("There is no {} in volumes", dst_volume))?;
            (ctx.volumes.clone(), volume_path, volume_path_out)
        };
        if volumes.readonly(&dst_volume) {
            bail!("Volume {} is readonly", dst_volume);
        }

@@ -614,13 +651,16 @@ mod fns {
        volume_id: VolumeId,
        path_in: PathBuf,
    ) -> Result<(), AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
            .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
        if ctx.volumes.readonly(&volume_id) {
        let (volumes, volume_path) = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            let volume_path = ctx
                .volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
                .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
            (ctx.volumes.clone(), volume_path)
        };
        if volumes.readonly(&volume_id) {
            bail!("Volume {} is readonly", volume_id);
        }
        let new_file = volume_path.join(path_in);
@@ -641,13 +681,16 @@ mod fns {
        volume_id: VolumeId,
        path_in: PathBuf,
    ) -> Result<(), AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
            .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
        if ctx.volumes.readonly(&volume_id) {
        let (volumes, volume_path) = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            let volume_path = ctx
                .volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
                .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
            (ctx.volumes.clone(), volume_path)
        };
        if volumes.readonly(&volume_id) {
            bail!("Volume {} is readonly", volume_id);
        }
        let new_file = volume_path.join(path_in);
@@ -668,13 +711,16 @@ mod fns {
        volume_id: VolumeId,
        path_in: PathBuf,
    ) -> Result<(), AnyError> {
        let state = state.borrow();
        let ctx: &JsContext = state.borrow();
        let volume_path = ctx
            .volumes
            .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
            .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
        if ctx.volumes.readonly(&volume_id) {
        let (volumes, volume_path) = {
            let state = state.borrow();
            let ctx: &JsContext = state.borrow();
            let volume_path = ctx
                .volumes
                .path_for(&ctx.datadir, &ctx.package_id, &ctx.version, &volume_id)
                .ok_or_else(|| anyhow!("There is no {} in volumes", volume_id))?;
            (ctx.volumes.clone(), volume_path)
        };
        if volumes.readonly(&volume_id) {
            bail!("Volume {} is readonly", volume_id);
        }
        let new_file = volume_path.join(path_in);
@@ -777,6 +823,102 @@ mod fns {
        Ok(ctx.sandboxed)
    }

    #[op]
    async fn term_command(state: Rc<RefCell<OpState>>, id: u32) -> Result<(), AnyError> {
        let term_command_impl: TermCommand = {
            let state = state.borrow();
            let ctx = state.borrow::<JsContext>();
            ctx.term_command.clone()
        };
        if let Err(err) = term_command_impl(embassy_container_init::RpcId::UInt(id)).await {
            bail!("{}", err);
        }
        Ok(())
    }

    #[op]
    async fn start_command(
        state: Rc<RefCell<OpState>>,
        command: String,
        args: Vec<String>,
        timeout: Option<u64>,
    ) -> Result<u32, AnyError> {
        use embassy_container_init::Output;
        let (command_inserter, wait_fns) = {
            let state = state.borrow();
            let ctx = state.borrow::<JsContext>();
            (ctx.command_inserter.clone(), ctx.wait_fns.clone())
        };

        let (sender, mut receiver) = tokio::sync::mpsc::unbounded_channel::<Output>();
        let id = match command_inserter(
            command,
            args.into_iter().collect(),
            sender,
            timeout.map(std::time::Duration::from_millis),
        )
        .await
        {
            Err(err) => bail!(err),
            Ok(RpcId::UInt(a)) => a,
        };

        let wait = async move {
            let mut answer = String::new();
            let mut command_error = String::new();
            let mut status: Option<i32> = None;
            while let Some(output) = receiver.recv().await {
                match output {
                    Output::Line(value) => {
                        answer.push_str(&value);
                        answer.push('\n');
                    }
                    Output::Error(error) => {
                        command_error.push_str(&error);
                        command_error.push('\n');
                    }
                    Output::Done(error_code) => {
                        status = error_code;
                        break;
                    }
                }
            }
            if !command_error.is_empty() {
                if let Some(status) = status {
                    return ResultType::ErrorCode(status, command_error);
                }

                return ResultType::Error(command_error);
            }

            ResultType::Result(serde_json::Value::String(answer))
        };
        wait_fns.lock().await.insert(id, Box::pin(wait));
        Ok(id)
    }

    #[op]
    async fn wait_command(state: Rc<RefCell<OpState>>, id: u32) -> Result<ResultType, AnyError> {
        let wait_fns = {
            let state = state.borrow();
            let ctx = state.borrow::<JsContext>();
            ctx.wait_fns.clone()
        };

        let found_future = match wait_fns.lock().await.remove(&id) {
            Some(a) => a,
            None => bail!("No future for id {id}, could have been removed already"),
        };

        Ok(found_future.await)
    }
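The `start_command`/`wait_command` pair works through the `wait_fns` map: starting a command registers a deferred result under its id, and waiting removes that entry and consumes it, so a second wait for the same id fails. A minimal synchronous sketch of that take-once registry, assuming boxed closures stand in for the pinned futures used here:

```rust
use std::collections::HashMap;

/// A synchronous sketch of the `wait_fns` map: `start_command` stores a
/// deferred result under its id, and `wait_command` removes and runs it
/// exactly once (boxed closures standing in for pinned futures).
struct WaitFns {
    fns: HashMap<u32, Box<dyn FnOnce() -> String>>,
}

impl WaitFns {
    fn new() -> Self {
        WaitFns { fns: HashMap::new() }
    }

    /// Like the `wait_fns.lock().await.insert(...)` in `start_command`.
    fn register(&mut self, id: u32, f: impl FnOnce() -> String + 'static) {
        self.fns.insert(id, Box::new(f));
    }

    /// Like `wait_command`: a second wait for the same id fails, since
    /// the entry is removed on the first call.
    fn wait(&mut self, id: u32) -> Result<String, String> {
        match self.fns.remove(&id) {
            Some(f) => Ok(f()),
            None => Err(format!("No future for id {id}, could have been removed already")),
        }
    }
}

fn main() {
    let mut wait_fns = WaitFns::new();
    wait_fns.register(3, || "done".to_string());
    assert_eq!(wait_fns.wait(3).as_deref(), Ok("done"));
    assert!(wait_fns.wait(3).is_err());
}
```

Removing the entry on wait is what lets the real code drop the buffered output as soon as one caller has consumed it.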
    #[op]
    async fn sleep(time_ms: u64) -> Result<(), AnyError> {
        tokio::time::sleep(Duration::from_millis(time_ms)).await;

        Ok(())
    }

    /// We need to make sure that during file access we don't reach beyond our scope of control
    async fn is_subset(
        parent: impl AsRef<Path>,
@@ -6,10 +6,12 @@ edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
embassy_container_init = { path = "../embassy-container-init" }
emver = { version = "0.1", features = ["serde"] }
patch-db = { version = "*", path = "../../patch-db/patch-db", features = [
    "trace",
] }
serde = { version = "1.0", features = ["derive", "rc"] }
thiserror = "1.0"
emver = { version = "0.1", features = ["serde"] }
rand = "0.8"
tokio = { version = "1", features = ["full"] }
thiserror = "1.0"
@@ -6,6 +6,7 @@ mod interface_id;
mod invalid_id;
mod package_id;
mod procedure_name;
mod type_aliases;
mod version;
mod volume_id;

@@ -17,5 +18,6 @@ pub use interface_id::*;
pub use invalid_id::*;
pub use package_id::*;
pub use procedure_name::*;
pub use type_aliases::*;
pub use version::*;
pub use volume_id::*;
@@ -35,7 +35,7 @@ impl ProcedureName {
    }
    pub fn js_function_name(&self) -> Option<String> {
        match self {
            ProcedureName::Main => None,
            ProcedureName::Main => Some("/main".to_string()),
            ProcedureName::LongRunning => None,
            ProcedureName::CreateBackup => Some("/createBackup".to_string()),
            ProcedureName::RestoreBackup => Some("/restoreBackup".to_string()),

libs/models/src/type_aliases.rs (new file, 25 lines)
@@ -0,0 +1,25 @@
use std::{future::Future, pin::Pin, sync::Arc, time::Duration};

use embassy_container_init::RpcId;
use tokio::sync::mpsc::UnboundedSender;

/// Used by the js-executor: the ability to create a command in an already running exec
pub type ExecCommand = Arc<
    dyn Fn(
            String,
            Vec<String>,
            UnboundedSender<embassy_container_init::Output>,
            Option<Duration>,
        ) -> Pin<Box<dyn Future<Output = Result<RpcId, String>> + 'static>>
        + Send
        + Sync
        + 'static,
>;

/// Used by the js-executor: the ability to terminate a running command
pub type TermCommand = Arc<
    dyn Fn(RpcId) -> Pin<Box<dyn Future<Output = Result<(), String>> + 'static>>
        + Send
        + Sync
        + 'static,
>;