Project author: krystianity

Project description:
:beers: module-as-a-process cluster management :beer:

Language: JavaScript
Repository: git://github.com/krystianity/eisenhertz.git
Created: 2017-06-22T21:46:20Z
Project page: https://github.com/krystianity/eisenhertz

License: MIT License

Eisenhertz

nodejs module-as-a-process cluster management

What does it do?

  • Eisenhertz excels at one thing only:
    keeping a set of dynamic module executions up and running across any number of servers, VMs, and containers
  • You pass in a module to execute and an unlimited amount of config data (jobs) for each execution,
    and eisenhertz will ensure that the correct number of modules is constantly running across
    all instances of itself, where each module runs in its own process
  • It also gives you control to manually add or remove such jobs in real time
  • Additionally, you can talk to the processes via IPC and retrieve metrics from all running
    processes
  • Eisenhertz does not work as a stand-alone "server setup"; its main idea is to provide a basis
    for a project that requires scaling across many machines in stand-alone processes

Requirements

  • Eisenhertz relies heavily on async/await, so you will need at least Node.js >= 7.0
  • The message cortex and job queue rely on Redis >= 2.8.18
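
The Node.js requirement above can be enforced at startup with a small guard. This is a generic sketch, not an eisenhertz API:

```javascript
// Minimal runtime guard for the Node.js version requirement above
// (generic sketch; eisenhertz does not ship this check itself).
const [major] = process.versions.node.split(".").map(Number);

if (major < 7) {
    throw new Error(`eisenhertz needs Node.js >= 7.0, found ${process.versions.node}`);
}
```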

Install via

```shell
npm i eisenhertz
```

Server Setup

```javascript
const {
    Eisenhertz,
    defaultConfig,
    defaultLogger
} = require("eisenhertz");

const fetchJobNames = callback => {
    callback(null, [
        "one",
        "two"
    ]);
};

const fetchJobDetails = (id, callback) => {

    let config = {};

    switch (id) {

        case "one":
            config.port = 1337;
            config.hi = "hi from one";
            break;

        case "two":
            config.port = 1338;
            config.hi = "hi from two";
            break;
    }

    callback(null, {
        config
    });
};

// note: the imported defaultConfig is used here; the original snippet
// passed "config", which is not defined in this scope
const eisenhertz = new Eisenhertz(defaultConfig, defaultLogger());

eisenhertz
    .start(fetchJobNames, fetchJobDetails)
    .then(() => {});
```
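
Both callbacks follow the standard Node.js error-first convention, so an unknown job id can be reported as an error. A minimal sketch, assuming the same two jobs as above:

```javascript
// Sketch: error-first handling for unknown job ids. The configs map mirrors
// the switch statement in the setup above; the early callback(err) path is
// an assumption about how a lookup failure would be reported.
const fetchJobDetails = (id, callback) => {

    const configs = {
        one: { port: 1337, hi: "hi from one" },
        two: { port: 1338, hi: "hi from two" }
    };

    if (!configs[id]) {
        return callback(new Error(`unknown job: ${id}`));
    }

    callback(null, { config: configs[id] });
};
```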

Fork-Module Setup

```javascript
const { ForkProcess } = require("eisenhertz");
const express = require("express");

const fork = new ForkProcess();
let incomingRequests = 0;

const processCallback = data => {

    const app = express();

    app.get("/hi", (req, res) => {
        incomingRequests++;
        res.status(200).json({
            message: data.config.hi
        });
    });

    app.listen(data.config.port, () => {
        fork.log("ready");
    });
};

const metricsCallback = cb => {
    cb(null, {
        incomingRequests
    });
};

fork.connect(processCallback, metricsCallback);
```
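
Because the callbacks handed to `fork.connect` are plain functions, they can be exercised in isolation without a running cluster. A small sketch of the metrics callback:

```javascript
// Exercising a metrics callback in isolation (plain Node.js, no cluster
// or Redis needed) to see the counter snapshot it reports.
let incomingRequests = 0;

const metricsCallback = cb => {
    cb(null, {
        incomingRequests
    });
};

incomingRequests = 3;
metricsCallback((err, metrics) => {
    console.log(metrics.incomingRequests); // 3
});
```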

Example Setup Description

  • The example setup above gives you the possibility to scale
    a demo webserver across unlimited instances, simply by deploying the
    server module to servers, VMs, or containers.
  • As soon as it starts, it will spawn 2 processes on any of the
    parent systems, each running one of the two webservers.

Configuration

```javascript
{
    prefix: "eh",
    redis: {
        host: "localhost",
        port: 6379,
        db: 7
    },
    redlock: {
        driftFactor: 0.01,
        retryCount: 2,
        retryDelay: 200,
        retryJitter: 200
    },
    settings: {
        lockDuration: 4500,
        stalledInterval: 4500,
        maxStalledCount: 1,
        guardInterval: 2500,
        retryProcessDelay: 2500
    },
    properties: {
        name: "eh:empty",
        maxJobsPerWorker: 2,
        masterLock: "eh:master:lock",
        masterLockTtl: 2000,
        masterLockReAttempt: 4000,
        maxInstancesOfJobPerNode: 1
    },
    jobOptions: {
        priority: 1,
        delay: 1000,
        attempts: 1, // don't touch
        repeat: undefined, // don't touch
        backoff: undefined, // don't touch
        lifo: undefined, // don't touch
        timeout: undefined, // don't touch
        jobId: undefined, // will be set by TaskHandler
        removeOnComplete: true, // don't touch
        removeOnFail: true // don't touch
    },
    fork: {
        module: "./fork/ForkProcess.js"
    }
}
```
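
In practice you rarely write the whole object by hand; you start from the exported defaultConfig and override the parts that differ. A sketch, using a local stand-in object with the same shape (a real setup would use the `defaultConfig` exported by eisenhertz):

```javascript
// Overriding part of the configuration shown above. "defaultConfig" here is
// a local stand-in subset with the same shape, so the sketch runs without
// the eisenhertz package installed.
const defaultConfig = {
    prefix: "eh",
    redis: { host: "localhost", port: 6379, db: 7 }
};

// shallow-merge per section: only redis.host changes, everything else is kept
const config = {
    ...defaultConfig,
    redis: { ...defaultConfig.redis, host: "redis.internal" }
};

console.log(config.redis.host); // "redis.internal"
console.log(config.redis.db); // 7
```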

Controlling jobs on nodes

```javascript
config.properties.maxInstancesOfJobPerNode
/*
    lets you limit the number of instances of a job
    that run on a single node; you can define a job instance
    by using ":" as delimiter, e.g. jobOne:1, jobOne:2 and jobOne:3.
    If the limit is reached, the node will return the job with
    an error back to the queue after a small timeout.
*/

config.properties.maxJobsPerWorker
/*
    lets you limit the number of jobs per worker.
    It is usually a good idea to limit this to the number
    of cores (* 2 on Intel systems with hyper-threading) of the node's host.
*/
```
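
The ":" delimiter described above turns one logical job into several instances. A sketch of what such a job list looks like, with a hypothetical helper for recovering the base name:

```javascript
// Sketch: three instances of the same logical job, named with ":" as the
// delimiter described above. With maxInstancesOfJobPerNode = 1, at most one
// of these would be accepted per node.
const fetchJobNames = callback => {
    callback(null, ["jobOne:1", "jobOne:2", "jobOne:3"]);
};

// hypothetical helper (not part of eisenhertz): derive the base job name
// from an instance id
const baseName = id => id.split(":")[0];

console.log(baseName("jobOne:2")); // "jobOne"
```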