Network Monitoring


Goal: have a monitoring service for the sokol and core networks.

  • Log data every N minutes for later review, analysis, and history/graphs
  • Notify in case of network spikes, changes in the number of active nodes, spikes in average block times, etc.

Different Solutions:

  1. Get all data parsed from netstats pages: sokol and core

    • We could have a separate service/script that runs every N minutes, takes a snapshot of all necessary data, and then logs it to a Google spreadsheet or some other log file.
  • Pros:

    • All data is conveniently aggregated on the netstats page, so we could query it from a browser (JS dynamic data cannot be queried directly; see the next item)
    • Fairly easy to set up. We could use Python and Selenium WebDriver to get the JavaScript-generated data (open a browser session with WebDriver and collect all info every N minutes)
    • Prototype is ready, will share it soon
  • Cons:

    • This depends on the centralized netstats website. We would need to trust its data, which might not be the best option.
  2. Also use the netstats websites, but enhance them and log this data somewhere.
  • Pros:

    • Source code is available
    • Should not be hard to log data and trigger events based on parameters
  • Cons:

    • Same as in item #1 (see above)
  3. Aggregate and parse all data directly from the blockchain and then post it into Google Docs
  • Pros:

    • Most trusted and accurate information
  • Cons:

    • Have to write code to aggregate all necessary information

Please share your thoughts and ideas on what would be the best implementation.

I would love a solution that could dump this data into a Google Sheet, since that would allow many people to experiment and play around with it.

I tried using Google Sheets’ built-in functions (=IMPORTXML and others, for example), but because the data is dynamically generated via JS, these tools don’t work.

I am able to get the JS data every N minutes. (Using option #1 since it was the easiest and fastest route to get something working.)

The way it works:

  1. The Python script opens a browser session
  2. Waits several seconds for the netstats page to load
  3. Gets the necessary data (for now, only the number of active and total nodes, plus a list of average block times)
  4. Parses it and makes it ready for import
  5. Closes the browser session
  6. Repeats in 5 or 10 minutes
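Steps 3–4 are where most of the logic lives. Here is a minimal, self-contained sketch of the parsing part (using stdlib `re` instead of BeautifulSoup so it runs without dependencies; in the real script the HTML would come from Selenium’s `driver.page_source` after the wait, and the “ACTIVE NODES 19/22” text pattern is an assumption about how the rendered page reads):

```python
import re

def parse_active_nodes(rendered_html):
    """Pull the 'active/total' node counts out of rendered netstats-style HTML.

    The text pattern below is an assumption about the page; adjust it
    to the real markup before relying on it.
    """
    m = re.search(r"active\s+nodes\D*(\d+)\s*/\s*(\d+)", rendered_html, re.IGNORECASE)
    if m is None:
        return None  # page layout changed or not loaded yet
    return int(m.group(1)), int(m.group(2))

print(parse_active_nodes("<span>ACTIVE NODES</span> <span>19/22</span>"))  # (19, 22)
```

Returning `None` when the pattern is missing lets the polling loop skip a bad snapshot instead of crashing mid-run.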

I’ll try to import it to Google Docs and reply here
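For the Google Docs step, one hedged option is the gspread library. The spreadsheet name and credentials file below are placeholders, not anything set up yet; `make_row` just flattens one snapshot into a spreadsheet row:

```python
import datetime

def make_row(active, total, block_times):
    """Flatten one netstats snapshot into a single spreadsheet row:
    [timestamp, active_nodes, total_nodes, block_time_1, block_time_2, ...]"""
    stamp = datetime.datetime.utcnow().isoformat()
    return [stamp, active, total] + list(block_times)

if __name__ == "__main__":
    import gspread  # pip install gspread; needs a Google service account
    gc = gspread.service_account(filename="creds.json")  # placeholder credentials file
    sheet = gc.open("netstats-log").sheet1               # placeholder spreadsheet name
    sheet.append_row(make_row(19, 22, [5.1, 5.0, 4.9]))
```

Appending one row per snapshot keeps the sheet usable for graphs and history without any server-side code.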


But you’re having the Python script run on your machine. What if your machine goes down, or you don’t remember to run the script?

If one is running on a local machine, couldn’t you check every 2.5 seconds? Maybe even compare the current block height to a local variable holding the prior block height: if the two are the same, exit out; otherwise write the data to a file/Google Sheet.

Yes, it could check every 2-3 seconds and then compare data
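That check-and-compare idea can be sketched as a small loop. The `get_height` and `write_row` callables are hypothetical stand-ins for the netstats query and the file/sheet writer; `max_checks` only exists so the sketch can be exercised without running forever:

```python
import time

def watch_height(get_height, write_row, interval=2.5, max_checks=None):
    """Poll the chain head every `interval` seconds; only write a row when
    the block height actually advanced, otherwise skip this round."""
    last = None
    checks = 0
    while max_checks is None or checks < max_checks:
        height = get_height()
        if height != last:
            write_row(height)  # new block(s) arrived since last check
            last = height
        checks += 1
        if max_checks is None or checks < max_checks:
            time.sleep(interval)
```

With `interval=2.5` this samples at half the ~5-second block time, so consecutive identical heights really do mean “no new block yet” rather than a missed one.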

For now I am running locally from my machine, but if we choose this approach, this script could be executed from a dedicated server.

But it looks like it would be better to enhance the netstats website and log the data at the same time as displaying it: the site already aggregates all the data, so it could also log it.

For approach #1, we don’t really need to get data every 2.5 seconds (at least not for block times). We get about 40 blocks at a time, so we could wait a bit longer for the next set:

Here is a sample of data that we get in one shot:

Number of active nodes is: 19/22

Block Time:

This is definitely not the best solution, but here is a Python script (in progress) in case anybody is interested and would like to learn:

Requirements for Windows:

  • Firefox WebDriver (also added to PATH)
  • Python
  • Selenium
  • BeautifulSoup

It has comments on things that need to be added


Can you have this script export the address that mined the block on the same line as the block time? This could help show who is not mining blocks and when.

I would think the script would also need to be able to ‘know’ who is responsible (and be able to reach out, perhaps by email, asking what’s what). We are generating a block every 5 seconds, so storing this info in a spreadsheet is out (along with all the pre-baked scripts). But then it would need to be constantly running and maintained by someone…

The ‘hack’ that I’ve got: a Raspberry Pi with the 7-inch screen in kiosk mode and with my validators pinned on top. More artsy than anything; an automated solution would be better for the long term. Or even buddies checking in with one another.


I’ve tried a couple of things yesterday night… It’s not straightforward, since all the info comes from the netstats website, and when we query block times, it’s still unclear which key has the larger block time. I would need to add logic to figure that out, and then it could be printed.

As for where to run it and how to maintain it: for a long-term solution we could dedicate one or more VMs to this and keep it running all the time. It could also notify the owner if there is a problem.

Great idea about the RPi! I might try that as well later. Does it check your validator nodes and show the information on the screen?

I think it would be a much better solution if I stop using the netstats page and just query all the information directly from the POA network using Web3… Will update once I have some results…
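That direct route could look roughly like this with web3.py — a sketch under assumptions: the RPC URL is a placeholder, and the field names follow current web3.py, where `w3.eth.get_block` returns dict-like blocks with `timestamp` and `miner` (the `miner` field would also cover the “who mined it” question above):

```python
def block_time(newer, older):
    """Seconds between two blocks, given dict-like blocks with a 'timestamp' field
    (as web3.py's get_block returns)."""
    return newer["timestamp"] - older["timestamp"]

if __name__ == "__main__":
    from web3 import Web3  # pip install web3
    w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder RPC endpoint
    head = w3.eth.get_block("latest")
    prev = w3.eth.get_block(head["number"] - 1)
    # On a PoA chain, 'miner' holds the address that sealed the block
    print(head["number"], head["miner"], block_time(head, prev))
```

This removes the trust-the-netstats-site problem from option #1, at the cost of writing the aggregation ourselves.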

My nodes are pinned, so they are at the top.

As for the more direct route - that’s probably a better path. And then you can dump the data into a flat file and run your program locally.


Here’s my “solution” - Cola for size comparison: (a hack, to be sure)
