Metron Docker is a Docker Compose application intended only for development and integration testing of Apache Metron. These images can quickly spin up the underlying components on which Metron runs.
None of the core Metron components are set up or launched automatically with these Docker images. You will need to manually set up and start the Metron components that you require, so you should not expect to see telemetry being parsed, enriched, or indexed out of the box. If you are looking to try out, experiment with, or demo Metron capabilities on a single node, then the Vagrant-driven VM is what you need. Use this instead of Vagrant when:
- You want an environment that can be built and spun up quickly
- You need to frequently rebuild and restart services
- You only need to test, troubleshoot or develop against a subset of services
Metron Docker includes these images that have been customized for Metron:
- Kafka (with Zookeeper)
- HBase
- Storm
- Elasticsearch
- Kibana
- HDFS
Setup
Install Docker for Mac or Docker for Windows. The following versions have been tested:
- Docker version 1.12.0
- docker-machine version 0.8.0
- docker-compose version 1.8.0
Build Metron from the top level directory with:
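A typical Maven build looks like the following (flags are a common convention and may differ from your workflow):

```shell
# Build all Metron modules from the repository root, skipping tests for speed
mvn clean install -DskipTests
```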
You are welcome to use an existing Docker host, but we prefer one with more resources. You can create one with this script:
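The repository ships a helper script for this; as a sketch, it boils down to a `docker-machine create` call along these lines (memory and CPU values are illustrative):

```shell
# Create a VirtualBox-backed Docker host named "metron-machine"
# with more memory and CPUs than the docker-machine defaults
docker-machine create --driver virtualbox \
  --virtualbox-memory 8192 \
  --virtualbox-cpu-count 4 \
  metron-machine
```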
This will create a host called “metron-machine”. Anytime you want to run Docker commands against this host, make sure you run this first to set the Docker environment variables:
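With docker-machine, that is:

```shell
# Point the local Docker client at the metron-machine host
eval "$(docker-machine env metron-machine)"
```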
If you wish to use a local docker-engine install, please set an environment variable BROKER_IP_ADDR to the IP address of your host machine. This cannot be the loopback address.
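For example, substituting your own non-loopback address:

```shell
# Must be an address other containers can reach, not 127.0.0.1
export BROKER_IP_ADDR=192.168.1.20
```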
Usage
Navigate to the compose application root:
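Assuming a standard Metron source checkout, the compose root is:

```shell
# Path relative to the top of the Metron repository
cd metron-docker/compose
```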
The Metron Docker environment lifecycle is controlled by the docker-compose command. The service names can be found in the docker-compose.yml file. For example, to build and start the environment run this command:
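Building and starting everything in the background is the usual Compose invocation:

```shell
# Build images as needed and start all services detached
docker-compose up -d
```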
After all services have started, list the containers and ensure their status is ‘Up’:
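```shell
# The "State" column should read "Up" for every service
docker-compose ps
```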
Various services are exposed over HTTP on the Docker host. Get the host IP from the URL property:
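```shell
# The URL column shows the host's IP address, e.g. tcp://192.168.99.100:2376
docker-machine ls
```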
Then, assuming a host IP of 192.168.99.100, the UIs and APIs are available at:
- Storm - http://192.168.99.100:8080/
- HBase - http://192.168.99.100:16010/
- Elasticsearch - http://192.168.99.100:9200/_plugin/head/
- Kibana - http://192.168.99.100:5601/
- HDFS (Namenode) - http://192.168.99.100:50070/
The Storm logs can be useful when troubleshooting topologies. They can be found on the Storm container in /usr/share/apache-storm/logs.
When done using the machine, shut it down with:
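```shell
# Stop and remove the containers, then halt the Docker host VM
docker-compose down
docker-machine stop metron-machine
```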
Examples
Deploy a new parser class
After adding a new parser to metron-parsers-common, build Metron from the top level directory:
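As before, a typical Maven build from the repository root:

```shell
mvn clean install -DskipTests
```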
Then run these commands to redeploy the parsers to the Storm image:
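A rebuild-and-restart cycle along these lines picks up the new parser jar (the `storm` service name is assumed from this environment's compose file):

```shell
# Rebuild the Storm image so it includes the freshly built parser jar
docker-compose down
docker-compose build storm
docker-compose up -d
```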
Connect to a container
Suppose there is a problem with Kafka and the logs are needed for further investigation. Run this command to connect and explore the running Kafka container:
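Assuming the Kafka/Zookeeper service is named `kafkazk` in the compose file:

```shell
# Open an interactive shell inside the running Kafka/Zookeeper container
docker-compose exec kafkazk bash
```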
Create a sensor from sample data
A tool for producing test data in Kafka is included with the Kafka/Zookeeper image. It loops through lines in a test data file and outputs them to Kafka at the desired frequency. Create a test data file in ./kafkazk/data/ and rebuild the Kafka/Zookeeper image:
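Rebuilding just that image is another stop-build-start cycle (service name `kafkazk` assumed):

```shell
docker-compose down
docker-compose build kafkazk
docker-compose up -d
```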
This will deploy the test data file to the Kafka/Zookeeper container. Now that data can be streamed to a Kafka topic:
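The producer script name and arguments below are illustrative; check the scripts bundled in the Kafka/Zookeeper image for the exact invocation:

```shell
# Sketch: loop the test data file into the "mytopic" topic, one line per second
docker-compose exec kafkazk ./bin/produce-data.sh mydata mytopic 1
```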
The Kafka/Zookeeper image comes with sample Bro and Squid data:
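You can list the bundled sample files from inside the container (the data directory path is assumed to mirror `./kafkazk/data/`):

```shell
docker-compose exec kafkazk ls ./data
```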
Upload configs to Zookeeper
Parser configs and a global config configured for this Docker environment are included with the Kafka/Zookeeper image. Load them with:
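Metron's `zk_load_configs.sh` script handles this; the paths and environment variables below are assumptions about how the image is laid out:

```shell
# Push the bundled parser and global configs into Zookeeper
docker-compose exec kafkazk $METRON_HOME/bin/zk_load_configs.sh \
  -m PUSH -i $METRON_HOME/config/zookeeper -z $ZOOKEEPER
```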
Dump out the configs with:
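```shell
# Print the configs currently stored in Zookeeper (same path assumptions as above)
docker-compose exec kafkazk $METRON_HOME/bin/zk_load_configs.sh \
  -m DUMP -z $ZOOKEEPER
```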
Manage a topology
The Storm image comes with a script to easily start parser topologies:
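For example, starting the bro parser might look like this (the script is part of the Metron distribution; the `$ZOOKEEPER` and `$BROKER_LIST` variables are assumed to be set in the image):

```shell
docker-compose exec storm ./bin/start_parser_topology.sh \
  -s bro -z $ZOOKEEPER -k $BROKER_LIST
```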
The enrichment topology can be started with:
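```shell
# Script name taken from the Metron distribution; location in the image may vary
docker-compose exec storm ./bin/start_enrichment_topology.sh
```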
The indexing topology can be started with:
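```shell
# Starts the Elasticsearch indexing topology; script name from the Metron distribution
docker-compose exec storm ./bin/start_elasticsearch_topology.sh
```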
Topologies can be stopped using the Storm CLI. For example, stop the enrichment topology with:
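```shell
# Kill the running enrichment topology via the Storm CLI
docker-compose exec storm storm kill enrichment
```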
Run sensor data end to end
First ensure configs were uploaded as described in the previous example. Then start a sensor and leave it running:
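A sketch using the sample bro data (the producer script name and arguments are illustrative; see the scripts bundled in the kafkazk image):

```shell
# Stream the sample bro data into the "bro" topic, one message per second
docker-compose exec kafkazk ./bin/produce-data.sh bro bro 1
```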
Open a separate console session and verify the sensor is running by consuming a message from Kafka:
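The exact consumer flags depend on the Kafka version bundled in the image; a typical invocation:

```shell
docker-compose exec kafkazk ./bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 --topic bro
```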
A new message should be printed every second. Now kill the consumer and start the Bro parser topology:
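Using the parser start script from the Storm image as above:

```shell
docker-compose exec storm ./bin/start_parser_topology.sh \
  -s bro -z $ZOOKEEPER -k $BROKER_LIST
```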
Bro data should be flowing through the bro parser topology and into the Kafka enrichments topic. The enrichments topic should be created automatically:
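Listing the topics should now show `enrichments` (flags depend on the bundled Kafka version):

```shell
docker-compose exec kafkazk ./bin/kafka-topics.sh --list --zookeeper $ZOOKEEPER
```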
Verify parsed Bro data is in the Kafka enrichments topic:
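```shell
# Consumer flags depend on the bundled Kafka version
docker-compose exec kafkazk ./bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 --topic enrichments
```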
Now start the enrichment topology:
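```shell
# Script name from the Metron distribution; location in the image may vary
docker-compose exec storm ./bin/start_enrichment_topology.sh
```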
Parsed Bro data should be flowing through the enrichment topology and into the Kafka indexing topic. Verify enriched Bro data is in the Kafka indexing topic:
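```shell
# Consumer flags depend on the bundled Kafka version
docker-compose exec kafkazk ./bin/kafka-console-consumer.sh \
  --bootstrap-server kafka:9092 --topic indexing
```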
Now start the indexing topology:
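```shell
# Script name from the Metron distribution; location in the image may vary
docker-compose exec storm ./bin/start_elasticsearch_topology.sh
```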
Enriched Bro data should now be present in the Elasticsearch container:
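A quick check against the Elasticsearch HTTP API (substitute your Docker host IP; the index naming pattern is an assumption):

```shell
curl "http://192.168.99.100:9200/bro*/_search?pretty"
```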