It's worth getting familiar with the basic concepts that make up BrainRex, as they are used throughout the documentation. This knowledge will be helpful as you proceed, and is also cool to brag about amongst friends.
BrainRex would be junk if it couldn't ingest data. A "source" defines where BrainRex should pull data from, or how it should receive data pushed to it. A pipeline can have any number of sources, and as they ingest data they proceed to normalize it into [events](#events) (see next section). This sets the stage for consistent processing throughout the rest of the pipeline.
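As a concrete sketch, a source's normalization step can be pictured as a function from raw input to event records. Everything below (the `normalize` function, the event field names) is illustrative only, not BrainRex's actual API:

```python
def normalize(raw_line):
    # Hypothetical normalization: wrap a raw log line in a structured
    # event dict. The field names here are illustrative only.
    return {"type": "log", "message": raw_line.rstrip("\n")}

# A source ingests raw records and emits one event per record.
raw = ["GET /index.html 200\n", "POST /login 401\n"]
events = [normalize(line) for line in raw]
```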
A "transform" is a machine learning model that learns how to transform raw, unstructured data from a source into structured [events](#events).
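To make the input/output shape concrete, here is a stand-in transform in Python. A real BrainRex transform would be a learned model; this regex plays the same role of deriving structure from unstructured text, and all names here are hypothetical:

```python
import re

def extract_status(event):
    # Stand-in for a learned transform: derive a structured "status"
    # field from the unstructured message text.
    match = re.search(r"\b(\d{3})\b", event["message"])
    out = dict(event)  # emit a new event, leaving the input intact
    out["status"] = int(match.group(1)) if match else None
    return out

event = {"type": "log", "message": "POST /login 401"}
enriched = extract_status(event)
```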
A "sink" is a destination for events. Each sink's design and transmission method is dictated by the downstream service it is interacting with. For example, the [socket sink][docs.sinks.socket] will stream individual events, while the aws_s3 sink will buffer and flush data.
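The streaming-versus-buffering distinction can be sketched with two toy sink classes (hypothetical names; real sinks would write to a socket or to S3 rather than to in-memory lists):

```python
class StreamingSink:
    # Socket-style: forward each event the moment it arrives.
    def __init__(self):
        self.sent = []

    def write(self, event):
        self.sent.append(event)  # stand-in for an immediate network send


class BufferingSink:
    # aws_s3-style: accumulate events, then flush them in batches.
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.batches = []

    def write(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.batches.append(list(self.buffer))
            self.buffer.clear()


stream, batch = StreamingSink(), BufferingSink(batch_size=2)
for n in range(3):
    event = {"type": "log", "message": f"event {n}"}
    stream.write(event)
    batch.write(event)
```

After three writes, the streaming sink has sent all three events individually, while the buffering sink has flushed one batch of two and still holds the third.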
All items (both logs and metrics) passing through BrainRex are known as "events", which are explained in detail in the data model section.
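A minimal sketch of that uniformity, assuming dict-shaped events with a `type` discriminator (the field names are illustrative, not BrainRex's actual data model):

```python
# Both variants travel through the same pipeline as plain events.
log_event = {"type": "log", "message": "GET /index.html 200"}
metric_event = {"type": "metric", "name": "requests_total", "value": 1.0}

def describe(event):
    # Any pipeline stage can branch on the event type it receives.
    return f"{event['type']} event"

descriptions = [describe(e) for e in (log_event, metric_event)]
```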