Kafka Not Secured Source

Provided by: "Apache Software Foundation"

Support Level for this Kamelet is: "Preview"

Receive data from Kafka topics on an insecure broker.

Configuration Options

The following table summarizes the configuration options available for the kafka-not-secured-source Kamelet:

Property | Name | Description | Type | Default | Example

brokers * | Brokers | Comma-separated list of Kafka broker URLs | string | |
topic * | Topic Names | Comma-separated list of Kafka topic names | string | |
allowManualCommit | Allow Manual Commit | Whether to allow manual commits | boolean | false |
autoCommitEnable | Auto Commit Enable | If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer | boolean | true |
autoOffsetReset | Auto Offset Reset | What to do when there is no initial offset. One of: latest, earliest, none | string | "latest" |
consumerGroup | Consumer Group | A string that uniquely identifies the group of consumers to which this source belongs | string | | "my-group-id"
pollOnError | Poll On Error Behavior | What to do if Kafka throws an exception while polling for new messages. One of: DISCARD, ERROR_HANDLER, RECONNECT, RETRY, STOP | string | "ERROR_HANDLER" |

Fields marked with (*) are mandatory.
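
As an illustration, the optional consumer properties can be set next to the mandatory ones in the properties section of a KameletBinding. The broker address, topic name, offset behavior, and poll-on-error value below are placeholder choices, not defaults:

properties:
  brokers: "my-cluster-kafka-bootstrap:9092" # placeholder broker address
  topic: "orders" # placeholder topic name
  autoOffsetReset: "earliest" # start from the oldest available offset when none is stored
  consumerGroup: "my-group-id"
  pollOnError: "RECONNECT" # reconnect to the broker if polling fails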

Usage

This section summarizes how the kafka-not-secured-source can be used in various contexts.

Knative Source

The kafka-not-secured-source Kamelet can be used as a Knative source by binding it to a Knative object.

kafka-not-secured-source-binding.yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-not-secured-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-not-secured-source
    properties:
      brokers: "The Brokers"
      topic: "The Topic Names"
  sink:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel

Make sure you have Camel K installed in the Kubernetes cluster you’re connected to.

Save the kafka-not-secured-source-binding.yaml file to your local drive, then configure it according to your needs.

You can run the source using the following command:

kubectl apply -f kafka-not-secured-source-binding.yaml
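
After applying the file, you can optionally verify that the binding exists and that Camel K has generated the corresponding integration. These are standard kubectl queries against the Camel K custom resources (exact resource output may vary with your Camel K version):

kubectl get kameletbinding kafka-not-secured-source-binding
kubectl get integrations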

Dependencies

The Kamelet needs the following dependencies:

  • camel:kafka

  • camel:kamelet

Binding to Knative using the Kamel CLI:

The procedure described above can be simplified into a single execution of the kamel bind command:

kamel bind kafka-not-secured-source -p "source.brokers=The Brokers" -p "source.topic=The Topic Names" channel:mychannel
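
For example, assuming a broker reachable at my-cluster-kafka-bootstrap:9092 and a topic named orders (both placeholder values), the command becomes:

kamel bind kafka-not-secured-source -p "source.brokers=my-cluster-kafka-bootstrap:9092" -p "source.topic=orders" channel:mychannel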

This will create the KameletBinding under the hood and apply it to the current namespace in the cluster.

Kafka Source

The kafka-not-secured-source Kamelet can be used as a Kafka source by binding it to a Kafka topic.

kafka-not-secured-source-binding.yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: kafka-not-secured-source-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: kafka-not-secured-source
    properties:
      brokers: "The Brokers"
      topic: "The Topic Names"
  sink:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic

Ensure that you’ve installed Strimzi and created a topic named my-topic in the current namespace. Also make sure you have Camel K installed in the Kubernetes cluster you’re connected to.
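
If the topic does not exist yet, a minimal Strimzi KafkaTopic manifest along these lines can create it; the strimzi.io/cluster label must match the name of your Kafka cluster, and my-cluster below is a placeholder:

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster # replace with your Strimzi cluster name
spec:
  partitions: 1
  replicas: 1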

Save the kafka-not-secured-source-binding.yaml file to your local drive, then configure it according to your needs.

You can run the source using the following command:

kubectl apply -f kafka-not-secured-source-binding.yaml

Binding to Kafka using the Kamel CLI:

The procedure described above can be simplified into a single execution of the kamel bind command:

kamel bind kafka-not-secured-source -p "source.brokers=The Brokers" -p "source.topic=The Topic Names" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic

This will create the KameletBinding under the hood and apply it to the current namespace in the cluster.