Class KafkaProducer<K,V>
- java.lang.Object
-
- io.vertx.rxjava3.kafka.client.producer.KafkaProducer<K,V>
-
- All Implemented Interfaces:
io.vertx.lang.rx.RxDelegate, StreamBase, WriteStream<KafkaProducerRecord<K,V>>
public class KafkaProducer<K,V> extends Object implements io.vertx.lang.rx.RxDelegate, WriteStream<KafkaProducerRecord<K,V>>
Vert.x Kafka producer. The WriteStream.write(T) method provides global control over writing a record.
NOTE: This class has been automatically generated from the original non RX-ified interface using Vert.x codegen.
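As a quick orientation, here is a minimal sketch of creating a producer from a plain configuration map and sending a record through the RxJava3 API. The broker address, topic name, and serializer choices are assumptions for illustration.

```java
import io.vertx.rxjava3.core.Vertx;
import io.vertx.rxjava3.kafka.client.producer.KafkaProducer;
import io.vertx.rxjava3.kafka.client.producer.KafkaProducerRecord;

import java.util.HashMap;
import java.util.Map;

public class ProducerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Plain Kafka client configuration; the broker address is an assumption.
    Map<String, String> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    config.put("acks", "1");

    KafkaProducer<String, String> producer = KafkaProducer.create(vertx, config);

    // rxSend returns a lazy Single<RecordMetadata>; nothing is written until subscribe().
    KafkaProducerRecord<String, String> record =
        KafkaProducerRecord.create("my-topic", "my-key", "hello");

    producer.rxSend(record).subscribe(
        metadata -> System.out.println(
            "partition=" + metadata.getPartition() + " offset=" + metadata.getOffset()),
        Throwable::printStackTrace);
  }
}
```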
-
-
Field Summary
Fields
- static io.vertx.lang.rx.TypeArg<KafkaProducer> __TYPE_ARG
- io.vertx.lang.rx.TypeArg<K> __typeArg_0
- io.vertx.lang.rx.TypeArg<V> __typeArg_1
-
Constructor Summary
Constructors
- KafkaProducer(KafkaProducer delegate)
- KafkaProducer(Object delegate, io.vertx.lang.rx.TypeArg<K> typeArg_0, io.vertx.lang.rx.TypeArg<V> typeArg_1)
-
Method Summary
Methods
- Completable abortTransaction(): Aborts the ongoing transaction.
- Completable beginTransaction(): Starts a new Kafka transaction.
- Completable close(): Close the producer.
- Completable close(long timeout): Close the producer.
- Completable commitTransaction(): Commits the ongoing transaction.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config): Create a new KafkaProducer instance.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType): Create a new KafkaProducer instance.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer): Create a new KafkaProducer instance from a native Producer.
- static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer, KafkaClientOptions options): Create a new KafkaProducer instance from a native Producer.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options): Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options, Class<K> keyType, Class<V> valueType): Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config): Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType): Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
- KafkaProducer<K,V> drainHandler(Handler<Void> handler): Set a drain handler on the stream.
- Completable end(): Ends the stream.
- Completable end(KafkaProducerRecord<K,V> data): Same as WriteStream.end() but writes some data to the stream before ending.
- boolean equals(Object o)
- KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler): Set an exception handler on the write stream.
- Completable flush(): Invoking this method makes all buffered records immediately available to write.
- KafkaProducer getDelegate()
- int hashCode()
- Completable initTransactions(): Initializes the underlying Kafka transactional producer.
- static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg)
- static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)
- Single<List<PartitionInfo>> partitionsFor(String topic): Get the partition metadata for the given topic.
- Completable rxAbortTransaction(): Aborts the ongoing transaction.
- Completable rxBeginTransaction(): Starts a new Kafka transaction.
- Completable rxClose(): Close the producer.
- Completable rxClose(long timeout): Close the producer.
- Completable rxCommitTransaction(): Commits the ongoing transaction.
- Completable rxEnd(): Ends the stream.
- Completable rxEnd(KafkaProducerRecord<K,V> data): Same as WriteStream.end() but writes some data to the stream before ending.
- Completable rxFlush(): Invoking this method makes all buffered records immediately available to write.
- Completable rxInitTransactions(): Initializes the underlying Kafka transactional producer.
- Single<List<PartitionInfo>> rxPartitionsFor(String topic): Get the partition metadata for the given topic.
- Single<RecordMetadata> rxSend(KafkaProducerRecord<K,V> record): Asynchronously write a record to a topic.
- Completable rxWrite(KafkaProducerRecord<K,V> data): Write some data to the stream.
- Single<RecordMetadata> send(KafkaProducerRecord<K,V> record): Asynchronously write a record to a topic.
- KafkaProducer<K,V> setWriteQueueMaxSize(int i): Set the maximum size of the write queue to maxSize.
- WriteStreamObserver<KafkaProducerRecord<K,V>> toObserver()
- String toString()
- WriteStreamSubscriber<KafkaProducerRecord<K,V>> toSubscriber()
- Completable write(KafkaProducerRecord<K,V> data): Write some data to the stream.
- boolean writeQueueFull(): Returns true if there are more bytes in the write queue than the value set using setWriteQueueMaxSize(int).
-
-
-
Field Detail
-
__TYPE_ARG
public static final io.vertx.lang.rx.TypeArg<KafkaProducer> __TYPE_ARG
-
__typeArg_0
public final io.vertx.lang.rx.TypeArg<K> __typeArg_0
-
__typeArg_1
public final io.vertx.lang.rx.TypeArg<V> __typeArg_1
-
-
Constructor Detail
-
KafkaProducer
public KafkaProducer(KafkaProducer delegate)
-
-
Method Detail
-
getDelegate
public KafkaProducer getDelegate()
- Specified by: getDelegate in interface io.vertx.lang.rx.RxDelegate
- Specified by: getDelegate in interface StreamBase
- Specified by: getDelegate in interface WriteStream<K>
-
toObserver
public WriteStreamObserver<KafkaProducerRecord<K,V>> toObserver()
- Specified by: toObserver in interface WriteStream<K>
-
toSubscriber
public WriteStreamSubscriber<KafkaProducerRecord<K,V>> toSubscriber()
- Specified by: toSubscriber in interface WriteStream<K>
-
write
public Completable write(KafkaProducerRecord<K,V> data)
Write some data to the stream. The data is usually put on an internal write queue, and the write actually happens asynchronously. To avoid running out of memory by putting too much on the write queue, check the WriteStream.writeQueueFull() method before writing. This is done automatically if using a Pipe.
When the data is moved from the queue to the actual medium, the returned Completable will be completed with the write result, e.g. the future is succeeded when a server HTTP response buffer is written to the socket and failed if the remote client has closed the socket while the data was still pending for write.
- Specified by: write in interface WriteStream<K>
- Parameters: data - the data to write
- Returns: a future completed with the write result
-
rxWrite
public Completable rxWrite(KafkaProducerRecord<K,V> data)
Write some data to the stream. The data is usually put on an internal write queue, and the write actually happens asynchronously. To avoid running out of memory by putting too much on the write queue, check the WriteStream.writeQueueFull() method before writing. This is done automatically if using a Pipe.
When the data is moved from the queue to the actual medium, the returned Completable will be completed with the write result, e.g. the future is succeeded when a server HTTP response buffer is written to the socket and failed if the remote client has closed the socket while the data was still pending for write.
- Specified by: rxWrite in interface WriteStream<K>
- Parameters: data - the data to write
- Returns: a future completed with the write result
-
end
public Completable end()
Ends the stream. Once the stream has ended, it cannot be used any more.
- Specified by: end in interface WriteStream<K>
- Returns: a future completed with the result
-
rxEnd
public Completable rxEnd()
Ends the stream. Once the stream has ended, it cannot be used any more.
- Specified by: rxEnd in interface WriteStream<K>
- Returns: a future completed with the result
-
end
public Completable end(KafkaProducerRecord<K,V> data)
Same as WriteStream.end() but writes some data to the stream before ending.
- Specified by: end in interface WriteStream<K>
- Parameters: data - the data to write
- Returns: a future completed with the result
-
rxEnd
public Completable rxEnd(KafkaProducerRecord<K,V> data)
Same as WriteStream.end() but writes some data to the stream before ending.
- Specified by: rxEnd in interface WriteStream<K>
- Parameters: data - the data to write
- Returns: a future completed with the result
-
writeQueueFull
public boolean writeQueueFull()
This will return true if there are more bytes in the write queue than the value set using setWriteQueueMaxSize(int).
- Specified by: writeQueueFull in interface WriteStream<K>
- Returns: true if the write queue is full
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters: vertx - Vert.x instance to use; name - the producer name to identify it; config - Kafka producer configuration
- Returns: an instance of the KafkaProducer
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters: vertx - Vert.x instance to use; name - the producer name to identify it; options - Kafka producer options
- Returns: an instance of the KafkaProducer
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters: vertx - Vert.x instance to use; name - the producer name to identify it; config - Kafka producer configuration; keyType - class type for the key serialization; valueType - class type for the value serialization
- Returns: an instance of the KafkaProducer
-
createShared
public static <K,V> KafkaProducer<K,V> createShared(Vertx vertx, String name, KafkaClientOptions options, Class<K> keyType, Class<V> valueType)
Get or create a KafkaProducer instance which shares its stream with any other KafkaProducer created with the same name.
When close has been called for each shared producer the resources will be released. Calling end closes all shared producers.
- Parameters: vertx - Vert.x instance to use; name - the producer name to identify it; options - Kafka producer options; keyType - class type for the key serialization; valueType - class type for the value serialization
- Returns: an instance of the KafkaProducer
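A sketch of the sharing contract described above, assuming a vertx instance and a config map are already in scope: two createShared calls with the same name return handles on one underlying producer, and the native resources are released only once every shared handle has been closed.

```java
// Both handles share one underlying Kafka producer stream because the name matches.
KafkaProducer<String, String> p1 =
    KafkaProducer.createShared(vertx, "the-producer", config);
KafkaProducer<String, String> p2 =
    KafkaProducer.createShared(vertx, "the-producer", config);

p1.rxSend(KafkaProducerRecord.create("my-topic", "k", "v")).subscribe();

// The native producer is released only after close() completes for each handle;
// calling end() on any handle closes all shared producers.
p1.rxClose().andThen(p2.rxClose()).subscribe();
```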
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config)
Create a new KafkaProducer instance.
- Parameters: vertx - Vert.x instance to use; config - Kafka producer configuration
- Returns: an instance of the KafkaProducer
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, Map<String,String> config, Class<K> keyType, Class<V> valueType)
Create a new KafkaProducer instance.
- Parameters: vertx - Vert.x instance to use; config - Kafka producer configuration; keyType - class type for the key serialization; valueType - class type for the value serialization
- Returns: an instance of the KafkaProducer
-
initTransactions
public Completable initTransactions()
Initializes the underlying Kafka transactional producer. See initTransactions().
- Returns: a future notified with the result
-
rxInitTransactions
public Completable rxInitTransactions()
Initializes the underlying Kafka transactional producer. See initTransactions().
- Returns: a future notified with the result
-
beginTransaction
public Completable beginTransaction()
Starts a new Kafka transaction. See beginTransaction().
- Returns: a future notified with the result
-
rxBeginTransaction
public Completable rxBeginTransaction()
Starts a new Kafka transaction. See beginTransaction().
- Returns: a future notified with the result
-
commitTransaction
public Completable commitTransaction()
Commits the ongoing transaction. See commitTransaction().
- Returns: a future notified with the result
-
rxCommitTransaction
public Completable rxCommitTransaction()
Commits the ongoing transaction. See commitTransaction().
- Returns: a future notified with the result
-
abortTransaction
public Completable abortTransaction()
Aborts the ongoing transaction. See abortTransaction().
- Returns: a future notified with the result
-
rxAbortTransaction
public Completable rxAbortTransaction()
Aborts the ongoing transaction. See abortTransaction().
- Returns: a future notified with the result
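The transaction methods compose naturally as a Completable chain. A sketch, assuming producer was created with a transactional.id in its configuration and record is a KafkaProducerRecord already in scope (both names are assumptions): the transaction is aborted on any failure, then the error is re-raised.

```java
import io.reactivex.rxjava3.core.Completable;

Completable txFlow = producer.rxInitTransactions()
    .andThen(producer.rxBeginTransaction())
    .andThen(producer.rxSend(record).ignoreElement())
    .andThen(producer.rxCommitTransaction())
    // Roll the transaction back on any failure, then re-raise the error.
    .onErrorResumeNext(err ->
        producer.rxAbortTransaction().andThen(Completable.error(err)));

txFlow.subscribe(
    () -> System.out.println("committed"),
    Throwable::printStackTrace);
```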
-
exceptionHandler
public KafkaProducer<K,V> exceptionHandler(Handler<Throwable> handler)
Description copied from interface: WriteStream
Set an exception handler on the write stream.
- Specified by: exceptionHandler in interface StreamBase
- Specified by: exceptionHandler in interface WriteStream<K>
- Parameters: handler - the exception handler
- Returns: a reference to this, so the API can be used fluently
-
setWriteQueueMaxSize
public KafkaProducer<K,V> setWriteQueueMaxSize(int i)
Description copied from interface: WriteStream
Set the maximum size of the write queue to maxSize. You will still be able to write to the stream even if there are more than maxSize items in the write queue. This is used as an indicator by classes such as Pipe to provide flow control. The value is defined by the implementation of the stream, e.g. in bytes for a NetSocket, etc.
- Specified by: setWriteQueueMaxSize in interface WriteStream<K>
- Parameters: i - the max size of the write stream
- Returns: a reference to this, so the API can be used fluently
-
drainHandler
public KafkaProducer<K,V> drainHandler(Handler<Void> handler)
Description copied from interface: WriteStream
Set a drain handler on the stream. If the write queue is full, then the handler will be called when the write queue is ready to accept buffers again. See Pipe for an example of this being used.
The stream implementation defines when the drain handler is called, for example it could be when the queue size has been reduced to maxSize / 2.
- Specified by: drainHandler in interface WriteStream<K>
- Parameters: handler - the handler
- Returns: a reference to this, so the API can be used fluently
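setWriteQueueMaxSize, writeQueueFull and drainHandler combine into the usual WriteStream flow-control loop. A sketch, assuming producer is already in scope; note that write(...) starts the operation as soon as it is called, while rxWrite(...) defers it until subscription.

```java
producer.setWriteQueueMaxSize(1000);

for (int i = 0; i < 10_000; i++) {
  if (producer.writeQueueFull()) {
    final int resumeAt = i;
    // Resume once the queue has drained (typically once it has dropped to maxSize / 2).
    producer.drainHandler(v -> System.out.println("resume writing at " + resumeAt));
    break;
  }
  producer.write(KafkaProducerRecord.create("my-topic", "k" + i, "v" + i)).subscribe();
}
```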
-
send
public Single<RecordMetadata> send(KafkaProducerRecord<K,V> record)
Asynchronously write a record to a topic.
- Parameters: record - record to write
- Returns: a Future completed with the record metadata
-
rxSend
public Single<RecordMetadata> rxSend(KafkaProducerRecord<K,V> record)
Asynchronously write a record to a topic.
- Parameters: record - record to write
- Returns: a Future completed with the record metadata
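As is conventional for Vert.x rx-ified bindings, the difference between send and rxSend is subscription timing: send starts the write immediately and returns the in-flight Single, while rxSend is deferred, so it can be composed and retried before anything is written. A sketch, assuming producer and record are in scope:

```java
// Deferred: the record is only written when subscribe() is called,
// so operators like retry() re-execute the whole send.
producer.rxSend(record)
    .retry(3)
    .subscribe(md -> System.out.println("offset=" + md.getOffset()));

// Immediate: the write is already in flight; subscribing just observes the result.
producer.send(record)
    .subscribe(md -> System.out.println("offset=" + md.getOffset()));
```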
-
partitionsFor
public Single<List<PartitionInfo>> partitionsFor(String topic)
Get the partition metadata for the given topic.
- Parameters: topic - the topic for which to get partition info
- Returns: a future notified with the result
-
rxPartitionsFor
public Single<List<PartitionInfo>> rxPartitionsFor(String topic)
Get the partition metadata for the given topic.
- Parameters: topic - the topic for which to get partition info
- Returns: a future notified with the result
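A sketch of querying partition metadata with the deferred variant, assuming producer is in scope; PartitionInfo is the Vert.x Kafka client metadata type.

```java
import io.vertx.kafka.client.common.PartitionInfo;

producer.rxPartitionsFor("my-topic").subscribe(
    partitions -> {
      for (PartitionInfo info : partitions) {
        System.out.println(info.getTopic() + " partition " + info.getPartition());
      }
    },
    Throwable::printStackTrace);
```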
-
flush
public Completable flush()
Invoking this method makes all buffered records immediately available to write.
- Returns: a future notified with the result
-
rxFlush
public Completable rxFlush()
Invoking this method makes all buffered records immediately available to write.
- Returns: a future notified with the result
-
close
public Completable close()
Close the producer.
- Returns: a Future completed with the operation result
-
rxClose
public Completable rxClose()
Close the producer.
- Returns: a Future completed with the operation result
-
close
public Completable close(long timeout)
Close the producer.
- Parameters: timeout -
- Returns: a future notified with the result
-
rxClose
public Completable rxClose(long timeout)
Close the producer.
- Parameters: timeout -
- Returns: a future notified with the result
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer)
Create a new KafkaProducer instance from a native Producer.
- Parameters: vertx - Vert.x instance to use; producer - the Kafka producer to wrap
- Returns: an instance of the KafkaProducer
-
create
public static <K,V> KafkaProducer<K,V> create(Vertx vertx, org.apache.kafka.clients.producer.Producer<K,V> producer, KafkaClientOptions options)
Create a new KafkaProducer instance from a native Producer.
- Parameters: vertx - Vert.x instance to use; producer - the Kafka producer to wrap; options - options used only for tracing settings
- Returns: an instance of the KafkaProducer
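Wrapping an existing Apache Kafka Producer lets code that already manages the native client use the Rx API on top of it. A sketch, with an assumed broker address:

```java
import io.vertx.rxjava3.core.Vertx;
import io.vertx.rxjava3.kafka.client.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

Vertx vertx = Vertx.vertx();

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");  // assumed address

// The native producer is created and owned by the caller...
org.apache.kafka.clients.producer.KafkaProducer<String, String> nativeProducer =
    new org.apache.kafka.clients.producer.KafkaProducer<>(
        props, new StringSerializer(), new StringSerializer());

// ...and wrapped so it can be driven through the Vert.x Rx API.
KafkaProducer<String, String> wrapped = KafkaProducer.create(vertx, nativeProducer);
```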
-
newInstance
public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg)
-
newInstance
public static <K,V> KafkaProducer<K,V> newInstance(KafkaProducer arg, io.vertx.lang.rx.TypeArg<K> __typeArg_K, io.vertx.lang.rx.TypeArg<V> __typeArg_V)
-
-