Class | Description
---|---
`org.apache.kafka.streams.kstream.KStreamBuilder` | Use `StreamsBuilder` instead (see the sketch below).
`org.apache.kafka.streams.processor.TopologyBuilder` | Use `Topology` instead.
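A minimal migration sketch: `StreamsBuilder` replaces `KStreamBuilder` for defining a DSL topology, and its `build()` method hands back a `Topology` where `TopologyBuilder` was used before. The topic names, application id, and bootstrap server below are hypothetical placeholders.

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class BuilderMigration {
    public static void main(final String[] args) {
        // StreamsBuilder replaces the deprecated KStreamBuilder.
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> input = builder.stream("input-topic"); // hypothetical topic
        input.to("output-topic");                                            // hypothetical topic

        // build() returns a Topology, which replaces the deprecated TopologyBuilder.
        final Topology topology = builder.build();

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "builder-migration");  // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // hypothetical

        final KafkaStreams streams = new KafkaStreams(topology, props);
        streams.start();
    }
}
```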
Exception | Description
---|---
`org.apache.kafka.common.errors.GroupCoordinatorNotAvailableException` | As of Kafka 0.11, this has been replaced by `CoordinatorNotAvailableException` (see the sketch below).
`org.apache.kafka.common.errors.GroupLoadInProgressException` | As of Kafka 0.11, this has been replaced by `CoordinatorLoadInProgressException`.
`org.apache.kafka.common.errors.NotCoordinatorForGroupException` | As of Kafka 0.11, this has been replaced by `NotCoordinatorException`.
`org.apache.kafka.streams.errors.TopologyBuilderException` | Use `Topology` instead of `TopologyBuilder`.
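Where client code caught one of the renamed exception types, the catch clause simply names the replacement. A sketch under that assumption; the handler and the simulated failure below are hypothetical illustrations, not part of any Kafka API.

```java
import org.apache.kafka.common.errors.CoordinatorNotAvailableException;

public class CoordinatorErrorHandling {

    // Hypothetical helper: code that previously caught the deprecated
    // GroupCoordinatorNotAvailableException now catches the replacement type.
    static void runWithRetryLogging(final Runnable operation) {
        try {
            operation.run();
        } catch (final CoordinatorNotAvailableException e) {
            // Coordinator errors are retriable: back off and try again later.
            System.out.println("Coordinator not available, retry later: " + e.getMessage());
        }
    }

    public static void main(final String[] args) {
        // Simulated failure, purely for illustration.
        runWithRetryLogging(() -> {
            throw new CoordinatorNotAvailableException("simulated coordinator lookup failure");
        });
    }
}
```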
Field | Description
---|---
`org.apache.kafka.streams.StreamsConfig.KEY_SERDE_CLASS_CONFIG` | Use `StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG` instead (see the sketch below).
`org.apache.kafka.streams.StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG` | Use `StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG` instead.
`org.apache.kafka.streams.StreamsConfig.VALUE_SERDE_CLASS_CONFIG` | Use `StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG` instead.
`org.apache.kafka.streams.StreamsConfig.ZOOKEEPER_CONNECT_CONFIG` | Kafka Streams does not use ZooKeeper anymore, and this parameter will be ignored.
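A configuration sketch using the replacement constants; the application id and bootstrap server are hypothetical placeholders, and the serdes and timestamp extractor are just example choices.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

import java.util.Properties;

public class ConfigMigration {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "config-migration");   // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // hypothetical

        // Replacements for the deprecated KEY_SERDE_CLASS_CONFIG,
        // VALUE_SERDE_CLASS_CONFIG, and TIMESTAMP_EXTRACTOR_CLASS_CONFIG:
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
                WallclockTimestampExtractor.class);

        // ZOOKEEPER_CONNECT_CONFIG has no replacement: Kafka Streams no longer
        // talks to ZooKeeper, so the setting is simply dropped.
    }
}
```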
Method | Description
---|---
`org.apache.kafka.clients.consumer.ConsumerRecord.checksum()` | As of Kafka 0.11.0. Because of the potential for message format conversion on the broker, the checksum returned by the broker may not match what was computed by the producer. It is therefore unsafe to depend on this checksum for end-to-end delivery guarantees. Additionally, message format v2 does not include a record-level checksum (for performance, the record checksum was replaced with a batch checksum). To maintain compatibility, a partial checksum computed from the record timestamp, serialized key size, and serialized value size is returned instead, but this should not be depended on for end-to-end reliability.
`org.apache.kafka.clients.producer.RecordMetadata.checksum()` | As of Kafka 0.11.0. Because of the potential for message format conversion on the broker, the computed checksum may not match what was stored on the broker, or what will be returned to the consumer. It is therefore unsafe to depend on this checksum for end-to-end delivery guarantees. Additionally, message format v2 does not include a record-level checksum (for performance, the record checksum was replaced with a batch checksum). To maintain compatibility, a partial checksum computed from the record timestamp, serialized key size, and serialized value size is returned instead, but this should not be depended on for end-to-end reliability.
`org.apache.kafka.streams.kstream.KTable.foreach(ForeachAction<? super K, ? super V>)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.foreach(ForeachAction)` on the result (see the Interactive Queries sketch below).
`org.apache.kafka.streams.StreamsConfig.keySerde()` | Use `StreamsConfig.defaultKeySerde()` instead.
`org.apache.kafka.connect.sink.SinkTask.onPartitionsAssigned(Collection<TopicPartition>)` | Use `SinkTask.open(Collection)` for partition initialization (see the `SinkTask` sketch below).
`org.apache.kafka.connect.sink.SinkTask.onPartitionsRevoked(Collection<TopicPartition>)` | Use `SinkTask.close(Collection)` instead for partition cleanup.
`org.apache.kafka.clients.consumer.NoOffsetForPartitionException.partition()` | Use `NoOffsetForPartitionException.partitions()` instead.
`org.apache.kafka.streams.kstream.KTable.print()` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.print()` on the result.
`org.apache.kafka.streams.kstream.KTable.print(Serde<K>, Serde<V>)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.print(Serde, Serde)` on the result.
`org.apache.kafka.streams.kstream.KTable.print(Serde<K>, Serde<V>, String)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.print(Serde, Serde, String)` on the result.
`org.apache.kafka.streams.kstream.KTable.print(String)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.print(String)` on the result.
`org.apache.kafka.streams.kstream.Transformer.punctuate(long)` | Use the `Punctuator` functional interface instead (see the `Punctuator` sketch below).
`org.apache.kafka.streams.kstream.ValueTransformer.punctuate(long)` | Use the `Punctuator` functional interface instead.
`org.apache.kafka.streams.processor.Processor.punctuate(long)` | Use the `Punctuator` functional interface instead.
`org.apache.kafka.streams.processor.ProcessorContext.schedule(long)` | Use `ProcessorContext.schedule(long, PunctuationType, Punctuator)` instead.
`org.apache.kafka.streams.StreamsConfig.valueSerde()` | Use `StreamsConfig.defaultValueSerde()` instead.
`org.apache.kafka.streams.kstream.KTable.writeAsText(String)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.writeAsText(String)` on the result.
`org.apache.kafka.streams.kstream.KTable.writeAsText(String, Serde<K>, Serde<V>)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.writeAsText(String, Serde, Serde)` on the result.
`org.apache.kafka.streams.kstream.KTable.writeAsText(String, String)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.writeAsText(String, String)` on the result.
`org.apache.kafka.streams.kstream.KTable.writeAsText(String, String, Serde<K>, Serde<V>)` | Use the Interactive Queries APIs (e.g., `KafkaStreams.store(String, QueryableStoreType)` followed by `ReadOnlyKeyValueStore.all()`) to iterate over the keys of a `KTable`. Alternatively, convert to a `KStream` using `toStream()` and then use `KStream.writeAsText(String, String, Serde, Serde)` on the result.
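For the deprecated `KTable` methods (`foreach`, `print`, `writeAsText`), a sketch of both suggested replacements. The topic `counts-topic`, store name `counts-store`, and serde choices are hypothetical; real code should also retry `store()` until the state store is queryable, since it can fail while the application is still initializing.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

import java.util.Properties;

public class KTableInspection {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ktable-inspection");  // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // hypothetical
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());

        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, Long> table = builder.table("counts-topic",
                Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));

        // Option 1: convert to a KStream and use the KStream variants of
        // foreach/print/writeAsText on the table's changelog stream.
        table.toStream().foreach((key, value) -> System.out.println(key + " -> " + value));

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Option 2: Interactive Queries over the table's state store.
        final ReadOnlyKeyValueStore<String, Long> store =
                streams.store("counts-store", QueryableStoreTypes.keyValueStore());
        try (KeyValueIterator<String, Long> it = store.all()) {
            while (it.hasNext()) {
                final KeyValue<String, Long> entry = it.next();
                System.out.println(entry.key + " -> " + entry.value);
            }
        }

        streams.close();
    }
}
```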
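For the deprecated `punctuate(long)` callbacks and `ProcessorContext.schedule(long)`, a sketch of registering a `Punctuator` instead. The processor itself and the 60-second interval are hypothetical; extending `AbstractProcessor` avoids having to implement the deprecated method.

```java
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;

public class PeriodicProcessor extends AbstractProcessor<String, String> {

    @Override
    public void init(final ProcessorContext context) {
        super.init(context);
        // Replacement for the deprecated schedule(long) + punctuate(long):
        // register a Punctuator callback with an explicit PunctuationType.
        context.schedule(60_000L, PunctuationType.WALL_CLOCK_TIME, timestamp -> {
            // Periodic work previously done in punctuate(long) goes here.
            System.out.println("punctuate at " + timestamp);
        });
    }

    @Override
    public void process(final String key, final String value) {
        // Per-record processing (sketch).
    }
}
```

`PunctuationType.WALL_CLOCK_TIME` fires on elapsed wall-clock time; `STREAM_TIME` preserves the old behavior of advancing with record timestamps.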
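For `SinkTask`, partition lifecycle handling moves from `onPartitionsAssigned`/`onPartitionsRevoked` to `open`/`close`. A skeletal sketch; the class name and version string are hypothetical.

```java
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

import java.util.Collection;
import java.util.Map;

public class ExampleSinkTask extends SinkTask {

    @Override
    public String version() {
        return "0.0.1"; // hypothetical connector version
    }

    @Override
    public void start(final Map<String, String> props) {
        // Read connector configuration (sketch).
    }

    @Override
    public void open(final Collection<TopicPartition> partitions) {
        // Replaces onPartitionsAssigned(): allocate per-partition writers here.
    }

    @Override
    public void put(final Collection<SinkRecord> records) {
        // Deliver records to the external system (sketch).
    }

    @Override
    public void close(final Collection<TopicPartition> partitions) {
        // Replaces onPartitionsRevoked(): flush and release per-partition resources.
    }

    @Override
    public void stop() {
        // Release global resources.
    }
}
```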