Study on AI CSI Compression
Mar 2023, Ziyang Guo (Huawei)


Presentation Transcript

1. Study on AI CSI Compression
Date: 2023-03
Authors: Ziyang Guo (Huawei)

2. Abstract
In this contribution, we:
- review some existing works on AI CSI compression,
- introduce a new vector quantization variational autoencoder (VQ-VAE) method for CSI compression,
- discuss its performance and possible future work.

3. Background
The AP initiates the sounding sequence by transmitting the NDPA frame followed by an NDP, which is used for the generation of the V matrix at the STA. The STA applies Givens rotation to the V matrix and feeds back the angles in the beamforming report frame.
Total feedback overhead (KBytes) for Ntx = Nrx = Nss:

Ntx=Nrx=Nss | BW=20MHz | BW=40MHz | BW=80MHz | BW=160MHz | BW=320MHz
2           | 0.12     | 0.24     | 0.50     | 1.00      | 1.99
4           | 0.73     | 1.45     | 2.99     | 5.98      | 11.95
8           | 3.39     | 6.78     | 13.94    | 27.89     | 55.78
16          | 14.52    | 29.04    | 59.76    | 119.52    | 239.04

Larger bandwidth and a larger number of antennas lead to significantly increased sounding feedback overhead, which increases latency and limits the throughput gain. Visualization of the precoding matrix after FFT shows its sparsity and compressibility (figure: 20 MHz, 8x2). A rough overhead estimate is sketched below.
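As a rough, hedged illustration of where the numbers in this table come from: the sketch below counts the Givens-rotation angles of an Nr x Nc V matrix and multiplies by an assumed per-angle bit width and subcarrier count. The angle count follows the standard Givens decomposition; the bit widths (b_phi=9, b_psi=7) and the 250-subcarrier example are assumptions, so the result only approximates the table.

```python
# Rough estimate of compressed-beamforming feedback overhead.
# Angle count follows the Givens decomposition of an Nr x Nc V matrix;
# the bit widths and subcarrier count are assumptions, not slide values.

def num_givens_angles(nr: int, nc: int) -> int:
    """Number of (phi, psi) angles needed to represent an Nr x Nc V matrix."""
    m = min(nc, nr - 1)
    return 2 * sum(nr - i for i in range(1, m + 1))

def feedback_bits(nr: int, nc: int, n_subcarriers: int,
                  b_phi: int = 9, b_psi: int = 7) -> int:
    """Total beamforming-report angle payload in bits."""
    n_angles = num_givens_angles(nr, nc)
    # Half of the angles are phi, half are psi.
    bits_per_subcarrier = (n_angles // 2) * (b_phi + b_psi)
    return n_subcarriers * bits_per_subcarrier

if __name__ == "__main__":
    # Example: 8x8 V matrix, 80 MHz, Ng=4 (assumed ~250 feedback subcarriers).
    bits = feedback_bits(nr=8, nc=8, n_subcarriers=250)
    print(f"{bits} bits = {bits / 8 / 1e3:.2f} KBytes")  # roughly the 8x8, 80 MHz entry
```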

4. Existing Work on AI CSI Compression
ML solutions (no neural network):
- [1][2] adopted a traditional machine learning algorithm, i.e., K-means, to cluster the angle vectors after Givens rotation.
- Beamformer and beamformee need to exchange and store the centroids.
- Only the centroid index is transmitted during inference.
- 2 dB PER loss, up to 50% goodput improvement.
AI solutions (using neural networks):
- [3] adopted two autoencoders to compress the two types of angles after Givens rotation separately.
- Beamformer and beamformee need to exchange and store the neural network models.
- Only the encoder output is transmitted during inference.
- Up to 70% overhead reduction and 60% throughput gain for an 11ac system.
A sketch of the K-means-based feedback idea follows this list.
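A purely illustrative sketch of the K-means idea in [1][2], not the published iFOR implementation: both sides share centroids learned offline and only a centroid index is fed back. The dataset shape, angle dimension, and codebook size below are placeholders.

```python
# Illustrative K-means feedback: cluster Givens-angle vectors offline,
# share the centroids, then feed back only a centroid index at run time.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
angle_dim = 56                      # angles per feedback unit (assumed)
train_angles = rng.uniform(0, 2 * np.pi, size=(10_000, angle_dim))

# Offline: learn centroids and distribute them to beamformer and beamformee.
kmeans = KMeans(n_clusters=256, n_init=10, random_state=0).fit(train_angles)

# Online (beamformee): quantize a fresh angle vector to a centroid index.
new_angles = rng.uniform(0, 2 * np.pi, size=(1, angle_dim))
index = int(kmeans.predict(new_angles)[0])     # 8 bits instead of angle_dim angles

# Online (beamformer): reconstruct the angles from the shared codebook.
reconstructed = kmeans.cluster_centers_[index]
```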

5. Our Study on AI CSI Compression
Vector quantization variational autoencoder (VQ-VAE) [4] is adopted for CSI compression:
- It consists of an encoder, a codebook, and a decoder, and it learns how to compress and quantize automatically from the data.
- A convolutional neural network (CNN) or transformer could be used for both the encoder and the decoder.
- The input of the NN could be the V matrix or the angles after Givens rotation.
- Beamformer and beamformee need to exchange and store the codebook and half of the NN model.
- Only the codeword index is transmitted during inference.
A minimal sketch of this structure is given below.
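A minimal sketch of such a VQ-VAE, assuming PyTorch. The layer sizes, codebook size, and the simple MLP encoder/decoder are placeholders (the slide mentions CNN or transformer encoders and V-matrix or Givens-angle inputs), but the encoder/codebook/decoder split, nearest-codeword quantization, and index-only feedback follow the structure described above.

```python
# Minimal VQ-VAE sketch for CSI feedback (placeholder sizes and MLP layers).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CsiVQVAE(nn.Module):
    def __init__(self, input_dim=500, latent_dim=32, num_codewords=256, beta=0.25):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))
        self.codebook = nn.Embedding(num_codewords, latent_dim)
        self.beta = beta

    def quantize(self, z):
        # Nearest codeword by Euclidean distance; only its index is fed back.
        dist = torch.cdist(z, self.codebook.weight)      # (B, num_codewords)
        idx = dist.argmin(dim=1)
        zq = self.codebook(idx)
        return zq, idx

    def forward(self, x):
        z = self.encoder(x)
        zq, idx = self.quantize(z)
        # Straight-through estimator so gradients flow back to the encoder.
        zq_st = z + (zq - z).detach()
        x_hat = self.decoder(zq_st)
        recon = F.mse_loss(x_hat, x)
        codebook_loss = F.mse_loss(zq, z.detach())
        commit_loss = F.mse_loss(z, zq.detach())
        loss = recon + codebook_loss + self.beta * commit_loss
        return x_hat, idx, loss

# Beamformee runs encoder + quantize and transmits idx (log2(256) = 8 bits per
# latent vector); beamformer looks idx up in the shared codebook and decodes.
```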

6. Performance Evaluation
Simulation setup:
- Training data are generated under SU-MIMO, channel D NLOS, BW=80MHz, Ntx=8, Nrx=2, Nss=2, Ng=4.
- TNDPA=28us, TNDP=112us, TSIFS=16us, Tpreamble=64us, MCS=1 for the BF report, MCS=7 for data, payload length=1000 Bytes.
Comparison baseline: the current methods in the standard, Ng=4 (250 subcarriers) and Ng=16 (64 subcarriers).
Performance metrics:
- Goodput: GP = payload bits delivered divided by the duration of the whole sounding and data exchange (NDPA, NDP, BF report, Data, ACK, plus SIFS intervals).
- Compression ratio: Rc = baseline feedback overhead / AI feedback overhead.
- SNR-PER curve: target PER is 10^-2.
(Frame exchange: NDPA, SIFS, NDP, SIFS, BF report, SIFS, Data, SIFS, ACK.)
A parameterized sketch of these metrics follows.
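A sketch of the two metrics, assuming the frame-exchange structure shown above. The timing constants come from the slide; the PHY rates for the MCS1 BF report and the MCS7 data frame and the ACK duration are placeholder assumptions, so this reproduces the structure of the goodput metric rather than the exact numbers on the next slide.

```python
# Goodput and compression-ratio metrics for the exchange
# NDPA - SIFS - NDP - SIFS - BF report - SIFS - Data - SIFS - ACK.
# Timing constants are from the slide; PHY rates and ACK duration are assumed.

T_NDPA, T_NDP, T_SIFS, T_PREAMBLE = 28e-6, 112e-6, 16e-6, 64e-6
PAYLOAD_BITS = 1000 * 8

def frame_time(bits: float, rate_mbps: float) -> float:
    """Preamble plus payload transmission time for one PPDU."""
    return T_PREAMBLE + bits / (rate_mbps * 1e6)

def goodput_mbps(bf_report_bits: float, rate_bf_mbps: float,
                 rate_data_mbps: float, t_ack: float = 64e-6) -> float:
    total = (T_NDPA + T_NDP
             + frame_time(bf_report_bits, rate_bf_mbps)   # BF report at MCS1 rate
             + frame_time(PAYLOAD_BITS, rate_data_mbps)   # data at MCS7 rate
             + t_ack + 4 * T_SIFS)
    return PAYLOAD_BITS / total / 1e6

def compression_ratio(baseline_bits: float, ai_bits: float) -> float:
    return baseline_bits / ai_bits

# Example: Rc of the VQ-VAE feedback (2560 bits) against the Ng=4 baseline.
print(compression_ratio(32500, 2560))   # ~12.7, matching the next slide
```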

7. Performance Evaluation
Baseline feedback overhead: 32500 bits (Ng=4), 8320 bits (Ng=16). Baseline goodput: 5.07 Mbps (Ng=4), 10.77 Mbps (Ng=16).

VQVAE-1: overhead 2560 bits
- Rc: 12.70 vs Ng=4, 3.25 vs Ng=16
- Loss @ 0.01 PER: 0.16 dB vs Ng=4, 0 dB vs Ng=16
- Goodput: 14.70 Mbps
- Goodput gain: 189.64% vs Ng=4, 36.48% vs Ng=16

VQVAE-2: overhead 1280 bits
- Rc: 25.39 vs Ng=4, 6.50 vs Ng=16
- Loss @ 0.01 PER: 0.5 dB vs Ng=4, 0.4 dB vs Ng=16
- Goodput: 16.00 Mbps
- Goodput gain: 215.20% vs Ng=4, 48.53% vs Ng=16

A consistency check of these numbers follows.
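A quick arithmetic check, using only the numbers in this table, that the compression ratios and goodput gains are self-consistent; small differences of a few tenths of a percent in the gains come from the rounding of the goodput values on the slide.

```python
# Check that the reported Rc and goodput gains follow from the raw table values.
rows = {
    "VQVAE-1": dict(overhead=2560, gp=14.70),
    "VQVAE-2": dict(overhead=1280, gp=16.00),
}
baselines = {
    "Ng=4": dict(overhead=32500, gp=5.07),
    "Ng=16": dict(overhead=8320, gp=10.77),
}

for name, r in rows.items():
    for b_name, b in baselines.items():
        rc = b["overhead"] / r["overhead"]
        gain = 100 * (r["gp"] - b["gp"]) / b["gp"]
        print(f"{name} vs {b_name}: Rc = {rc:.2f}, GP gain = {gain:.2f}%")
```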

8. Further Study
Improve the goodput and reduce the feedback overhead:
- Different neural network architectures
- Reduce codebook size and dimension
More complex scenarios:
- More simulations under different configurations
- MU-MIMO scenarios
Increase model generalization:
- One neural network that can adapt to different channel models
- One neural network that can adapt to different bandwidths and numbers of antennas

9. Summary
In this contribution, we:
- reviewed the existing works on AI CSI compression,
- introduced a new VQ-VAE CSI compression scheme,
- showed its performance gain,
- and discussed possible future work to further improve the goodput and reduce the feedback overhead.

10. References
[1] M. Deshmukh, Z. Lin, H. Lou, M. Kamel, R. Yang, and I. Güvenç, "Intelligent Feedback Overhead Reduction (iFOR) in Wi-Fi 7 and Beyond," in Proceedings of 2022 IEEE VTC-Spring.
[2] 11-22-1563-02-aiml-ai-ml-use-case.
[3] P. K. Sangdeh, H. Pirayesh, A. Mobiny, and H. Zeng, "LB-SciFi: Online Learning-Based Channel Feedback for MU-MIMO in Wireless LANs," in Proceedings of 2020 IEEE 28th ICNP.
[4] A. van den Oord and O. Vinyals, "Neural Discrete Representation Learning," Advances in Neural Information Processing Systems, 2017.