Realizing Next Generation AI-Enabled Wireless XR Network Systems and Integrated Societal Applications

Research Statement Outline

My core ECE/CS research program comprises two synergistic thrusts: (i) NextG mobile virtual and augmented reality (XR) network systems [1-7], and (ii) domain-aware fast reinforcement and deep learning [8-10]. Within this context, I explore the communication, computation, representation, and learning challenges and fundamental trade-offs of immersive media and machine learning models. The aim is to realize future AI-enabled integrated wireless network systems and transformative XR/IoT applications that advance society while delivering high performance and intelligent self-coordination. These advances can also help realize the highly anticipated metaverse, which will interconnect virtual and physical spaces in unique and transformative ways and enable novel experiences and applications. Likewise, our advances in domain-aware learning and neural computation integration can notably enhance the efficiency and scope of NextG IoT and multimedia systems. My studies integrate other emerging technologies: millimeter-wave and free-space optical wireless networking, 5G mobile edge computing, UAV-IoT, and multi-radio access technology (RAT)-enabled scalable 360° video streaming.

Laboratory: I lead the Laboratory for Next Generation AI-Enabled Wireless XR Network Systems and Integrated Societal Applications at NJIT. The lab comprises undergraduate students, graduate students, postdocs, and developers. It features state-of-the-art equipment: immersive 3D displays, visual/range IoT sensors, wireless XR headsets, millimeter-wave and free-space optical transceivers, IoT drones, high-resolution 360° cameras, powerful GPU-equipped servers, and 5G SDR boards. We welcome interns and visitors to the lab on both short-term and longer-term bases.

Research funding: The generous support of the NSF, NIH, AFOSR, Adobe, NVIDIA, Microsoft, and Tencent is gratefully acknowledged.

Link to past projects.


References

  1. J. Chakareski and M. Khan, "Live 360° video streaming to heterogeneous clients in 5G networks," IEEE Trans. Multimedia, Mar. 2024, accepted.
  2. S. Srinivasan, S. Shippey, E. Aryafar, and J. Chakareski, "FBDT: Forward and backward data transmission across multiple RATs for high quality mobile VR 360° video streaming," in Proc. ACM MMSys, June 2023.
  3. J. Chakareski, M. Khan, T. Ropitault, and S. Blandino, "Millimeter wave and free-space-optics for future dual-connectivity 6DOF mobile multi-user VR streaming," ACM TOMCCAP, Feb. 2023.
  4. S. Gupta, J. Chakareski, and P. Popovski, "mmWave networking and edge computing for scalable 360° video multi-user virtual reality," IEEE Trans. Image Processing, Dec. 2022.
  5. J. Chakareski, M. Khan, and M. Yuksel, "Towards enabling next generation societal virtual reality applications for virtual human teleportation," IEEE Signal Processing Magazine, Sept. 2022.
  6. J. Chakareski, "Viewport-adaptive scalable multi-user virtual reality mobile-edge streaming," IEEE Trans. Image Processing, Dec. 2020.
  7. J. Chakareski, "UAV-IoT for next generation virtual reality," IEEE Trans. Image Processing, Dec. 2019.
  8. J. Sun, N. Sharma, J. Chakareski, N. Mastronarde, and Y. Lao, "Stochastic computing and hardware acceleration for post-decision state reinforcement learning in IoT systems," IEEE IoT Journal, June 2022.
  9. N. Mastronarde, N. Sharma, and J. Chakareski, "Improving data-driven reinforcement learning in wireless IoT systems using domain knowledge," IEEE Communications Magazine, Nov. 2021.
  10. N. Sharma, N. Mastronarde, and J. Chakareski, "Accelerated structure-aware reinforcement learning for delay-sensitive energy harvesting wireless sensors," IEEE Trans. Signal Processing, Dec. 2020.