Offline and online participation
IEEE technical sponsorship approved!
Moscow, Leninskie Gory, 1, bldg. 52, Northern Entrance,
Rooms П8, П8a
In this talk we describe very recent results concerning convex non-smooth problems and problems with variance reduction.
I will speak about several approaches to constructing graph and hypergraph models for such complex networks as the Internet, social, biological, and economic networks, etc.
Matrix decompositions and learning from big data
Even the most powerful supercomputers cannot deal directly with astronomically big data arriving as a full array of elements. Data on that scale can be processed only if it has an agreeable structure reflected by an appropriate representation model.
For instance, if a matrix is known to be well approximated by a matrix of rank r, then the model can be a skeleton decomposition constructed only from some cross of its r columns and r rows. In many cases, the data can be well approximated by some matrix or tensor decompositions to be learned from a small sample of its elements.
How to select these elements? In the talk, we give a sketch of the most useful matrix and tensor decompositions and consider a simple yet very powerful principle for selecting the data on which the model is learned.
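As an illustration of the skeleton (cross) decomposition mentioned above, here is a minimal NumPy sketch. The norm-based column/row selection used here is a toy stand-in for a proper selection principle (such as maximum volume); it is not the speaker's method, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an exactly rank-3 test matrix as a product of two thin factors.
r = 3
A = rng.standard_normal((100, r)) @ rng.standard_normal((r, 80))

# Toy selection rule (stand-in for a principled choice like maximum volume):
# take the r columns with the largest norms, then the r rows with the
# largest norms within those columns.
cols = np.argsort(-np.linalg.norm(A, axis=0))[:r]
rows = np.argsort(-np.linalg.norm(A[:, cols], axis=1))[:r]

C = A[:, cols]               # r selected columns
R = A[rows, :]               # r selected rows
G = A[np.ix_(rows, cols)]    # r x r intersection ("core") block

# Skeleton (cross) reconstruction: A ≈ C @ G^{-1} @ R.
# This is exact when rank(A) = r and G is nonsingular.
A_hat = C @ np.linalg.solve(G, R)

rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

Only the r selected rows and columns of A are needed to form C, R, and G, which is what makes the approach attractive when reading the full array is infeasible.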
Current wireless networks are designed to optimize spectral efficiency for human users, who typically require sustained connections for high-data-rate applications like file transfers and video streaming. However, these networks are increasingly inadequate for the emerging era of machine-type communication. With a vast number of devices exhibiting sporadic traffic patterns consisting of short packets, the grant-based multiple access procedures used by existing networks lead to significant delays and inefficiencies. To address this issue, the unsourced random access (URA) paradigm has been proposed. This paradigm assumes that the devices share a common encoder, which simplifies the reception process by eliminating the identification procedure. It is worth mentioning that the URA problem formulation admits a standard compressed sensing (CS) interpretation. To be precise, the problem is the approximate support recovery (ASR) problem, as we only need to find the support up to some distortion defined by the desired per-user probability of error. Utilizing this relation, in this talk we provide fundamental limits and practical schemes for the URA problem.
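To make the compressed-sensing interpretation concrete, here is a toy sketch (not the speakers' scheme): each active device transmits one column of a shared codebook, the receiver observes the superposition, and recovers the set of transmitted columns (the support, not device identities) with orthogonal matching pursuit. All parameters and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy CS model for unsourced random access: K active devices each send
# one column of a common codebook A; the receiver sees the sum y and
# must recover which columns were sent (support), not who sent them.
n, N, K = 128, 256, 5                           # channel uses, codebook size, active users
A = rng.standard_normal((n, N)) / np.sqrt(n)    # shared (common) codebook
support = rng.choice(N, size=K, replace=False)
y = A[:, support].sum(axis=1)                   # noiseless superposition

# Orthogonal Matching Pursuit: greedily pick the codeword most
# correlated with the residual, then re-fit y on the chosen set.
est = []
resid = y.copy()
for _ in range(K):
    est.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, est], y, rcond=None)
    resid = y - A[:, est] @ coef

# Fraction of truly transmitted codewords recovered (per-user success).
recovered = len(set(est) & set(support)) / K
```

Approximate support recovery only asks that `recovered` exceed a target set by the tolerated per-user error probability, rather than demanding exact recovery of the whole support.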
Principles of constructing all-optical networks based on optical circuit switching, optical burst switching, and optical packet switching, as well as prospects for their development. Physical foundations of operation, operating algorithms, and mathematical methods for analyzing the characteristics of the basic elements of all-optical communication networks (filters, wavelength converters, switches, multiplexers, and other devices).
Quality of Service Control Challenges in Data Communication Networks
The presentation addresses the challenges of ensuring deterministic Quality of Service (QoS) in data communication networks. The main focus is on methods for optimal channel selection under specified QoS requirements, transport flow balancing, and computational task distribution in compliance with SLA requirements. The discussed methods are based on a multi-agent approach.
Chair Professor of computer science and engineering at Huazhong University of Science and Technology (HUST) in China, Fellow of IEEE, life member of the ACM
Understanding Computility Net: A Distributed System Perspective
Computility net provides opportunities to transcend state-of-the-art technologies in AI, computing, and networking. This talk will delve into the evolution and architectural design of computility net from the perspective of distributed systems, tracing its lineage from traditional models like grid and cloud computing to modern computility networks. The discussion will then focus on the specific challenges posed by computility net, particularly in managing heterogeneous compute nodes and network connections. Architectural strategies that effectively address these challenges and facilitate optimal resource allocation across multiple computing centers will be explored. Additionally, the talk will highlight the diverse applications of computility networks in areas such as AI, scientific computing, and beyond, demonstrating their wide-reaching impact and potential.
AIOT: Industrial Internet of Things
AIOT is a transformative integration of AI with the Industrial Internet of Things (IIoT). This keynote speech will delve into the synergy between advanced data analytics, AI techniques, and the vast network of connected industrial devices. The fusion of AI with IIoT is revolutionizing the way industries operate, offering unprecedented levels of efficiency, predictive maintenance, and more. The speech will highlight how AIOT is enabling smarter manufacturing.
Professor, Executive Dean
School of Computer Science and Technology
University of Science and Technology of China
ACM Fellow, IEEE Fellow
Industrial Intelligence by “Data+Knowledge” & Edge-Cloud Collaboration
The industrial Internet is a new generation of intelligent network formed by the deep integration of industrial production systems, DT, CT, OT, AIOT, and AI. Its core is the integration and application of sensing, analysis, decision-making, and control. As the next generation of industrial infrastructure, the industrial Internet will reshape the entire industrial production and manufacturing system, helping to digitalize, network, and intelligentize industrial production. The core tasks are ubiquitous low-power smart sensing, wireless interconnection of everything, and intelligent computing and services. In this report, I will share some of the challenges of industrial intelligence based on industrial knowledge and data, especially in intelligent sensing, edge computing, AI models under knowledge and data fusion, industrial MIP solvers, and the security and privacy of data and computing. I will then share some preliminary research results and explorations in this area.
Domain-Specific Language Engineering via Language Lifting
In the "software defines everything" era of information technology, safe and user-friendly domain-specific languages (DSLs) have gained increasing significance. These DSLs facilitate the direct representation of problems and algorithms tailored to specific professional fields, while empowering domain experts to craft code using domain-specific terminology and operations intuitively. However, developing DSLs tailored for domain experts remains a persistent challenge. Traditional embedded DSL design methodologies provide a rapid prototyping technique for DSL implementation. Yet, the close integration between the embedded DSL and the host language poses difficulties for domain experts unfamiliar with the host language. In this talk, I introduce a new approach to support rapid DSL implementation and the development of an IDE for the DSL. This approach incorporates an extensible general-purpose core language, a DSL definition technique leveraging syntactic sugars, and crucially, a language lifting for generating DSL implementations alongside an IDE optimized for DSL programming. We have devised a system, named Osazone, and employed it to create numerous DSLs, thereby demonstrating the flexibility, efficacy, and practicality of our approach in advancing domain-specific language engineering efforts.