The FeGAN system enables training GANs in the federated learning setting. FeGAN is implemented on top of PyTorch. It scales to hundreds of data-holding devices, accommodates devices with heterogeneous CPU power, and minimizes network traffic. In this setup, a central server coordinates with the devices to train a GAN: each device sends only model updates to the server, so the server never sees the actual data.
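The update-only exchange described above can be sketched as a minimal federated-averaging loop. This is an illustrative toy, not FeGAN's actual API: the function names, the stand-in "gradient", and the averaging rule are all assumptions chosen to show only that the server aggregates deltas and never touches raw data.

```python
# Minimal sketch of the update-only exchange in federated training:
# each device computes a local parameter delta and sends only that
# delta; the server averages deltas without seeing any raw data.
# All names here are illustrative, not FeGAN's actual API.

def local_update(global_params, local_data, lr=0.1):
    # Stand-in for local training: treat the gap between the current
    # parameter and the local data mean as a "gradient".
    mean = sum(local_data) / len(local_data)
    grad = [p - mean for p in global_params]
    return [-lr * g for g in grad]  # a parameter delta, not raw data

def server_round(global_params, device_datasets):
    # Collect one update per device and average them (FedAvg-style).
    updates = [local_update(global_params, d) for d in device_datasets]
    n = len(updates)
    avg = [sum(u[i] for u in updates) / n for i in range(len(global_params))]
    return [p + a for p, a in zip(global_params, avg)]

params = [0.0]
datasets = [[1.0, 1.0], [3.0, 3.0]]  # stays on-device throughout
for _ in range(100):
    params = server_round(params, datasets)
# params converge toward the mean of the device means (2.0)
```

The key property the sketch demonstrates is the one the abstract claims: the server's view is limited to aggregated deltas, so the devices' datasets never leave the devices.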
Machine Learning (ML) solutions are nowadays distributed, typically following the so-called server/worker architecture: one server holds the model parameters while several workers train the model. Clearly, such an architecture is prone to various types of …
AggregaThor is the first framework to provide Byzantine resilience to machine learning applications. It is built on top of TensorFlow, and it shows low overhead compared to vanilla, non-robust competitors.
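As a simple illustration of what Byzantine-resilient aggregation means, here is a coordinate-wise median rule in plain Python. Note this is an assumed, simplified example of the general technique, not AggregaThor's own aggregation rule or implementation: a minority of arbitrarily corrupted gradients cannot drag the median far from the honest workers' values, whereas plain averaging would be destroyed by a single outlier.

```python
# Byzantine-resilient gradient aggregation via coordinate-wise median.
# Unlike plain averaging, a minority of arbitrarily wrong (Byzantine)
# gradient vectors cannot pull the aggregate far off.
# Illustrative example only, not AggregaThor's actual rule.
import statistics

def median_aggregate(gradients):
    # gradients: list of equal-length gradient vectors, one per worker
    dim = len(gradients[0])
    return [statistics.median(g[i] for g in gradients) for i in range(dim)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
byzantine = [[1e9, -1e9]]              # one malicious worker
agg = median_aggregate(honest + byzantine)
# the aggregate stays near the honest gradients despite the outlier
```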
In this work, we propose CSCR, a channel selection scheme for cooperation-based routing protocols in cognitive radio networks. The proposed scheme increases spectrum utilization by integrating channel selection into the route discovery …
The validation of wireless communications research, whether it is focused on the PHY, MAC, or higher layers, can be done in several ways, each with its limitations. Simulations tend to be oversimplified. Equipping wireless labs requires funding and time. …
We propose a primary user-aware k-hop routing scheme that can be plugged into any cognitive radio network routing protocol to adapt, in real time, to environmental changes. The main use of this scheme is to strike the required compromise between …
Network coding has proved its efficiency in increasing network performance for traditional ad-hoc networks. In this paper, we investigate using network coding to enhance the throughput of multi-hop cognitive radio networks. We formulate the …
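The throughput gain that network coding offers in multi-hop settings can be seen in the classic two-way relay example: instead of forwarding each direction's packet separately, the relay broadcasts the XOR of both packets once, and each endpoint decodes using its own packet as the key. This is a generic textbook illustration of the idea, not the specific formulation from the paper above.

```python
# Classic XOR network coding at a relay node: rather than sending
# packet A to node 2 and packet B to node 1 in two transmissions,
# the relay broadcasts A XOR B once; each node recovers the other
# node's packet by XOR-ing with its own.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Byte-wise XOR of two equal-length packets.
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"hello"   # sent by node 1, wanted by node 2
pkt_b = b"world"   # sent by node 2, wanted by node 1
coded = xor_bytes(pkt_a, pkt_b)   # single broadcast from the relay

decoded_at_1 = xor_bytes(coded, pkt_a)  # node 1 recovers pkt_b
decoded_at_2 = xor_bytes(coded, pkt_b)  # node 2 recovers pkt_a
```

One broadcast replaces two unicasts, which is the source of the throughput improvement in relay-based topologies.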