The prevalence of graph-based data has spurred the rapid development of graph neural networks (GNNs) and related machine learning algorithms. These methods extend convolutions to graphs either in the spatial domain, as localized averaging operators, or in the spectral domain, via the eigendecomposition of a suitable Laplacian. However, most popular GNNs have two limitations. i) The filters used in these networks are essentially low-pass filters (i.e., averaging operators). This leads to the so-called ``oversmoothing problem'' and the loss of high-frequency information. ii) If the graph is directed, as is the case in many applications including citation, website, and traffic networks, these networks are unable to effectively encode directional information. In this talk, we discuss how to overcome these limitations via i) the graph scattering transform, which uses band-pass filters rather than low-pass filters, and ii) MagNet, a network designed for directed graphs based on a complex Hermitian matrix known as the magnetic Laplacian.
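As a minimal sketch of the construction mentioned above (the standard magnetic Laplacian with charge parameter $q$; the function name and the value $q = 0.25$ are illustrative choices, not taken from the talk): one symmetrizes the adjacency matrix for the magnitudes and encodes edge direction in a complex phase, yielding a Hermitian matrix with a real, nonnegative spectrum.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Unnormalized magnetic Laplacian L(q) = D_s - H(q), where
    H(q) = A_s * exp(i * Theta(q)) is complex Hermitian.
    A is the (possibly asymmetric) adjacency matrix of a directed graph;
    q is the charge parameter controlling how direction enters the phase."""
    A = np.asarray(A, dtype=float)
    A_s = (A + A.T) / 2.0                 # symmetrized adjacency (magnitudes)
    Theta = 2.0 * np.pi * q * (A - A.T)   # antisymmetric phase matrix (direction)
    H = A_s * np.exp(1j * Theta)          # Hermitian since Theta is antisymmetric
    D_s = np.diag(A_s.sum(axis=1))        # degree matrix of the symmetrized graph
    return D_s - H

# Directed 3-cycle: 0 -> 1 -> 2 -> 0
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
L = magnetic_laplacian(A, q=0.25)
assert np.allclose(L, L.conj().T)          # Hermitian
evals = np.linalg.eigvalsh(L)              # hence a real spectrum
assert np.all(evals > -1e-9)               # and positive semidefinite
```

Because $L(q)$ is Hermitian, its eigenvectors form an orthonormal basis and its eigenvalues are real, so spectral convolution carries over to directed graphs while the complex phases retain the directional information.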