Mathematics Faculty Works
Copyright (c) 2017 Loyola Marymount University and Loyola Law School. All rights reserved.
http://digitalcommons.lmu.edu/math_fac
Recent documents in Mathematics Faculty Works (en-us), Thu, 12 Jan 2017 01:31:42 PST

The regularity of the boundary of a multidimensional aggregation patch
http://digitalcommons.lmu.edu/math_fac/93
Tue, 10 Jan 2017 17:15:33 PST
We consider solutions to the aggregation equation with Newtonian potential where the initial data are the characteristic function of a domain with boundary of class $C^{1+\gamma}$, $0<\gamma<1$. Such initial data are known to yield a solution that, going forward in time, retains a patch-like structure with a spatially constant, time-dependent density inside an evolving region, which collapses on itself in a finite time, and which, going backward in time, converges in an $L^1$ sense to a self-similar expanding ball solution. In this work, we prove $C^{1+\gamma}$ regularity of the domain's boundary on the time interval on which the solution exists as an $L^\infty$ patch: up to the collapse time going forward in time, and for all finite times going backward in time.
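For reference, the equation in question is standardly written as follows (a sketch, using the convention $\Delta N = \delta$ for the Newtonian potential $N$; normalizations vary):

```latex
\partial_t \rho + \nabla \cdot (\rho v) = 0, \qquad v = -\nabla N * \rho.
```

For patch data $\rho_0 = \mathbf{1}_{\Omega_0}$ this convention gives $\nabla \cdot v = -\rho$, so along characteristics $\frac{D\rho}{Dt} = -\rho\,\nabla \cdot v = \rho^2$, and the density inside the patch is the spatially constant value $\rho(t) = 1/(1-t)$, which accounts for both the constant time-dependent density and the finite-time collapse (here at $t = 1$).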
Andrea L. Bertozzi et al.

Enhanced Lasso Recovery on Graph
http://digitalcommons.lmu.edu/math_fac/92
Tue, 10 Jan 2017 17:15:26 PST
This work aims at recovering signals that are sparse on graphs. Compressed sensing offers techniques for signal recovery from a few linear measurements, and graph Fourier analysis provides a signal representation on graphs. In this paper, we leverage these two frameworks to introduce a new Lasso recovery algorithm on graphs. More precisely, we present a non-convex, non-smooth algorithm that outperforms the standard convex Lasso technique. We carry out numerical experiments on three benchmark graph datasets.
Xavier Bresson et al.

An Incremental Reseeding Strategy for Clustering
http://digitalcommons.lmu.edu/math_fac/91
Tue, 10 Jan 2017 17:15:21 PST
In this work we propose a simple and easily parallelizable algorithm for multiway graph partitioning. The algorithm alternates between three basic components: diffusing seed vertices over the graph, thresholding the diffused seeds, and then randomly reseeding the thresholded clusters. We demonstrate experimentally that the proper combination of these ingredients leads to an algorithm that achieves state-of-the-art performance in terms of cluster purity on standard benchmark datasets. Moreover, the algorithm runs an order of magnitude faster than the other algorithms that achieve comparable results in terms of accuracy. We also describe a coarsen-cluster-refine approach, similar to GRACLUS and METIS, that removes an additional order of magnitude from the runtime of our algorithm while still maintaining competitive accuracy.
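The diffuse/threshold/reseed loop described above can be sketched as follows. The diffusion operator, seeding schedule, and parameters are assumptions chosen for illustration, not the authors' exact choices.

```python
import numpy as np

def incremental_reseeding(W, k, n_outer=60, n_diff=3, seed=0):
    """Illustrative sketch of diffuse/threshold/reseed partitioning.
    W: symmetric adjacency matrix; k: number of clusters."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic random-walk matrix
    labels = rng.integers(k, size=n)       # random initial partition
    s = 1                                  # seeds per cluster, grown each pass
    for _ in range(n_outer):
        F = np.zeros((n, k))
        for c in range(k):                 # plant s random seeds per cluster
            members = np.flatnonzero(labels == c)
            if members.size == 0:
                members = np.arange(n)     # rescue an emptied cluster
            picks = rng.choice(members, size=min(s, members.size), replace=False)
            F[picks, c] = 1.0
        for _ in range(n_diff):            # diffuse seed mass over the graph
            F = P.T @ F
        labels = F.argmax(axis=1)          # threshold: largest diffused mass wins
        s += 1                             # incremental reseeding
    return labels
```

On an easy graph (e.g. two cliques joined by one weak edge) the loop settles quickly into the natural two-way partition.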
Xavier Bresson et al.

A Method Based on Total Variation for Network Modularity Optimization using the MBO Scheme
http://digitalcommons.lmu.edu/math_fac/90
Tue, 10 Jan 2017 17:15:15 PST
The study of network structure is pervasive in sociology, biology, computer science, and many other disciplines. One of the most important areas of network science is the algorithmic detection of cohesive groups of nodes called “communities.” One popular approach to finding communities is to maximize a quality function known as modularity to achieve some sort of optimal clustering of nodes. In this paper, we interpret the modularity function from a novel perspective: we reformulate modularity optimization as a minimization problem of an energy functional that consists of a total variation term and an $\ell_2$ balance term. By employing numerical techniques from image processing and $\ell_1$ compressive sensing---such as convex splitting and the Merriman--Bence--Osher (MBO) scheme---we develop a variational algorithm for the minimization problem. We present our computational results using both synthetic benchmark networks and real data.
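For reference, the modularity of a partition assigning node $i$ to group $g_i$ is the standard quantity

```latex
Q = \frac{1}{2m}\sum_{i,j}\left(A_{ij} - \frac{d_i d_j}{2m}\right)\delta(g_i, g_j),
```

where $A$ is the adjacency matrix, $d_i$ is the degree of node $i$, and $2m = \sum_i d_i$. The reformulation sketched above rests on the graph total variation $|u|_{TV} = \frac{1}{2}\sum_{i,j} A_{ij}\,|u_i - u_j|$ of a partition indicator $u$; the precise form of the $\ell_2$ balance term is given in the paper.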
Huiyi Hu et al.

An Adaptive Total Variation Algorithm for Computing the Balanced Cut of a Graph
http://digitalcommons.lmu.edu/math_fac/89
Tue, 10 Jan 2017 17:15:11 PST
We propose an adaptive version of the total variation algorithm proposed in [3] for computing the balanced cut of a graph. The algorithm from [3] used a sequence of inner total variation minimizations to guarantee descent of the balanced cut energy as well as convergence of the algorithm. In practice the total variation minimization step is never solved exactly. Instead, an accuracy parameter is specified, and the total variation minimization terminates once this level of accuracy is reached. The choice of this parameter can vastly impact both the computational time of the overall algorithm and the accuracy of the result. Moreover, since the total variation minimization step is not solved exactly, the algorithm is not guaranteed to be monotonic. In the present work we introduce a new adaptive stopping condition for the total variation minimization that guarantees monotonicity. This results in an algorithm that is monotonic in practice and is also significantly faster than previous, non-adaptive algorithms.
Xavier Bresson et al.

Convergence of a Steepest Descent Algorithm for Ratio Cut Clustering
http://digitalcommons.lmu.edu/math_fac/88
Tue, 10 Jan 2017 17:15:06 PST
Unsupervised clustering of scattered, noisy and high-dimensional data points is an important and difficult problem. Tight continuous relaxations of balanced cut problems have recently been shown to provide excellent clustering results. In this paper, we present an explicit-implicit gradient flow scheme for the relaxed ratio cut problem, and prove that the algorithm converges to a critical point of the energy. We also show the efficiency of the proposed algorithm on the two moons dataset.
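For context, the ratio cut balances cut size against cluster sizes; the combinatorial quantity is standard:

```latex
\mathrm{RatioCut}(S) = \mathrm{Cut}(S, S^c)\left(\frac{1}{|S|} + \frac{1}{|S^c|}\right),
\qquad
\mathrm{Cut}(S, S^c) = \sum_{i \in S,\ j \in S^c} w_{ij}.
```

The tight continuous relaxations referred to above replace the set variable $S$ by a real-valued vertex function, trading the cut term for graph total variation; the exact relaxed energy and the explicit-implicit gradient flow scheme are given in the paper.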
Xavier Bresson et al.

Characterization of radially symmetric finite time blowup in multidimensional aggregation equations
http://digitalcommons.lmu.edu/math_fac/87
Tue, 10 Jan 2017 17:15:02 PST
This paper studies the transport of a mass $\mu$ in $\mathbb{R}^d$, $d \geq 2$, by a flow field $v = -\nabla K*\mu$. We focus on kernels $K=|x|^\alpha/\alpha$ for $2-d\leq \alpha<2$, for which smooth densities are known to develop singularities in finite time. For this range we prove the existence for all time of radially symmetric measure solutions that are monotone decreasing as a function of the radius, thus allowing for continuation of the solution past the blowup time. The monotonicity constraint on the data is consistent with the typical blowup profiles observed in recent numerical studies of these singularities. We prove that monotonicity is preserved for all time, even after blowup, in contrast to the case $\alpha > 2$, where radially symmetric solutions are known to lose monotonicity. In the case of the Newtonian potential ($\alpha=2-d$), under the assumption of radial symmetry the equation can be transformed into the inviscid Burgers equation on a half line. This enables us to prove preservation of monotonicity using the classical theory of conservation laws. In the case $2-d < \alpha < 2$, at the critical exponent $p$ we exhibit initial data in $L^p$ for which the solution immediately develops a Dirac mass singularity. This extends recent work on the local ill-posedness of solutions at the critical exponent.
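In outline, the Burgers reduction mentioned for the Newtonian case goes as follows (a sketch under the convention $\Delta N = \delta$; the paper's signs and constants may differ). For radially symmetric $\mu$, let $M(r,t) = \mu_t(B_r)$ be the mass in the ball of radius $r$, and let $\sigma_d$ denote the surface area of the unit sphere in $\mathbb{R}^d$. The velocity is radial with $v(r,t) = -M(r,t)/(\sigma_d r^{d-1})$, and conservation of mass reads

```latex
\partial_t M + v\,\partial_r M = 0.
```

Changing variables to $s = r^d/d$ (so that $\partial_r = r^{d-1}\partial_s$) and setting $u = -M/\sigma_d$ yields the inviscid Burgers equation on a half line:

```latex
\partial_t u + u\,\partial_s u = 0, \qquad s \ge 0.
```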
Andrea L. Bertozzi et al.

Simulation of the Sampling Distribution of the Mean Can Mislead
http://digitalcommons.lmu.edu/math_fac/86
Tue, 10 Jan 2017 13:34:02 PST
Although the use of simulation to teach the sampling distribution of the mean is meant to provide students with sound conceptual understanding, it may lead them astray. We discuss a misunderstanding that can be introduced or reinforced when students who intuitively understand that “bigger samples are better” conduct a simulation to explore the effect of sample size on the properties of the sampling distribution of the mean. From observing the patterns in a typical series of simulated sampling distributions constructed with increasing sample sizes, students reasonably—but incorrectly—conclude that, as the sample size, $n$, increases, the mean of the (exact) sampling distribution tends to get closer to the population mean and its variance tends to get closer to $\sigma^2/n$, where $\sigma^2$ is the population variance. We show that the patterns students observe are a consequence of the fact that both the variability in the mean and the variability in the variance of simulated sampling distributions constructed from the means of $N$ random samples are inversely related, not only to $N$, but also to the size of each sample, $n$. Further, asking students to increase the number of repetitions, $N$, in the simulation does not change the patterns.
Ann E. Watkins et al.

How well do the NSF Funded Elementary Mathematics Curricula align with the GAISE report recommendations?
http://digitalcommons.lmu.edu/math_fac/85
Tue, 10 Jan 2017 13:33:58 PST
Statistics and probability have become an integral part of mathematics education. Therefore, it is important to understand whether curricular materials adequately represent statistical ideas. The Guidelines for Assessment and Instruction in Statistics Education (GAISE) report (Franklin, Kader, Mewborn, Moreno, Peck, Perry, & Scheaffer, 2007), endorsed by the American Statistical Association, provides a two-dimensional (process and level) framework for statistical learning. This paper examines whether the statistics content contained in the NSF-funded elementary curricula Investigations in Number, Data, and Space; Math Trailblazers; and Everyday Mathematics aligns with the GAISE recommendations. Results indicate that there are differences among the curricula in the approaches used as well as in the GAISE components emphasized. In light of the fact that the new Common Core State Standards have placed little emphasis on statistics in the elementary grades, it is important to ensure that the minimal amount of statistics that is presented aligns well with the recommendations put forth by the statistics community. The results in this paper provide insight into the type of statistical preparation students receive when using the NSF-funded elementary curricula. As the Common Core places great emphasis on statistics in the middle grades, these results can be used to inform whether students will be prepared for the middle school Common Core goals.
Anna E. Bargagliotti

Injury-Initiated Clot Formation Under Flow: A Mathematical Model with Warfarin Treatment
http://digitalcommons.lmu.edu/math_fac/84
Tue, 10 Jan 2017 11:55:31 PST
The formation of a thrombus (commonly referred to as a blood clot) can potentially pose a severe health risk to an individual, particularly when a thrombus is large enough to impede blood flow. If an individual is considered to be at risk for forming a thrombus, he/she may be prophylactically treated with anticoagulant medication such as warfarin. When an individual is treated with warfarin, a blood test that measures clotting times must be performed. The test yields a number known as the International Normalized Ratio (INR). The INR test must be performed on an individual on a regular basis (e.g., monthly) to ensure that warfarin’s anticoagulation action is targeted appropriately. In this work, we explore the conditions under which an injury-induced thrombus may form in vivo even when the in vitro test shows the appropriate level of anticoagulation action by warfarin. We extend previous models to describe the in vitro clotting time test, as well as thrombus formation in vivo with warfarin treatments. We present numerical simulations that compare scenarios in which warfarin doses and flow rates are modified within biological ranges. Our results indicate that traditional INR measurements may not accurately reflect in vivo clotting times.
Lisette dePillis et al.

Drawing a Triangle on the Thurston Model of Hyperbolic Space
http://digitalcommons.lmu.edu/math_fac/83
Tue, 10 Jan 2017 09:29:02 PST
In looking at a common physical model of the hyperbolic plane, the authors encountered surprising difficulties in drawing a large triangle. Understanding these difficulties leads to an intriguing exploration of the geometry of the Thurston model of the hyperbolic plane. In this exploration we encounter topics ranging from combinatorics and Pick’s Theorem to differential geometry and the Gauss-Bonnet Theorem.
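The Gauss-Bonnet connection here reduces, for geodesic triangles, to a classical formula: on a surface of constant curvature $K = -1$, a geodesic triangle $T$ with interior angles $\alpha$, $\beta$, $\gamma$ satisfies

```latex
\mathrm{Area}(T) = \pi - (\alpha + \beta + \gamma),
```

so the area of any hyperbolic triangle is bounded above by $\pi$, and large triangles are necessarily thin, with angle sum close to $0$.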
Curtis D. Bennett et al.

Uniform Approximation of Continuous Functions on a Compact Riemann Surface by Elliptic Modular Forms
http://digitalcommons.lmu.edu/math_fac/82
Mon, 09 Jan 2017 18:19:31 PST
We show that the graded algebra of elliptic modular forms and their conjugates comprises a uniformly dense subspace of the space of all continuous functions on the compactification of the fundamental domain for the action of $SL_2(\mathbb{Z})$ on the complex upper half-plane by fractional linear transformations.
Michael Berg