diff --git "a/datasets/test.csv" "b/datasets/test.csv"
new file mode 100644
--- /dev/null
+++ "b/datasets/test.csv"
@@ -0,0 +1,132507 @@ +title,abstract +Closed-form Marginal Likelihood in Gamma-Poisson Matrix Factorization," We present novel understandings of the Gamma-Poisson (GaP) model, a +probabilistic matrix factorization model for count data. We show that GaP can +be rewritten free of the score/activation matrix. This gives us new insights +about the estimation of the topic/dictionary matrix by maximum marginal +likelihood estimation. In particular, this explains the robustness of this +estimator to over-specified values of the factorization rank, especially its +ability to automatically prune irrelevant dictionary columns, as empirically +observed in previous work. The marginalization of the activation matrix leads +in turn to a new Monte Carlo Expectation-Maximization algorithm with favorable +properties. +" +Laboratory mid-IR spectra of equilibrated and igneous meteorites. Searching for observables of planetesimal debris," Meteorites contain minerals from Solar System asteroids with different +properties (like size, presence of water, core formation). We provide new +mid-IR transmission spectra of powdered meteorites to obtain templates of what +mid-IR spectra of asteroidal debris would look like. This is essential for +interpreting mid-IR spectra from past and future space observatories, like the +James Webb Space Telescope. We show that the transmission spectra of wet and +dry chondrites, carbonaceous and ordinary chondrites, and achondrite and +chondrite meteorites are distinctly different in a way one can distinguish in +astronomical mid-IR spectra. The two observables that spectroscopically +separate the different meteorite groups (and thus the different types of +parent bodies) are the pyroxene-olivine feature strength ratio and the peak +shift of the olivine spectral features due to an increase in the iron +concentration of the olivine. 
+" +Case For Static AMSDU Aggregation in WLANs," Frame aggregation is a mechanism by which multiple frames are combined into a +single transmission unit over the air. Frames aggregated at the AMSDU level use +a common CRC check to enforce integrity. For longer aggregated AMSDU frames, +the packet error rate increases significantly for the same bit error rate. +Hence, multiple studies have proposed doing AMSDU aggregation adaptively based +on the error rate. This study evaluates whether there is a \emph{practical} +advantage in doing adaptive AMSDU aggregation based on the link bit error rate. +Evaluations on a model show that instead of implementing a complex adaptive +AMSDU frame aggregation mechanism, which impacts queuing and other implementation +aspects, it is easier to influence the packet error rate with traditional +mechanisms while keeping the AMSDU aggregation logic simple. +" +The $Gaia$-ESO Survey: the inner disk intermediate-age open cluster NGC 6802," Milky Way open clusters are very diverse in terms of age, chemical +composition, and kinematic properties. Intermediate-age and old open clusters +are less common, and it is even harder to find them inside the solar +Galactocentric radius, due to the high mortality rate and strong extinction +inside this region. NGC 6802 is one of the inner disk open clusters (IOCs) +observed by the $Gaia$-ESO survey (GES). This cluster is an important target +for calibrating the abundances derived in the survey due to the kinematic and +chemical homogeneity of the members in open clusters. Using the measurements +from $Gaia$-ESO internal data release 4 (iDR4), we identify 95 main-sequence +dwarfs as cluster members from the GIRAFFE target list, and eight giants as +cluster members from the UVES target list. The dwarf cluster members have a +median radial velocity of $13.6\pm1.9$ km s$^{-1}$, while the giant cluster +members have a median radial velocity of $12.0\pm0.9$ km s$^{-1}$ and a median +[Fe/H] of $0.10\pm0.02$ dex. 
The color-magnitude diagram of these cluster +members suggests an age of $0.9\pm0.1$ Gyr, with $(m-M)_0=11.4$ and +$E(B-V)=0.86$. We perform the first detailed chemical abundance analysis of NGC +6802, including 27 elemental species. To gain a more general picture about +IOCs, the measurements of NGC 6802 are compared with those of other IOCs +previously studied by GES, that is, NGC 4815, Trumpler 20, NGC 6705, and +Berkeley 81. NGC 6802 shows C, N, Na, and Al abundances similar to those of other IOCs. +These elements are compared with nucleosynthetic models as a function of +cluster turn-off mass. The $\alpha$, iron-peak, and neutron-capture elements +are also explored in a self-consistent way. +" +Witness-Functions versus Interpretation-Functions for Secrecy in Cryptographic Protocols: What to Choose?," Proving that a cryptographic protocol is correct for secrecy is a hard task. +One of the strongest strategies to reach this goal is to show that it is +increasing, which means that the security level of every single atomic message +exchanged in the protocol, safely evaluated, never decreases. Recently, two +families of functions have been proposed to measure the security level of +atomic messages. The first one is the family of interpretation-functions. The +second is the family of witness-functions. In this paper, we show that the +witness-functions are more efficient than interpretation-functions. We give a +detailed analysis of an ad-hoc protocol on which the witness-functions succeed +in proving its correctness for secrecy while the interpretation-functions fail +to do so. +" +Pairwise Difference Estimation of High Dimensional Partially Linear Model," This paper proposes a regularized pairwise difference approach for estimating +the linear component coefficient in a partially linear model, with consistency +and exact rates of convergence obtained in high dimensions under mild scaling +requirements. 
Our analysis reveals interesting features such as (i) the +bandwidth parameter automatically adapts to the model and is actually +tuning-insensitive; and (ii) the procedure could even maintain a fast rate of +convergence for the $\alpha$-Hölder class with $\alpha\leq1/2$. Simulation studies +show the advantage of the proposed method, and application of our approach to a +brain imaging dataset reveals some biological patterns which fail to be recovered +using competing methods. +" +Dissecting the multivariate extremal index and tail dependence," A central issue in the theory of extreme values focuses on suitable +conditions such that the well-known results for the limiting distributions of +the maximum of i.i.d. sequences can be applied to stationary ones. In this +context, the extremal index appears as a key parameter to capture the effect of +temporal dependence on the limiting distribution of the maxima. The +multivariate extremal index corresponds to a generalization of this concept to +a multivariate context and affects the tail dependence structure within the +marginal sequences and between them. As it is a function, the inference becomes +more difficult, and it is therefore important to obtain characterizations, +namely bounds based on the marginal dependence that are easier to estimate. In +this work we present two decompositions that emphasize different types of +information contained in the multivariate extremal index, as well as an upper limit better +than those found in the literature, and we analyze its role in dependence on the +limiting model of the componentwise maxima of a stationary sequence. We will +illustrate the results with examples of recognized interest in applications. +" +"Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy"," Astrophysics and cosmology are rich with data. The advent of wide-area +digital cameras on large aperture telescopes has led to ever more ambitious +surveys of the sky. 
Data volumes of entire surveys a decade ago can now be +acquired in a single night and real-time analysis is often desired. Thus, +modern astronomy requires big data know-how, in particular it demands highly +efficient machine learning and image analysis algorithms. But scalability is +not the only challenge: Astronomy applications touch several current machine +learning research questions, such as learning from biased data and dealing with +label and measurement noise. We argue that this makes astronomy a great domain +for computer science research, as it pushes the boundaries of data analysis. In +the following, we will present this exciting application area for data +scientists. We will focus on exemplary results, discuss main challenges, and +highlight some recent methodological advancements in machine learning and image +analysis triggered by astronomical applications. +" +Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog," A number of recent works have proposed techniques for end-to-end learning of +communication protocols among cooperative multi-agent populations, and have +simultaneously found the emergence of grounded human-interpretable language in +the protocols developed by the agents, all learned without any human +supervision! +In this paper, using a Task and Tell reference game between two agents as a +testbed, we present a sequence of 'negative' results culminating in a +'positive' one -- showing that while most agent-invented languages are +effective (i.e. achieve near-perfect task rewards), they are decidedly not +interpretable or compositional. +In essence, we find that natural language does not emerge 'naturally', +despite the semblance of ease of natural-language-emergence that one may gather +from recent literature. We discuss how it is possible to coax the invented +languages to become more and more human-like and compositional by increasing +restrictions on how two agents may communicate. 
+" +Properties and Origin of Galaxy Velocity Bias in the Illustris Simulation," We use the hydrodynamical galaxy formation simulations from the Illustris +suite to study the origin and properties of galaxy velocity bias, i.e., the +difference between the velocity distributions of galaxies and dark matter +inside halos. We find that galaxy velocity bias is a decreasing function of the +ratio of galaxy stellar mass to host halo mass. In general, central galaxies +are not at rest with respect to dark matter halos or the core of halos, with a +velocity dispersion above 0.04 times that of the dark matter. The central +galaxy velocity bias is found to be mostly caused by the close interactions +between the central and satellite galaxies. For satellite galaxies, the +velocity bias is related to their dynamical and tidal evolution history after +being accreted onto the host halos. It depends on the time after the accretion +and their distances from the halo centers, with massive satellites generally +moving more slowly than the dark matter. The results are in broad agreements +with those inferred from modeling small-scale redshift-space galaxy clustering +data, and the study can help improve models of redshift-space galaxy +clustering. +" +Computer Modeling of Halogen Bonds and Other $σ$-Hole Interactions," In the field of noncovalent interactions a new paradigm has recently become +popular. It stems from the analysis of molecular electrostatic potentials and +introduces a label, which has recently attracted enormous attention. The label +is {\sigma}-hole, and it was first used in connection with halogens. It +initiated a renaissance of interest in halogenated compounds, and later on, +when found also on other groups of atoms (chalcogens, pnicogens, tetrels and +aerogens), it resulted in a new direction of research of intermolecular +interactions. 
In this review, we summarize advances from about the last 10 +years in understanding those interactions related to {\sigma}-hole. We pay +particular attention to theoretical and computational techniques, which play a +crucial role in the field. +" +Towards Universal End-to-End Affect Recognition from Multilingual Speech by ConvNets," We propose an end-to-end affect recognition approach using a Convolutional +Neural Network (CNN) that handles multiple languages, with applications to +emotion and personality recognition from speech. We lay the foundation of a +universal model that is trained on multiple languages at once. As affect is +shared across all languages, we are able to leverage shared information between +languages and improve the overall performance for each one. We obtained an +average improvement of 12.8% on emotion and 10.1% on personality when compared +with the same model trained on each language only. It is end-to-end because we +directly take narrow-band raw waveforms as input. This allows us to accept as +input audio recorded from any source and to avoid the overhead and information +loss of feature extraction. It outperforms a similar CNN using spectrograms as +input by 12.8% for emotion and 6.3% for personality, based on F-scores. +Analysis of the network parameters and layers activation shows that the network +learns and extracts significant features in the first layer, in particular +pitch, energy and contour variations. Subsequent convolutional layers instead +capture language-specific representations through the analysis of +supra-segmental features. Our model represents an important step for the +development of a fully universal affect recognizer, able to recognize +additional descriptors, such as stress, and for the future implementation into +affective interactive systems. 
+" +A Variational Characterization of Fluid Sloshing with Surface Tension," We consider the sloshing problem for an incompressible, inviscid, +irrotational fluid in an open container, including effects due to surface +tension on the free surface. We restrict ourselves to a constant contact angle +and seek time-harmonic solutions of the linearized problem, which describes the +time-evolution of the fluid due to a small initial disturbance of the surface +at rest. As opposed to the zero surface tension case, where the problem reduces +to a partial differential equation for the velocity potential, we obtain a +coupled system for the velocity potential and the free surface displacement. We +derive a new variational formulation of the coupled problem and establish the +existence of solutions using the direct method from the calculus of variations. +We prove a domain monotonicity result for the fundamental sloshing eigenvalue. +In the limit of zero surface tension, we recover the variational formulation of +the mixed Steklov-Neumann eigenvalue problem and give the first-order +perturbation formula for a simple eigenvalue. +" +diagnoseIT: Expertengestützte automatische Diagnose von Performance-Probleme in Enterprise-Anwendungen (Abschlussbericht)," This is the final report of the collaborative research project diagnoseIT on +expert-guided automatic diagnosis of performance problems in enterprise +applications. +" +Statistical Inference in Political Networks Research," Researchers interested in statistically modeling network data have a +well-established and quickly growing set of approaches from which to choose. +Several of these methods have been regularly applied in research on political +networks, while others have yet to permeate the field. 
Here, we review the most +prominent methods of inferential network analysis---for both cross-sectionally +and longitudinally observed networks---including (temporal) exponential random +graph models, latent space models, the quadratic assignment procedure, and +stochastic actor-oriented models. For each method, we summarize its analytic +form, identify prominent published applications in political science, and +discuss computational considerations. We conclude with a set of guidelines for +selecting a method for a given application. +" +Controller-jammer game models of Denial of Service in control systems operating over packet-dropping links," The paper introduces a class of zero-sum games between the adversary and +controller as a scenario for a `denial of service' in a networked control +system. The communication link is modeled as a set of transmission regimes +controlled by a strategic jammer whose intention is to wage an attack on the +plant by choosing a most damaging regime-switching strategy. We demonstrate +that even in the one-step case, the introduced games admit a saddle-point +equilibrium, at which the jammer's optimal policy is to randomize in a region +of the plant's state space, thus requiring the controller to undertake a +nontrivial response which is different from what one would expect in a standard +stochastic control problem over a packet dropping link. The paper derives +conditions for the introduced games to have such a saddle-point equilibrium. +Furthermore, we show that in more general multi-stage games, these conditions +provide `greedy' jamming strategies for the adversary. +" +Conservation of spin supercurrents in superconductors," We demonstrate that spin supercurrents are conserved upon transmission +through a conventional superconductor, even in the presence of spin-dependent +scattering by impurities with magnetic moments or spin-orbit coupling. 
This is +fundamentally different from conventional spin currents, which decay in the +presence of such scattering, and this has important implications for the usage +of superconducting materials in spintronic hybrid structures. +" +Driver Identification Using Automobile Sensor Data from a Single Turn," As automotive electronics continue to advance, cars are becoming more and +more reliant on sensors to perform everyday driving operations. These sensors +are omnipresent and help the car navigate, reduce accidents, and provide +comfortable rides. However, they can also be used to learn about the drivers +themselves. In this paper, we propose a method to predict, from sensor data +collected at a single turn, the identity of a driver out of a given set of +individuals. We cast the problem in terms of time series classification, where +our dataset contains sensor readings at one turn, repeated several times by +multiple drivers. We build a classifier to find unique patterns in each +individual's driving style, which are visible in the data even on such a short +road segment. To test our approach, we analyze a new dataset collected by AUDI +AG and Audi Electronics Venture, where a fleet of test vehicles was equipped +with automotive data loggers storing all sensor readings on real roads. We show +that turns are particularly well-suited for detecting variations across +drivers, especially when compared to straightaways. We then focus on the 12 +most frequently made turns in the dataset, which include rural, urban, highway +on-ramps, and more, obtaining accurate identification results and learning +useful insights about driver behavior in a variety of settings. +" +Same-different problems strain convolutional neural networks," The robust and efficient recognition of visual relations in images is a +hallmark of biological vision. 
We argue that, despite recent progress in visual +recognition, modern machine vision algorithms are severely limited in their +ability to learn visual relations. Through controlled experiments, we +demonstrate that visual-relation problems strain convolutional neural networks +(CNNs). The networks eventually break altogether when rote memorization becomes +impossible, as when intra-class variability exceeds network capacity. Motivated +by the comparable success of biological vision, we argue that feedback +mechanisms including attention and perceptual grouping may be the key +computational components underlying abstract visual reasoning. +" +Hyper-dimensional computing for a visual question-answering system that is trainable end-to-end," In this work we propose a system for visual question answering. Our +architecture is composed of two parts: the first part creates the logical +knowledge base given the image. The second part evaluates questions against the +knowledge base. In contrast to previous work, the knowledge base is +represented using hyper-dimensional computing. This choice has the advantage +that all the operations in the system, namely creating the knowledge base and +evaluating the questions against it, are differentiable, thereby making the +system easily trainable in an end-to-end fashion. +" +The Inner Structure of Time-Dependent Signals," This paper shows how a time series of measurements of an evolving system can +be processed to create an inner time series that is unaffected by any +instantaneous invertible, possibly nonlinear transformation of the +measurements. An inner time series contains information that does not depend on +the nature of the sensors that the observer chose to monitor the system. +Instead, it encodes information that is intrinsic to the evolution of the +observed system. 
Because of its sensor-independence, an inner time series may +produce fewer false negatives when it is used to detect events in the presence +of sensor drift. Furthermore, if the observed physical system is comprised of +non-interacting subsystems, its inner time series is separable; i.e., it +consists of a collection of time series, each one being the inner time series +of an isolated subsystem. Because of this property, an inner time series can be +used to detect a specific behavior of one of the independent subsystems without +using blind source separation to disentangle that subsystem from the others. +The method is illustrated by applying it to: 1) an analytic example; 2) the +audio waveform of one speaker; 3) video images from a moving camera; 4) +mixtures of audio waveforms of two speakers. +" +Classical properties of the leading eigenstates of quantum dissipative systems," By analyzing a paradigmatic example of the theory of dissipative systems -- +the classical and quantum dissipative standard map -- we are able to explain +the main features of the decay to the quantum equilibrium state. The classical +isoperiodic stable structures typically present in the parameter space of these +kind of systems play a fundamental role. In fact, we have found that the period +of stable structures that are near in this space determines the phase of the +leading eigenstates of the corresponding quantum superoperator. Moreover, the +eigenvectors show a strong localization on the corresponding periodic orbits +(limit cycles). We show that this sort of scarring phenomenon (an established +property of Hamiltonian and projectively open systems) is present in the +dissipative case and it is of extreme simplicity. +" +Towards Binary-Valued Gates for Robust LSTM Training," Long Short-Term Memory (LSTM) is one of the most widely used recurrent +structures in sequence modeling. 
It aims to use gates to control information +flow (e.g., whether to skip some information or not) in the recurrent +computations, although its practical implementation based on soft gates only +partially achieves this goal. In this paper, we propose a new way for LSTM +training, which pushes the output values of the gates towards 0 or 1. By doing +so, we can better control the information flow: the gates are mostly open or +closed, instead of in a middle state, which makes the results more +interpretable. Empirical studies show that (1) Although it seems that we +restrict the model capacity, there is no performance drop: we achieve better or +comparable performances due to its better generalization ability; (2) The +outputs of gates are not sensitive to their inputs: we can easily compress the +LSTM unit in multiple ways, e.g., low-rank approximation and low-precision +approximation. The compressed models are even better than the baseline models +without compression. +" +GUB Covers and Power-Indexed formulations for Wireless Network Design," We propose a pure 0-1 formulation for the wireless network design problem, +i.e. the problem of configuring a set of transmitters to provide service +coverage to a set of receivers. In contrast with classical mixed integer +formulations, where power emissions are represented by continuous variables, we +consider only a finite set of powers values. This has two major advantages: it +better fits the usual practice and eliminates the sources of numerical problems +which heavily affect continuous models. A crucial ingredient of our approach is +an effective basic formulation for the single knapsack problem representing the +coverage condition of a receiver. This formulation is based on the GUB cover +inequalities introduced by Wolsey (1990) and its core is an extension of the +exact formulation of the GUB knapsack polytope with two GUB constraints. 
This +special case corresponds to the very common practical situation where only one +major interferer is present. We assess the effectiveness of our formulation by +comprehensive computational results over realistic instances of two typical +technologies, namely WiMAX and DVB-T. +" +Orbital contributions to the electron g-factor in semiconductor nanowires," Recent experiments on Majorana fermions in semiconductor nanowires [Albrecht +et al., Nat. 531, 206 (2016)] revealed a surprisingly large electronic Landé +g-factor, several times larger than the bulk value - contrary to the +expectation that confinement reduces the g-factor. Here we assess the role of +orbital contributions to the electron g-factor in nanowires and quantum dots. +We show that an LS coupling in higher subbands leads to an enhancement of the +g-factor of an order of magnitude or more for small effective mass +semiconductors. We validate our theoretical finding with simulations of InAs +and InSb, showing that the effect persists even if cylindrical symmetry is +broken. A huge anisotropy of the enhanced g-factors under magnetic field +rotation allows for a straightforward experimental test of this theory. +" +Comprehensive routing strategy on multilayer networks," Designing an efficient routing strategy is of great importance to alleviate +traffic congestion in multilayer networks. In this work, we design an effective +routing strategy for multilayer networks by comprehensively considering the +roles of nodes' local structures in micro-level, as well as the macro-level +differences in transmission speeds between different layers. Both numerical and +analytical results indicate that our proposed routing strategy can reasonably +redistribute the traffic load of the low speed layer to the high speed layer, +and thus the traffic capacity of multilayer networks are significantly enhanced +compared with the monolayer low speed networks. 
There is an optimal combination +of macro- and micro-level control parameters which can remarkably alleviate +the congestion and thus maximize the traffic capacity for a given multilayer +network. Moreover, we find that increasing the size and the average degree of +the high speed layer can enhance the traffic capacity of multilayer networks +more effectively. We finally verify that real-world network topology does not +invalidate the results. The theoretical predictions agree well with the +numerical simulations. +" +Long short-term memory and learning-to-learn in networks of spiking neurons," Recurrent networks of spiking neurons (RSNNs) underlie the astounding +computing and learning capabilities of the brain. But computing and learning +capabilities of RSNN models have remained poor, at least in comparison with +artificial neural networks (ANNs). We address two possible reasons for that. +One is that RSNNs in the brain are not randomly connected or designed according +to simple rules, and they do not start learning as a tabula rasa network. +Rather, RSNNs in the brain were optimized for their tasks through evolution, +development, and prior experience. Details of these optimization processes are +largely unknown. But their functional contribution can be approximated through +powerful optimization methods, such as backpropagation through time (BPTT). +A second major mismatch between RSNNs in the brain and models is that the +latter only show a small fraction of the dynamics of neurons and synapses in +the brain. We include neurons in our RSNN model that reproduce one prominent +dynamical process of biological neurons that takes place at the behaviourally +relevant time scale of seconds: neuronal adaptation. We denote these networks +as LSNNs because of their Long short-term memory. 
The inclusion of adapting +neurons drastically increases the computing and learning capability of RSNNs if +they are trained and configured by deep learning (BPTT combined with a rewiring +algorithm that optimizes the network architecture). In fact, the computational +performance of these RSNNs approaches for the first time that of LSTM networks. +In addition RSNNs with adapting neurons can acquire abstract knowledge from +prior learning in a Learning-to-Learn (L2L) scheme, and transfer that knowledge +in order to learn new but related tasks from very few examples. We demonstrate +this for supervised learning and reinforcement learning. +" +Detecting Molecular Rotational Dynamics Complementing the Low-Frequency Terahertz Vibrations in a Zirconium-Based Metal-Organic Framework," We show clear experimental evidence of co-operative terahertz (THz) dynamics +observed below 3 THz (~100 cm-1), for a low-symmetry Zr-based metal-organic +framework (MOF) structure, termed MIL-140A [ZrO(O2C-C6H4-CO2)]. Utilizing a +combination of high-resolution inelastic neutron scattering and synchrotron +radiation far-infrared spectroscopy, we measured low-energy vibrations +originating from the hindered rotations of organic linkers, whose energy +barriers and detailed dynamics have been elucidated via ab initio density +functional theory (DFT) calculations. For completeness, we obtained Raman +spectra and characterized the alterations to the complex pore architecture +caused by the THz rotations. We discovered an array of soft modes with +trampoline-like motions, which could potentially be the source of anomalous +mechanical phenomena, such as negative linear compressibility and negative +thermal expansion. Our results also demonstrate coordinated shear dynamics +(~2.5 THz), a mechanism which we have shown to destabilize MOF crystals, in the +exact crystallographic direction of the minimum shear modulus (Gmin). 
+" +A southern-sky total intensity source catalogue at 2.3 GHz from S-band Polarisation All-Sky Survey data," The S-band Polarisation All-Sky Survey (S-PASS) has observed the entire +southern sky using the 64-metre Parkes radio telescope at 2.3GHz with an +effective bandwidth of 184MHz. The surveyed sky area covers all declinations +$\delta\leq 0^\circ$. To analyse compact sources the survey data have been +re-processed to produce a set of 107 Stokes $I$ maps with 10.75arcmin +resolution and the large scale emission contribution filtered out. In this +paper we use these Stokes $I$ images to create a total intensity southern-sky +extragalactic source catalogue at 2.3GHz. The source catalogue contains 23,389 +sources and covers a sky area of 16,600deg$^2$, excluding the Galactic plane +for latitudes $|b|<10^\circ$. Approximately 8% of catalogued sources are +resolved. S-PASS source positions are typically accurate to within 35arcsec. At +a flux density of 225mJy the S-PASS source catalogue is more than 95% complete, +and $\sim$94% of S-PASS sources brighter than 500mJy beam$^{-1}$ have a +counterpart at lower frequencies. +" +Mechanomyography based closed-loop Functional Electrical Stimulation cycling system," Functional Electrical Stimulation (FES) systems are successful in restoring +motor function and supporting paralyzed users. Commercially available FES +products are open loop, meaning that the system is unable to adapt to changing +conditions with the user and their muscles which results in muscle fatigue and +poor stimulation protocols. This is because it is difficult to close the loop +between stimulation and monitoring of muscle contraction using adaptive +stimulation. FES causes electrical artefacts which make it challenging to +monitor muscle contractions with traditional methods such as electromyography +(EMG). 
We look to overcome this limitation by combining FES with novel +mechanomyographic (MMG) sensors to be able to monitor muscle activity during +stimulation in real time. To provide a meaningful task we built an FES cycling +rig with a software interface that enabled us to perform adaptive recording and +stimulation, and then combine this with sensors to record forces applied to the +pedals using force sensitive resistors (FSRs), crank angle position using a +magnetic incremental encoder and inputs from the user using switches and a +potentiometer. We illustrated this with a closed-loop stimulation algorithm +that used the inputs from the sensors to control the output of a programmable +RehaStim 1 FES stimulator (Hasomed) in real-time. This recumbent bicycle rig +was used as a testing platform for FES cycling. The algorithm was designed to +respond to a change in requested speed (RPM) from the user and change the +stimulation power (% of maximum current mA) until this speed was achieved and +then maintain it. +" +QT2S: A System for Monitoring Road Traffic via Fine Grounding of Tweets," Social media platforms provide continuous access to user generated content +that enables real-time monitoring of user behavior and of events. The +geographical dimension of such user behavior and events has recently caught a +lot of attention in several domains: mobility, humanitarian, or +infrastructural. While resolving the location of a user can be straightforward, +depending on the affordances of their device and/or of the application they are +using, in most cases, locating a user demands a larger effort, such as +exploiting textual features. On Twitter for instance, only 2% of all tweets are +geo-referenced. In this paper, we present a system for zoomed-in grounding +(below city level) for short messages (e.g., tweets). 
The system combines +different natural language processing and machine learning techniques to +increase the number of geo-grounded tweets, which is essential to many +applications such as disaster response and real-time traffic monitoring. +" +A Benchmark for Dose Finding Studies with Continuous Outcomes," An important tool to evaluate the performance of any design is an optimal +benchmark proposed by O'Quigley and others (2002, Biostatistics 3(1), 51-56) +that provides an upper bound on the performance of a design under a given +scenario. The original benchmark can be applied to dose finding studies with a +binary endpoint only. However, there is a growing interest in dose finding +studies involving continuous outcomes, but no benchmark for such studies has +been developed. We show that the original benchmark and its extension by Cheung +(2014, Biometrics 70(2), 389-397), when looked at from a different perspective, +can be generalised to various settings with several discrete and continuous +outcomes. We illustrate and compare the benchmark performance in the setting of +a Phase I clinical trial with continuous toxicity endpoint and in the setting +of a Phase I/II clinical trial with continuous efficacy outcome. We show that +the proposed benchmark provides an accurate upper bound for model-based dose +finding methods and serves as a powerful tool for evaluating designs. +" +Wavefront retrieval through random pupil plane phase probes: Gerchberg-Saxton approach," A pupil plane wavefront reconstruction procedure is proposed based on +analysis of a sequence of focal plane images corresponding to a sequence of +random pupil plane phase probes. The developed method provides the unique +nontrivial solution of wavefront retrieval problem and shows global convergence +to this solution demonstrated using a Gerchberg-Saxton implementation. 
The +method is general and can be used in any optical system that includes +deformable mirrors for active/adaptive wavefront correction. The presented +numerical simulation and lab experimental results show low noise sensitivity, +high reliability and robustness of the proposed approach for high quality +optical wavefront restoration. Laboratory experiments have shown $\lambda$/14 +rms accuracy in retrieval of a poked DM actuator fiducial pattern with spatial +resolution of 20-30$~\mu$m that is comparable with accuracy of direct +high-resolution interferometric measurements. +" +Uplink Non-Orthogonal Multiple Access for 5G Wireless Networks," Orthogonal Frequency Division Multiple Access (OFDMA) as well as other +orthogonal multiple access techniques fail to achieve the system capacity limit +in the uplink due to the exclusivity in resource allocation. This issue is more +prominent when fairness among the users is considered in the system. Current +Non-Orthogonal Multiple Access (NOMA) techniques introduce redundancy by +coding/spreading to facilitate the users' signals separation at the receiver, +which degrade the system spectral efficiency. Hence, in order to achieve higher +capacity, more efficient NOMA schemes need to be developed. In this paper, we +propose a NOMA scheme for uplink that removes the resource allocation +exclusivity and allows more than one user to share the same subcarrier without +any coding/spreading redundancy. Joint processing is implemented at the +receiver to detect the users' signals. However, to control the receiver +complexity, an upper limit on the number of users per subcarrier needs to be +imposed. In addition, a novel subcarrier and power allocation algorithm is +proposed for the new NOMA scheme that maximizes the users' sum-rate. The +link-level performance evaluation has shown that the proposed scheme achieves +bit error rate close to the single-user case. 
Numerical results show that the
+proposed NOMA scheme can significantly improve the system performance in terms
+of spectral efficiency and fairness compared to OFDMA.
+"
+Epsilon-approximations and epsilon-nets," The use of random samples to approximate properties of geometric
+configurations has been an influential idea for both combinatorial and
+algorithmic purposes. This chapter considers two related
+notions---$\epsilon$-approximations and $\epsilon$-nets---that capture the most
+important quantitative properties that one would expect from a random sample
+with respect to an underlying geometric configuration.
+"
+A continuous framework for fairness," Increasingly, discrimination by algorithms is perceived as a societal and
+legal problem. As a response, a number of criteria for implementing algorithmic
+fairness in machine learning have been developed in the literature. This paper
+proposes the Continuous Fairness Algorithm (CFA$\theta$) which enables a
+continuous interpolation between different fairness definitions. More
+specifically, we make three main contributions to the existing literature.
+First, our approach allows the decision maker to continuously vary between
+concepts of individual and group fairness. As a consequence, the algorithm
+enables the decision maker to adopt intermediate ""worldviews"" on the degree of
+discrimination encoded in algorithmic processes, adding nuance to the extreme
+cases of ""we're all equal"" (WAE) and ""what you see is what you get"" (WYSIWYG)
+proposed so far in the literature. Second, we use optimal transport theory, and
+specifically the concept of the barycenter, to maximize decision maker utility
+under the chosen fairness constraints. Third, the algorithm is able to handle
+cases of intersectionality, i.e., of multi-dimensional discrimination of
+certain groups on grounds of several criteria.
We discuss three main examples
+(college admissions; credit application; insurance contracts) and map out the
+policy implications of our approach. The explicit formalization of the
+trade-off between individual and group fairness allows this post-processing
+approach to be tailored to different situational contexts in which one or the
+other fairness criterion may take precedence.
+"
+Counting Subwords Occurrences in Base-b Expansions," We count the number of distinct (scattered) subwords occurring in the base-b
+expansion of the non-negative integers. More precisely, we consider the
+sequence $(S_b(n))_{n\ge 0}$ counting the number of positive entries on each
+row of a generalization of the Pascal triangle to binomial coefficients of
+base-$b$ expansions. By using a convenient tree structure, we provide
+recurrence relations for $(S_b(n))_{n\ge 0}$ leading to the $b$-regularity of
+the latter sequence. Then we deduce the asymptotics of the summatory function
+of the sequence $(S_b(n))_{n\ge 0}$.
+"
+Tensor Renormalization Group with Randomized Singular Value Decomposition," An algorithm of the tensor renormalization group is proposed based on a
+randomized algorithm for singular value decomposition. Our algorithm is
+applicable to a broad range of two-dimensional classical models. In the case of
+a square lattice, its computational complexity and memory usage are
+proportional to the fifth and the third power of the bond dimension,
+respectively, whereas those of the conventional implementation are of the sixth
+and the fourth power. An oversampling parameter larger than the bond dimension
+is sufficient to reproduce the same result as full singular value decomposition
+even at the critical point of the two-dimensional Ising model.
+" +Cohomology of symplectic groups and Meyer's signature theorem," Meyer showed that the signature of a closed oriented surface bundle over a +surface is a multiple of $4$, and can be computed using an element of +$H^2(\mathsf{Sp}(2g, \mathbb{Z}),\mathbb{Z})$. Denoting by $1 \to \mathbb{Z} +\to \widetilde{\mathsf{Sp}(2g,\mathbb{Z})} \to \mathsf{Sp}(2g,\mathbb{Z}) \to +1$ the pullback of the universal cover of $\mathsf{ Sp}(2g,\mathbb{R})$, +Deligne proved that every finite index subgroup of $\widetilde{\mathsf {Sp}(2g, +\mathbb{Z})}$ contains $2\mathbb{Z}$. As a consequence, a class in the second +cohomology of any finite quotient of $\mathsf{Sp}(2g, \mathbb{Z})$ can at most +enable us to compute the signature of a surface bundle modulo $8$. We show that +this is in fact possible and investigate the smallest quotient of +$\mathsf{Sp}(2g, \mathbb{Z})$ that contains this information. This quotient +$\mathfrak{H}$ is a non-split extension of $\mathsf {Sp}(2g,2)$ by an +elementary abelian group of order $2^{2g+1}$. There is a central extension +$1\to \mathbb{Z}/2\to\tilde{\mathfrak{H}}\to\mathfrak{H}\to 1$, and +$\tilde{\mathfrak{H}}$ appears as a quotient of the metaplectic double cover +$\mathsf{Mp}(2g,\mathbb{Z})=\widetilde{\mathsf{Sp}(2g,\mathbb{Z})}/2\mathbb{Z}$. +It is an extension of $\mathsf{Sp}(2g,2)$ by an almost extraspecial group of +order $2^{2g+2}$, and has a faithful irreducible complex representation of +dimension $2^g$. Provided $g\ge 4$, $\widetilde{\mathfrak{H}}$ is the universal +central extension of $\mathfrak{H}$. Putting all this together, we provide a +recipe for computing the signature modulo $8$, and indicate some consequences. +" +The Newman--Shapiro problem," We give a negative answer to the Newman--Shapiro problem on weighted +approximation for entire functions formulated in 1966 and motivated by the +theory of operators on the Fock space. 
There exists a function in the Fock +space such that its exponential multiples do not approximate some entire +multiples in the space. Furthermore, we establish several positive results +under different restrictions on the function in question. +" +randUTV: A blocked randomized algorithm for computing a rank-revealing UTV factorization," This manuscript describes the randomized algorithm randUTV for computing a so +called UTV factorization efficiently. Given a matrix $A$, the algorithm +computes a factorization $A = UTV^{*}$, where $U$ and $V$ have orthonormal +columns, and $T$ is triangular (either upper or lower, whichever is preferred). +The algorithm randUTV is developed primarily to be a fast and easily +parallelized alternative to algorithms for computing the Singular Value +Decomposition (SVD). randUTV provides accuracy very close to that of the SVD +for problems such as low-rank approximation, solving ill-conditioned linear +systems, determining bases for various subspaces associated with the matrix, +etc. Moreover, randUTV produces highly accurate approximations to the singular +values of $A$. Unlike the SVD, the randomized algorithm proposed builds a UTV +factorization in an incremental, single-stage, and non-iterative way, making it +possible to halt the factorization process once a specified tolerance has been +met. Numerical experiments comparing the accuracy and speed of randUTV to the +SVD are presented. These experiments demonstrate that in comparison to column +pivoted QR, which is another factorization that is often used as a relatively +economic alternative to the SVD, randUTV compares favorably in terms of speed +while providing far higher accuracy. 
+" +"Continuity of nonlinear eigenvalues in $CD(K,\infty)$ spaces with respect to measured Gromov-Hausdorff convergence"," In this note we prove in the nonlinear setting of $CD(K,\infty)$ spaces the +stability of the Krasnoselskii spectrum of the Laplace operator $-\Delta$ under +measured Gromov-Hausdorff convergence, under an additional compactness +assumption satisfied, for instance, by sequences of $CD^*(K,N)$ metric measure +spaces with uniformly bounded diameter. Additionally, we show that every +element $\lambda$ in the Krasnoselskii spectrum is indeed an eigenvalue, namely +there exists a nontrivial $u$ satisfying the eigenvalue equation $- \Delta u = +\lambda u$. +" +Modeling WiFi Traffic for White Space Prediction in Wireless Sensor Networks," Cross Technology Interference (CTI) is a prevalent phenomenon in the 2.4 GHz +unlicensed spectrum causing packet losses and increased channel contention. In +particular, WiFi interference is a severe problem for low-power wireless +networks as its presence causes a significant degradation of the overall +performance. In this paper, we propose a proactive approach based on WiFi +interference modeling for accurately predicting transmission opportunities for +low-power wireless networks. We leverage statistical analysis of real-world +WiFi traces to learn aggregated traffic characteristics in terms of +Inter-Arrival Time (IAT) that, once captured into a specific 2nd order Markov +Modulated Poisson Process (MMPP(2)) model, enable accurate estimation of +interference. We further use a hidden Markov model (HMM) for channel occupancy +prediction. We evaluated the performance of i) the MMPP(2) traffic model w.r.t. +real-world traces and an existing Pareto model for accurately characterizing +the WiFi traffic and, ii) compared the HMM based white space prediction to +random channel access. We report encouraging results for using interference +modeling for white space prediction. 
+" +Dyson models under renormalization and in weak fields," We consider one-dimensional long-range spin models (usually called Dyson +models), consisting of Ising ferromagnets with slowly decaying long-range pair +potentials of the form $\frac{1}{|i-j|^{\alpha}}$ mainly focusing on the range +of slow decays $1 < \alpha \leq 2$. We describe two recent results, one about +renormalization and one about the effect of external fields at low temperature. +The first result states that a decimated long-range Gibbs measure in one +dimension becomes non-Gibbsian, in the same vein as comparable results in +higher dimensions for short-range models. The second result addresses the +behaviour of such models under inhomogeneous fields, in particular external +fields which decay to zero polynomially as $(|i|+1)^{- \gamma}$. We study how +the critical decay power of the field, $\gamma$, for which the phase transition +persists and the decay power $\alpha$ of the Dyson model compare, extending +recent results for short-range models on lattices and on trees. We also briefly +point out some analogies between these results. +" +Coupling geometry on binary bipartite networks: hypotheses testing on pattern geometry and nestedness," Upon a matrix representation of a binary bipartite network, via the +permutation invariance, a coupling geometry is computed to approximate the +minimum energy macrostate of a network's system. Such a macrostate is supposed +to constitute the intrinsic structures of the system, so that the coupling +geometry should be taken as information contents, or even the nonparametric +minimum sufficient statistics of the network data. Then pertinent null and +alternative hypotheses, such as nestedness, are to be formulated according to +the macrostate. That is, any efficient testing statistic needs to be a function +of this coupling geometry. 
These conceptual architectures and mechanisms are by
+and large still missing in community ecology literature, rendering
+misconceptions prevalent in this research area. Here the algorithmically
+computed coupling geometry is shown to consist of deterministic multiscale
+block patterns, which are framed by two marginal ultrametric trees on row and
+column axes, and stochastic uniform randomness within each block found on the
+finest scale. Functionally, a series of increasingly larger ensembles of matrix
+mimicries is derived by conforming to the multiscale block configurations. Here
+matrix mimicking is meant to be subject to constraints of row and column sum
+sequences. Based on such a series of ensembles, a profile of distributions
+becomes a natural device for checking the validity of testing statistics or
+structural indexes. An energy-based index is used for testing whether network
+data indeed contains structural geometry. A new block-based version of the
+nestedness index is also proposed. Its validity is checked and compared with
+the existing ones. A computing paradigm, called Data Mechanics, and its
+application on one real data network are illustrated throughout the
+developments and discussions in this paper.
+"
+"MUSE-inspired view of the quasar Q2059-360, its Lyman alpha blob, and its neighborhood"," The radio-quiet quasar Q2059-360 at redshift $z=3.08$ is known to be close to
+a small Lyman $\alpha$ blob (LAB) and to be absorbed by a proximate damped
+Ly$\alpha$ (PDLA) system.
+Here, we present the Multi Unit Spectroscopic Explorer (MUSE) integral field
+spectroscopy follow-up of this quasi-stellar object (QSO). Our primary goal is
+to characterize this LAB in detail by mapping it both spatially and spectrally
+using the Ly$\alpha$ line, and by looking for high-ionization lines to
+constrain the emission mechanism.
+Combining the high sensitivity of the MUSE integral field spectrograph +mounted on the Yepun telescope at ESO-VLT with the natural coronagraph provided +by the PDLA, we map the LAB down to the QSO position, after robust subtraction +of QSO light in the spectral domain. +In addition to confirming earlier results for the small bright component of +the LAB, we unveil a faint filamentary emission protruding to the south over +about 80 pkpc (physical kpc); this results in a total size of about 120 pkpc. +We derive the velocity field of the LAB (assuming no transfer effects) and map +the Ly$\alpha$ line width. Upper limits are set to the flux of the N V $\lambda +1238-1242$, C IV $\lambda 1548-1551$, He II $\lambda 1640$, and C III] $\lambda +1548-1551$ lines. We have discovered two probable Ly$\alpha$ emitters at the +same redshift as the LAB and at projected distances of 265 kpc and 207 kpc from +the QSO; their Ly$\alpha$ luminosities might well be enhanced by the QSO +radiation. We also find an emission line galaxy at $z=0.33$ near the line of +sight to the QSO. +This LAB shares the same general characteristics as the 17 others surrounding +radio-quiet QSOs presented previously. However, there are indications that it +may be centered on the PDLA galaxy rather than on the QSO. +" +Community Detection on Euclidean Random Graphs," We study the problem of community detection (CD) on Euclidean random +geometric graphs where each vertex has two latent variables: a binary community +label and a $\mathbb{R}^d$ valued location label which forms the support of a +Poisson point process of intensity $\lambda$. A random graph is then drawn with +edge probabilities dependent on both the community and location labels. In +contrast to the stochastic block model (SBM) that has no location labels, the +resulting random graph contains many more short loops due to the geometric +embedding. 
We consider the recovery of the community labels, partial and exact, +using the random graph and the location labels. We establish phase transitions +for both sparse and logarithmic degree regimes, and provide bounds on the +location of the thresholds, conjectured to be tight in the case of exact +recovery. We also show that the threshold of the distinguishability problem, +i.e., the testing between our model and the null model without community labels +exhibits no phase-transition and in particular, does not match the weak +recovery threshold (in contrast to the SBM). +" +Load balancing with heterogeneous schedulers," Load balancing is a common approach in web server farms or inventory routing +problems. An important issue in such systems is to determine the server to +which an incoming request should be routed to optimize a given performance +criteria. In this paper, we assume the server's scheduling disciplines to be +heterogeneous. More precisely, a server implements a scheduling discipline +which belongs to the class of limited processor sharing (LPS-$d$) scheduling +disciplines. Under LPS-$d$, up to $d$ jobs can be served simultaneously, and +hence, includes as special cases First Come First Served ($d=1$) and Processor +Sharing ($d=\infty$). +In order to obtain efficient heuristics, we model the above load-balancing +framework as a multi-armed restless bandit problem. Using the relaxation +technique, as first developed in the seminal work of Whittle, we derive +Whittle's index policy for general cost functions and obtain a closed-form +expression for Whittle's index in terms of the steady-state distribution. +Through numerical computations, we investigate the performance of Whittle's +index with two different performance criteria: linear cost criterion and a cost +criterion that depends on the first and second moment of the throughput. 
Our +results show that \emph{(i)} the structure of Whittle's index policy can +strongly depend on the scheduling discipline implemented in the server, i.e., +on $d$, and that \emph{(ii)} Whittle's index policy significantly outperforms +standard dispatching rules such as Join the Shortest Queue (JSQ), Join the +Shortest Expected Workload (JSEW), and Random Server allocation (RSA). +" +An Information Theoretic Framework for Active De-anonymization in Social Networks Based on Group Memberships," In this paper, a new mathematical formulation for the problem of +de-anonymizing social network users by actively querying their membership in +social network groups is introduced. In this formulation, the attacker has +access to a noisy observation of the group membership of each user in the +social network. When an unidentified victim visits a malicious website, the +attacker uses browser history sniffing to make queries regarding the victim's +social media activity. Particularly, it can make polar queries regarding the +victim's group memberships and the victim's identity. The attacker receives +noisy responses to her queries. The goal is to de-anonymize the victim with the +minimum number of queries. Starting with a rigorous mathematical model for this +active de-anonymization problem, an upper bound on the attacker's expected +query cost is derived, and new attack algorithms are proposed which achieve +this bound. These algorithms vary in computational cost and performance. The +results suggest that prior heuristic approaches to this problem provide +sub-optimal solutions. +" +Geobiodynamics and Roegenian Economic Systems," This mathematical essay brings together ideas from Economics, Geobiodynamics +and Thermodynamics. Its purpose is to obtain real models of complex +evolutionary systems. More specifically, the essay defines Roegenian Economy +and links Geobiodynamics and Roegenian Economy. 
In this context, we discuss the
+isomorphism between the concepts and techniques of Thermodynamics and
+Economics. Then we describe a Roegenian economic system as a Carnot group.
+We then analyse the phase equilibrium for two heterogeneous economic systems.
+The European Union economy appears as a Cartesian product of Roegenian
+economic systems, and its balance is analysed in detail. A section at the end
+describes ""economic black holes"" as small parts of a global economic system
+in which national income is so great that it causes the poor enrichment of
+others. These ideas can be used to improve our knowledge and understanding
+of the nature of development and evolution of thermodynamic-economic systems.
+"
+The effects of the overshooting of the convective core on main-sequence turnoffs of young- and intermediate-age star clusters," Recent investigations have shown that the extended main-sequence turnoffs
+(eMSTOs) are a common feature of intermediate-age star clusters in the
+Magellanic Clouds. The eMSTOs are also found in the color-magnitude diagram
+(CMD) of young-age star clusters. The origin of the eMSTOs is still an open
+question. Moreover, asteroseismology shows that the value of the overshooting
+parameter $\delta_{\rm ov}$ of the convective core is not fixed for stars
+with approximately equal mass. Thus the MSTO of star clusters may be
+affected by the overshooting of the convective core (OVCC). We calculated the
+effects of the OVCC with different $\delta_{\rm ov}$ on the MSTO of young- and
+intermediate-age star clusters. \textbf{If $\delta_{\rm ov}$ varies between
+stars in a cluster,} the observed eMSTOs of young- and intermediate-age star
+clusters can be explained well by these effects. The equivalent age spreads of
+the MSTO caused by the OVCC are related to the age of star clusters and are in
+good agreement with the observed results of many clusters.
Moreover, the observed eMSTOs
+of NGC 1856 are reproduced by coeval populations with different
+$\delta_{\rm ov}$. The eMSTOs of star clusters may be relevant to the effects
+of the OVCC. The effects of the OVCC \textbf{are similar to those of rotation
+in some respects. But the effects cannot result in a significant split of the
+main sequence of young star clusters at $m_{U}\lesssim 21$.} The presence of
+rapid rotation can make the split of the main sequence of young star clusters
+more significant.
+"
+First eigenvalue estimates of Dirichlet-to-Neumann operators on graphs," Following Escobar [Esc97] and Jammes [Jam15], we introduce two types of
+isoperimetric constants and give lower bound estimates for the first nontrivial
+eigenvalues of Dirichlet-to-Neumann operators on finite graphs with boundary,
+respectively.
+"
+Air-burst Generated Tsunamis," This paper examines the question of whether smaller asteroids that burst in
+the air over water can generate tsunamis that could pose a threat to distant
+locations. Such air-burst-generated tsunamis are qualitatively different from
+the more frequently studied earthquake-generated tsunamis, and differ as well
+from those generated by impacting asteroids. Numerical simulations are
+presented using the shallow water equations in several settings, demonstrating
+very little tsunami threat from this scenario. A model problem with an explicit
+solution that demonstrates and explains the same phenomena found in the
+computations is analyzed. We discuss the question of whether compressibility
+and dispersion are important effects that should be included, and show results
+from a more sophisticated model problem using the linearized Euler equations
+that begins to address this.
+" +A new design principle of robust onion-like networks self-organized in growth," Today's economy, production activity, and our life are sustained by social +and technological network infrastructures, while new threats of network attacks +by destructing loops have been found recently in network science. We inversely +take into account the weakness, and propose a new design principle for +incrementally growing robust networks. The networks are self-organized by +enhancing interwoven long loops. In particular, we consider the range-limited +approximation of linking by intermediations in a few hops, and show the strong +robustness in the growth without degrading efficiency of paths. Moreover, we +demonstrate that the tolerance of connectivity is reformable even from +extremely vulnerable real networks according to our proposed growing process +with some investment. These results may indicate a prospective direction to the +future growth of our network infrastructures. +" +Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization," Stochastic Gradient Descent (SGD) has played a central role in machine +learning. However, it requires a carefully hand-picked stepsize for fast +convergence, which is notoriously tedious and time-consuming to tune. Over the +last several years, a plethora of adaptive gradient-based algorithms have +emerged to ameliorate this problem. They have proved efficient in reducing the +labor of tuning in practice, but many of them lack theoretic guarantees even in +the convex setting. In this paper, we propose new surrogate losses to cast the +problem of learning the optimal stepsizes for the stochastic optimization of a +non-convex smooth objective function onto an online convex optimization +problem. This allows the use of no-regret online algorithms to compute optimal +stepsizes on the fly. 
In turn, this results in an SGD algorithm with self-tuned
+stepsizes that guarantees convergence rates that are automatically adaptive to
+the level of noise.
+"
+On Categorical Time Series Models With Covariates," We study the problem of stationarity and ergodicity for autoregressive
+multinomial logistic time series models which possibly include a latent process
+and are defined by a GARCH-type recursive equation. We improve considerably
+upon the existing results related to stationarity and ergodicity conditions of
+such models. Proofs are based on theory developed for chains with complete
+connections. This approach is based on a useful coupling technique which is
+utilized for studying ergodicity of more general finite-state stochastic
+processes. Such processes generalize finite-state Markov chains by assuming
+infinite order models of past values. For finite order Markov chains, we also
+discuss ergodicity properties when some strongly exogenous covariates are
+considered in the dynamics of the process.
+"
+Layered Coding for Energy Harvesting Communication Without CSIT," Due to stringent constraints on resources, it may be infeasible to acquire
+the current channel state information at the transmitter in energy harvesting
+communication systems. In this paper, we optimize an energy harvesting
+transmitter, communicating over a slow fading channel, using layered coding.
+The transmitter has access to the channel statistics, but does not know the
+exact channel state. In layered coding, the codewords are first designed for
+each of the channel states at different rates, and then the codewords are
+either time-multiplexed or superimposed before the transmission, leading to two
+transmission strategies. The receiver then decodes the information adaptively
+based on the realized channel state. The transmitter is equipped with a
+finite-capacity battery having non-zero internal resistance.
In each of the +transmission strategies, we first formulate and study an average rate +maximization problem with non-causal knowledge of the harvested power +variations. Further, assuming statistical knowledge and causal information of +the harvested power variations, we propose a sub-optimal algorithm, and compare +with the stochastic dynamic programming based solution and a greedy policy. +" +The automorphisms of Petit's algebras," Let $\sigma$ be an automorphism of a field $K$ with fixed field $F$. We study +the automorphisms of nonassociative unital algebras which are canonical +generalizations of the associative quotient algebras $K[t;\sigma]/fK[t;\sigma]$ +obtained when the twisted polynomial $f\in K[t;\sigma]$ is invariant, and were +first defined by Petit. We compute all their automorphisms if $\sigma$ commutes +with all automorphisms in ${\rm Aut}_F(K)$ and $n\geq m-1$, where $n$ is the +order of $\sigma$ and $m$ the degree of $f$,and obtain partial results for +$n 31$ K), the transport is +best explained by variable range hopping (VRH) model. Large magnitude of +resistivity in CPP mode indicates strong structural anisotropy. Seebeck +coefficient as a function of temperature measured in the range $90 - 300$ K, +also agrees well with the VRH model. The room temperature Seebeck coefficient +is found to be $139$ $\mu$V/K. VRH fittings of the resistivity and Seebeck +coefficient data indicate high degree of localization. +" +Ball in double hoop: demonstration model for numerical optimal control," Ball and hoop system is a well-known model for the education of linear +control systems. In this paper, we have a look at this system from another +perspective and show that it is also suitable for demonstration of more +advanced control techniques. 
In contrast to the standard use, we describe the
+dynamics of the system at full length; in addition to the mode where the ball
+rolls on the (outer) hoop we also consider the mode where the ball drops out of
+the hoop and enters a free-fall mode. Furthermore, we add another (inner) hoop
+in the center upon which the ball can land from the free-fall mode. This
+constitutes another mode of the hybrid description of the system. We present
+two challenging tasks for this model and show how they can be solved by
+trajectory generation and stabilization. We also describe how such a model can
+be built and experimentally verify the validity of our approach by solving the
+proposed tasks.
+"
+Forecasting market states," We propose a novel methodology to define, analyse and forecast market states.
+In our approach, market states are identified by a reference sparse precision
+matrix and a vector of expectation values. In our procedure, each multivariate
+observation is associated with a given market state according to a penalized
+likelihood maximization. The procedure is made computationally very efficient
+and can be used with a large number of assets. We demonstrate that this
+procedure successfully classifies different states of the markets in an
+unsupervised manner. In particular, we describe an experiment with one hundred
+log-returns and two states in which the methodology automatically associates
+one state with periods of average positive returns and the other state with
+periods of average negative returns, thereby spontaneously discovering the
+common classification of `bull' and `bear' markets. In another experiment, with
+again one hundred log-returns and two states, we demonstrate that this
+procedure can be efficiently used to forecast off-sample future market states
+with significant prediction accuracy. This methodology opens the way to a range
+of applications in risk management and trading strategies where the correlation
+structure plays a central role. 
+" +Planetary Ring Dynamics -- The Streamline Formalism -- 2. Theory of Narrow Rings and Sharp Edges," The present material covers the features of large scale ring dynamics in +perturbed flows that were not addressed in part 1 (astro-ph/1606.00759); this +includes an extensive coverage of all kinds of ring modes dynamics (except +density waves which have been covered in part 1), the origin of ring +eccentricities and mode amplitudes, and the issue of ring/gap confinement. This +still leaves aside a number of important dynamical issues relating to the ring +small scale structure, most notably the dynamics of self-gravitational wakes, +of local viscous overstabilities and of ballistic transport processes. +As this material is designed to be self-contained, there is some 30% overlap +with part 1. This work constitutes a preprint of Chapter 11 of the forthcoming +Cambridge University book on rings (Planetary Ring Systems, Matt Tiscareno and +Carl Murray, eds). +" +Parallel Streaming Wasserstein Barycenters," Efficiently aggregating data from different sources is a challenging problem, +particularly when samples from each source are distributed differently. These +differences can be inherent to the inference task or present for other reasons: +sensors in a sensor network may be placed far apart, affecting their individual +measurements. Conversely, it is computationally advantageous to split Bayesian +inference tasks across subsets of data, but data need not be identically +distributed across subsets. One principled way to fuse probability +distributions is via the lens of optimal transport: the Wasserstein barycenter +is a single distribution that summarizes a collection of input measures while +respecting their geometry. However, computing the barycenter scales poorly and +requires discretization of all input distributions and the barycenter itself. 
+
Improving on this situation, we present a scalable, communication-efficient,
+parallel algorithm for computing the Wasserstein barycenter of arbitrary
+distributions. Our algorithm can operate directly on continuous input
+distributions and is optimized for streaming data. Our method is even robust to
+nonstationary input distributions and produces a barycenter estimate that
+tracks the input measures over time. The algorithm is semi-discrete, needing to
+discretize only the barycenter estimate. To the best of our knowledge, we also
+provide the first bounds on the quality of the approximate barycenter as the
+discretization becomes finer. Finally, we demonstrate the practical
+effectiveness of our method, both in tracking moving distributions on a sphere
+and in a large-scale Bayesian inference task.
+"
+Lecar's visual comparison method to assess the randomness of Bode's law: an answer," The main objection usually raised against any attempt to find a physical
+cause for the planet distance distribution is based on the assumption that a
+similar distance distribution could be obtained from sequences of random
+numbers. This assumption was stated by Lecar in an old paper (1973). We show
+here that this assumption is incorrect and that his visual comparison method is
+inappropriate.
+"
+GoDP: Globally optimized dual pathway system for facial landmark localization in-the-wild," Facial landmark localization is a fundamental module for pose-invariant face
+recognition. The most common approach for facial landmark detection is cascaded
+regression, which is composed of two steps: feature extraction and facial shape
+regression. Recent methods employ deep convolutional networks to extract robust
+features for each step, while the whole system could be regarded as a deep
+cascaded regression architecture. 
In this work, instead of employing a deep +regression network, a Globally Optimized Dual-Pathway (GoDP) deep architecture +is proposed to identify the target pixels through solving a cascaded pixel +labeling problem without resorting to high-level inference models or complex +stacked architecture. The proposed end-to-end system relies on distance-aware +softmax functions and dual-pathway proposal-refinement architecture. Results +show that it outperforms the state-of-the-art cascaded regression-based methods +on multiple in-the-wild face alignment databases. The model achieves 1.84 +normalized mean error (NME) on the AFLW database, which outperforms 3DDFA by +61.8%. Experiments on face identification demonstrate that GoDP, coupled with +DPM-headhunter, is able to improve rank-1 identification rate by 44.2% compared +to Dlib toolbox on a challenging database. +" +"A function with support of finite measure and ""small"" spectrum"," We construct a function on the real line supported on a set of finite measure +whose spectrum has density zero. +" +Cell growth rate dictates the onset of glass to fluid-like transition and long time super-diffusion in an evolving cell colony," Collective migration dominates many phenomena, from cell movement in living +systems to abiotic self-propelling particles. Focusing on the early stages of +tumor evolution, we enunciate the principles involved in cell dynamics and +highlight their implications in understanding similar behavior in seemingly +unrelated soft glassy materials and possibly chemokine-induced migration of +CD8$^{+}$ T cells. We performed simulations of tumor invasion using a minimal +three dimensional model, accounting for cell elasticity and adhesive cell-cell +interactions as well as cell birth and death to establish that cell growth +rate-dependent tumor expansion results in the emergence of distinct topological +niches. 
Cells at the periphery move with higher velocity perpendicular to the +tumor boundary, while motion of interior cells is slower and isotropic. The +mean square displacement, $\Delta(t)$, of cells exhibits glassy behavior at +times comparable to the cell cycle time, while exhibiting super-diffusive +behavior, $\Delta (t) \approx t^{\alpha}$ ($\alpha > 1$), at longer times. We +derive the value of $\alpha \approx 1.33$ using a field theoretic approach +based on stochastic quantization. In the process we establish the universality +of super-diffusion in a class of seemingly unrelated non-equilibrium systems. +Super diffusion at long times arises only if there is an imbalance between cell +birth and death rates. Our findings for the collective migration, which also +suggests that tumor evolution occurs in a polarized manner, are in quantitative +agreement with {\it in vitro} experiments. Although set in the context of tumor +invasion the findings should also hold in describing collective motion in +growing cells and in active systems where creation and annihilation of +particles play a role. +" +Distributed Average Tracking for Lipschitz-Type Nonlinear Dynamical Systems," In this paper, a distributed average tracking problem is studied for +Lipschitz-type nonlinear dynamical systems. The objective is to design +distributed average tracking algorithms for locally interactive agents to track +the average of multiple reference signals. Here, in both the agents' and the +reference signals' dynamics, there is a nonlinear term satisfying the +Lipschitz-type condition. Three types of distributed average tracking +algorithms are designed. First, based on state-dependent-gain designing +approaches, a robust distributed average tracking algorithm is developed to +solve distributed average tracking problems without requiring the same initial +condition. 
Second, by using a gain adaption scheme, an adaptive distributed +average tracking algorithm is proposed in this paper to remove the requirement +that the Lipschitz constant is known for agents. Third, to reduce chattering +and make the algorithms easier to implement, a continuous distributed average +tracking algorithm based on a time-varying boundary layer is further designed +as a continuous approximation of the previous discontinuous distributed average +tracking algorithms. +" +On the Möbius Function and Topology of General Pattern Posets," We introduce a formal definition of a pattern poset which encompasses several +previously studied posets in the literature. Using this definition we present +some general results on the Möbius function and topology of such pattern +posets. We prove our results using a poset fibration based on the embeddings of +the poset, where embeddings are representations of occurrences. We show that +the Möbius function of these posets is intrinsically linked to the number of +embeddings, and in particular to so called normal embeddings. We present +results on when topological properties such as Cohen-Macaulayness and +shellability are preserved by this fibration. Furthermore, we apply these +results to some pattern posets and derive alternative proofs of existing +results, such as Björner's results on subword order. +" +A Potapov-type approach to a truncated matricial Stieltjes-type power moment problem," The paper gives a parametrization of the solution set of a matricial +Stieltjes-type truncated power moment problem in the non-degenerate and +degenerate cases. The key role plays the solution of the corresponding system +of Potapov's fundamental matrix inequalities. 
+" +The scaling limit of the KPZ equation in space dimension 3 and higher," We study in the present article the Kardar-Parisi-Zhang (KPZ) equation $$ +\partial_t h(t,x)=\nu\Delta h(t,x)+\lambda |\nabla h(t,x)|^2 +\sqrt{D}\, +\eta(t,x), \qquad (t,x)\in\mathbb{R}_+\times\mathbb{R}^d $$ in $d\ge 3$ +dimensions in the perturbative regime, i.e. for $\lambda>0$ small enough and a +smooth, bounded, integrable initial condition $h_0=h(t=0,\cdot)$. The forcing +term $\eta$ in the right-hand side is a regularized space-time white noise. The +exponential of $h$ -- its so-called Cole-Hopf transform -- is known to satisfy +a linear PDE with multiplicative noise. We prove a large-scale diffusive limit +for the solution, in particular a time-integrated heat-kernel behavior for the +covariance in a parabolic scaling. +The proof is based on a rigorous implementation of K. Wilson's +renormalization group scheme. A double cluster/momentum-decoupling expansion +allows for perturbative estimates of the bare resolvent of the Cole-Hopf linear +PDE in the small-field region where the noise is not too large, following the +broad lines of Iagolnitzer-Magnen. Standard large deviation estimates for +$\eta$ make it possible to extend the above estimates to the large-field +region. Finally, we show, by resumming all the by-products of the expansion, +that the solution $h$ may be written in the large-scale limit (after a suitable +Galilei transformation) as a small perturbation of the solution of the +underlying linear Edwards-Wilkinson model ($\lambda=0$) with renormalized +coefficients $\nu_{eff}=\nu+O(\lambda^2),D_{eff}=D+O(\lambda^2)$. +" +Beyond Free Riding: Quality of Indicators for Assessing Participation in Information Sharing for Threat Intelligence," Threat intelligence sharing has become a growing concept, whereby entities +can exchange patterns of threats with each other, in the form of indicators, to +a community of trust for threat analysis and incident response. 
However,
+sharing threat-related information has posed various risks to an organization
+pertaining to its security, privacy, and competitiveness. Given the
+coinciding benefits and risks of threat information sharing, some entities have
+adopted an elusive behavior of ""free-riding"" so that they can acquire the
+benefits of sharing without contributing much to the community. So far,
+understanding the effectiveness of sharing has been viewed from the perspective
+of the amount of information exchanged as opposed to its quality. In this
+paper, we introduce the notion of quality of indicators (\qoi) for the
+assessment of the level of contribution by participants in information sharing
+for threat intelligence. We exemplify this notion through various metrics,
+including correctness, relevance, utility, and uniqueness of indicators. In
+order to realize the notion of \qoi, we conducted an empirical study and took
+a benchmark approach to define quality metrics; we then obtained a reference
+dataset and utilized tools from the machine learning literature for quality
+assessment. We compared these results against a model that only considers the
+volume of information as a metric for contribution, and unveiled various
+interesting observations, including the ability to spot low-quality
+contributions that are synonymous with free riding in threat information
+sharing.
+"
+On the Design of LQR Kernels for Efficient Controller Learning," Finding optimal feedback controllers for nonlinear dynamic systems from data
+is hard. Recently, Bayesian optimization (BO) has been proposed as a powerful
+framework for direct controller tuning from experimental trials. For selecting
+the next query point and finding the global optimum, BO relies on a
+probabilistic description of the latent objective function, typically a
+Gaussian process (GP). As is shown herein, GPs with a common kernel choice can,
+however, lead to poor learning outcomes on standard quadratic control problems. 
+
For a first-order system, we construct two kernels that specifically leverage
+the structure of the well-known Linear Quadratic Regulator (LQR), yet retain
+the flexibility of Bayesian nonparametric learning. Simulations of uncertain
+linear and nonlinear systems demonstrate that the LQR kernels yield superior
+learning performance.
+"
+How does propaganda influence the opinion dynamics of a population?," The evolution of opinions in a population of individuals who constantly
+interact with a common source of user-generated content (i.e. the internet) and
+are also subject to propaganda is analyzed using computer simulations. The
+model is based on the bounded confidence approach. In the absence of
+propaganda, computer simulations show that the online population as a whole is
+either fragmented, polarized or in perfect harmony on a certain issue or
+ideology, depending on the uncertainty of individuals in accepting opinions not
+close to theirs. On applying the model to simulate radicalization, a
+proportion of the online population, subject to extremist propaganda,
+radicalizes depending on their pre-conceived opinions and opinion uncertainty.
+It is found that an optimal counter-propaganda that prevents radicalization is
+not necessarily centrist.
+"
+Explicit Solution for Constrained Stochastic Linear-Quadratic Control with Multiplicative Noise," We study in this paper a class of constrained linear-quadratic (LQ) optimal
+control problem formulations for the scalar-state stochastic system with
+multiplicative noise, which has various applications, especially in
+financial risk management. The linear constraint on both the control and state
+variables considered in our model destroys the elegant structure of the
+conventional LQ formulation and has blocked the derivation of an explicit
+control policy so far in the literature. 
We successfully derive in this paper +the analytical control policy for such a class of problems by utilizing the +state separation property induced from its structure. We reveal that the +optimal control policy is a piece-wise affine function of the state and can be +computed off-line efficiently by solving two coupled Riccati equations. Under +some mild conditions, we also obtain the stationary control policy for infinite +time horizon. We demonstrate the implementation of our method via some +illustrative examples and show how to calibrate our model to solve dynamic +constrained portfolio optimization problems. +" +Viewpoint Selection for Photographing Architectures," This paper studies the problem of how to choose good viewpoints for taking +photographs of architectures. We achieve this by learning from professional +photographs of world famous landmarks that are available on the Internet. +Unlike previous efforts devoted to photo quality assessment which mainly rely +on 2D image features, we show in this paper combining 2D image features +extracted from images with 3D geometric features computed on the 3D models can +result in more reliable evaluation of viewpoint quality. Specifically, we +collect a set of photographs for each of 15 world famous architectures as well +as their 3D models from the Internet. Viewpoint recovery for images is carried +out through an image-model registration process, after which a newly proposed +viewpoint clustering strategy is exploited to validate users' viewpoint +preferences when photographing landmarks. Finally, we extract a number of 2D +and 3D features for each image based on multiple visual and geometric cues and +perform viewpoint recommendation by learning from both 2D and 3D features using +a specifically designed SVM-2K multi-view learner, achieving superior +performance over using solely 2D or 3D features. We show the effectiveness of +the proposed approach through extensive experiments. 
The experiments also
+demonstrate that our system can be used to recommend viewpoints for rendering
+textured 3D models of buildings for use in architectural design, in
+addition to viewpoint evaluation of photographs and recommendation of
+viewpoints for photographing architectures in practice.
+"
+Geophysical tests for habitability in ice-covered ocean worlds," Geophysical measurements can reveal the structure of icy ocean worlds and
+cycling of volatiles. The associated density, temperature, sound speed, and
+electrical conductivity of such worlds thus characterize their habitability.
+To explore the variability and correlation of these parameters, and to provide
+tools for planning and data analyses, we develop 1-D calculations of internal
+structure, which use available constraints on the thermodynamics of aqueous
+MgSO$_4$, NaCl (as seawater), and NH$_3$, water ices, and silicate content.
+Limits in available thermodynamic data narrow the parameter space that can be
+explored: insufficient coverage in pressure, temperature, and composition for
+end-member salinities of MgSO$_4$ and NaCl, and for relevant water ices; and a
+dearth of suitable data for aqueous mixtures of Na-Mg-Cl-SO$_4$-NH$_3$. For
+Europa, ocean compositions that are oxidized and dominated by MgSO$_4$, vs
+reduced (NaCl), illustrate these gaps, but also show the potential for
+diagnostic and measurable combinations of geophysical parameters. The
+low-density rocky core of Enceladus may comprise hydrated minerals, or
+anhydrous minerals with high porosity comparable to Earth's upper mantle.
+Titan's ocean must be dense, but not necessarily saline, as previously noted,
+and may have little or no high-pressure ice at its base. Ganymede's siliceous
+interior is deepest among all known ocean worlds, and may contain multiple
+phases of high-pressure ice, which will become buoyant if the ocean is
+sufficiently salty. 
Callisto's likely near-eutectic ocean cannot be adequately modeled using +available data. Callisto may also lack high-pressure ices, but this cannot be +confirmed due to uncertainty in its moment of inertia. +" +A fast speed planning algorithm for robotic manipulators," We consider the speed planning problem for a robotic manipulator. In +particular, we present an algorithm for finding the time-optimal speed law +along an assigned path that satisfies velocity and acceleration constraints and +respects the maximum forces and torques allowed by the actuators. The addressed +optimization problem is a finite dimensional reformulation of the +continuous-time speed optimization problem, obtained by discretizing the speed +profile with N points. The proposed algorithm has linear complexity with +respect to N and to the number of degrees of freedom. Such complexity is the +best possible for this problem. Numerical tests show that the proposed +algorithm is significantly faster than algorithms already existing in +literature. +" +Linearly convergent stochastic heavy ball method for minimizing generalization error," In this work we establish the first linear convergence result for the +stochastic heavy ball method. The method performs SGD steps with a fixed +stepsize, amended by a heavy ball momentum term. In the analysis, we focus on +minimizing the expected loss and not on finite-sum minimization, which is +typically a much harder problem. While in the analysis we constrain ourselves +to quadratic loss, the overall objective is not necessarily strongly convex. +" +Applying the Spacecraft with a Solar Sail to Form the Climate on a Mars Base," This article is devoted to research the application of the spacecraft with a +solar sail to support the certain climatic conditions in an area of the Mars +surface. Authors propose principles of functioning of the spacecraft, intended +to create a light and thermal light spot in a predetermined area of the Martian +surface. 
A mathematical model of the spacecraft's motion under this orientation of the
+solar sail is considered and used in a motion simulation session, and an
+analysis of this motion is performed. We thus obtained the parameters of the
+stationary orbit of the spacecraft with a solar sail and give recommendations
+for the further application of such spacecraft to reflect sunlight onto a
+planet's surface.
+"
+Data processing pipeline for Herschel HIFI," {Context}. The HIFI instrument on the Herschel Space Observatory performed
+over 9100 astronomical observations, almost 900 of which were calibration
+observations in the course of the nearly four-year Herschel mission. The data
+from each observation had to be converted from raw telemetry into calibrated
+products and were included in the Herschel Science Archive. {Aims}. The HIFI
+pipeline was designed to provide robust conversion from raw telemetry into
+calibrated data throughout all phases of the HIFI mission. Pre-launch
+laboratory testing was supported, as were routine mission operations. {Methods}.
+A modular software design allowed components to be easily added, removed,
+amended and/or extended as the understanding of the HIFI data developed during
+and after mission operations. {Results}. The HIFI pipeline processed data from
+all HIFI observing modes within the Herschel automated processing environment
+as well as within an interactive environment. The same software can be used by
+the general astronomical community to reprocess any standard HIFI observation.
+The pipeline also recorded the consistency of processing results and provided
+automated quality reports. Many pipeline modules had been in use since the HIFI
+pre-launch instrument-level testing. {Conclusions}. Processing in steps
+facilitated data analysis to discover and address instrument artefacts and
+uncertainties. The availability of the same pipeline components from pre-launch
+throughout the mission made for well-understood, tested, and stable processing. 
+A smooth transition from one phase to the next significantly enhanced +processing reliability and robustness. +" +Regret Analysis for Continuous Dueling Bandit," The dueling bandit is a learning framework wherein the feedback information +in the learning process is restricted to a noisy comparison between a pair of +actions. In this research, we address a dueling bandit problem based on a cost +function over a continuous space. We propose a stochastic mirror descent +algorithm and show that the algorithm achieves an $O(\sqrt{T\log T})$-regret +bound under strong convexity and smoothness assumptions for the cost function. +Subsequently, we clarify the equivalence between regret minimization in dueling +bandit and convex optimization for the cost function. Moreover, when +considering a lower bound in convex optimization, our algorithm is shown to +achieve the optimal convergence rate in convex optimization and the optimal +regret in dueling bandit except for a logarithmic factor. +" +X-ray induced deuterium enrichment on N-rich organics in protoplanetary disks: an experimental investigation using synchrotron light," The deuterium enrichment of organics in the interstellar medium, +protoplanetary disks and meteorites has been proposed to be the result of +ionizing radiation. The goal of this study is to quantify the effects of soft +X-rays (0.1 - 2 keV), a component of stellar radiation fields illuminating +protoplanetary disks, on the refractory organics present in the disks. We +prepared tholins, nitrogen-rich complex organics, via plasma deposition and +used synchrotron radiation to simulate X-ray fluences in protoplanetary disks. +Controlled irradiation experiments at 0.5 and 1.3 keV were performed at the +SEXTANTS beam line of the SOLEIL synchrotron, and were followed by ex-situ +infrared, Raman and isotopic diagnostics. Infrared spectroscopy revealed the +loss of singly-bonded groups (N-H, C-H and R-N$\equiv$C) and the formation of +sp$^3$ carbon defects. 
Raman analysis revealed the introduction of defects and +structural amorphization. Finally, tholins were measured via secondary ion mass +spectrometry (SIMS), revealing that significant D-enrichment is induced by +X-ray irradiation. Our results are compared to previous experimental studies +involving the thermal degradation and electron irradiation of organics. The +penetration depth of soft X-rays in $\mu$m-sized tholins leads to volume rather +than surface modifications: lower energy X-rays (0.5 keV) induce a larger +D-enrichment than 1.3 keV X-rays, reaching a plateau for doses larger than 5 +$\times$ 10$^{27}$ eV cm$^{-3}$. Our work provides experimental evidence of a +new non-thermal pathway to deuterium fractionation of organic matter. +" +Parisian ruin of Brownian motion risk model over an infinite-time horizon," Let $B(t), t\in \mathbb{R}$ be a standard Brownian motion. In this paper, we +derive the exact asymptotics of the probability of Parisian ruin on infinite +time horizon for the following risk process \begin{align}\label{Rudef} +R_u^{\delta}(t)=e^{\delta t}\left(u+c\int^{t}_{0}e^{-\delta v}d +v-\sigma\int_{0}^{t}e^{-\delta v}d B(v)\right),\quad t\geq0, \end{align} where +$u\geq 0$ is the initial reserve, $\delta\geq0$ is the force of interest, $c>0$ +is the rate of premium and $\sigma>0$ is a volatility factor. +Further, we show the asymptotics of the Parisian ruin time of this risk +process. +" +"Exogeneity tests, incomplete models, weak identification and non-Gaussian distributions: invariance and finite-sample distributional theory"," We study the distribution of Durbin-Wu-Hausman (DWH) and Revankar-Hartley +(RH) tests for exogeneity from a finite-sample viewpoint, under the null and +alternative hypotheses. We consider linear structural models with possibly +non-Gaussian errors, where structural parameters may not be identified and +where reduced forms can be incompletely specified (or nonparametric). 
On level +control, we characterize the null distributions of all the test statistics. +Through conditioning and invariance arguments, we show that these distributions +do not involve nuisance parameters. In particular, this applies to several test +statistics for which no finite-sample distributional theory is yet available, +such as the standard statistic proposed by Hausman (1978). The distributions of +the test statistics may be non-standard -- so corrections to usual asymptotic +critical values are needed -- but the characterizations are sufficiently +explicit to yield finite-sample (Monte-Carlo) tests of the exogeneity +hypothesis. The procedures so obtained are robust to weak identification, +missing instruments or misspecified reduced forms, and can easily be adapted to +allow for parametric non-Gaussian error distributions. We give a general +invariance result (block triangular invariance) for exogeneity test statistics. +This property yields a convenient exogeneity canonical form and a parsimonious +reduction of the parameters on which power depends. In the extreme case where +no structural parameter is identified, the distributions under the alternative +hypothesis and the null hypothesis are identical, so the power function is +flat, for all the exogeneity statistics. However, as soon as identification +does not fail completely, this phenomenon typically disappears. +" +Pair formation of hard core bosons in flat band systems," Hard core bosons in a large class of one or two dimensional flat band systems +have an upper critical density, below which the ground states can be described +completely. At the critical density, the ground states are Wigner crystals. If +one adds a particle to the system at the critical density, the ground state and +the low lying multi particle states of the system can be described as a Wigner +crystal with an additional pair of particles. The energy band for the pair is +separated from the rest of the multi-particle spectrum. 
The proofs use a
+Gerschgorin-type argument for block diagonally dominant matrices. In certain
+one-dimensional or tree-like structures one can show that the pair is
+localised, for example in the chequerboard chain. For this one-dimensional
+system with periodic boundary conditions, the energy band for the pair is flat
+and the pair is localised.
+"
+"A Note on the Relationship Between Conditional and Unconditional Independence, and its Extensions for Markov Kernels"," Two known results on the relationship between conditional and unconditional
+independence are obtained as a consequence of the main result of this paper, a
+theorem that uses independence of Markov kernels to obtain a minimal condition
+which, added to conditional independence, implies independence. Some
+counterexamples and representation results are provided to clarify the concepts
+introduced and the propositions of the statement of the main theorem. Moreover,
+conditional independence and the mentioned results are extended to the
+framework of Markov kernels.
+"
+The Effect of Temperature on Amdahl Law in 3D Multicore Era," This work studies the influence of temperature on the performance and
+scalability of 3D Chip Multiprocessors (CMP) from the perspective of Amdahl's
+law. We find that a 3D CMP may reach its thermal limit before reaching its
+maximum power. We show that a high level of parallelism may lead to high peak
+temperatures even in small-scale 3D CMPs, thus limiting 3D CMP scalability and
+calling for different, in-memory computing architectures.
+"
+Video Object Segmentation using Supervoxel-Based Gerrymandering," Pixels operate locally. Superpixels have some potential to collect
+information across many pixels; supervoxels have more potential by implicitly
+operating across time. In this paper, we explore this well-established notion,
+thoroughly analyzing how supervoxels can be used in place of and in conjunction
+with other means of aggregating information across space-time. 
Focusing on the
problem of strictly unsupervised video object segmentation, we devise a method
called supervoxel gerrymandering that links masks of foregroundness and
backgroundness via local and non-local consensus measures. We pose and answer a
series of critical questions about the ability of supervoxels to adequately
sway local voting; the questions regard the type and scale of supervoxels as
well as local versus non-local consensus, and they are posed in a general way
so as to inform the broader use of supervoxels in video
understanding. We work with the DAVIS dataset and find that our analysis yields
an unsupervised method that outperforms all other known unsupervised methods
and even many supervised ones.
"
The Absent-Minded Driver Problem Redux," This paper reconsiders the problem of the absent-minded driver who must
choose between alternatives with different payoffs under imperfect recall and
varying degrees of knowledge of the system. The classical absent-minded driver
problem represents the case with limited information, and it has bearing on the
general areas of communication and learning, social choice, mechanism design,
auctions, theories of knowledge, belief, and rational agency. Within the
framework of extensive games, this problem has applications to many artificial
intelligence scenarios. Clearly, the performance of the agent
improves as the available information increases. It is shown that a non-uniform
assignment strategy for successive choices does better than a fixed-probability
strategy. We consider both classical and quantum approaches to the problem. We
argue that the superior performance of quantum decisions with access to
entanglement cannot be fairly compared to a classical algorithm. If the
cognitive systems of agents are taken to have access to quantum resources, or
to have a quantum mechanical basis, then that can be leveraged into superior
performance. 
+
"
Real-time Acceleration-continuous Path-constrained Trajectory Planning With Built-in Tradability Between Cruise and Time-optimal Motions," In this paper, a novel real-time acceleration-continuous path-constrained
trajectory planning algorithm is proposed with an appealing built-in
tradability mechanism between cruise motion and time-optimal motion. Different
from existing approaches, the proposed approach smooths time-optimal
trajectories with bang-bang input structures to generate
acceleration-continuous trajectories while preserving the completeness
property. More importantly, a novel built-in tradability mechanism is proposed
and embedded into the trajectory planning framework, so that the proportion of
the cruise motion and time-optimal motion can be flexibly adjusted by changing
a user-specified functional parameter. Thus, the user can easily apply the
trajectory planning algorithm to various tasks with different requirements on
motion efficiency and cruise proportion. Moreover, it is shown that feasible
trajectories are computed more quickly than optimal trajectories. Rigorous
mathematical analysis and proofs are provided for the aforementioned results.
Comparative simulation and experimental results on omnidirectional wheeled
mobile robots demonstrate the capability of the proposed algorithm in terms of
flexible tuning between cruise and time-optimal motions, as well as higher
computational efficiency.
"
Multi-locus data distinguishes between population growth and multiple merger coalescents," We introduce a low-dimensional function of the site frequency spectrum that
is tailor-made for distinguishing coalescent models with multiple mergers from
Kingman coalescent models with population growth, and use this function to
construct a hypothesis test between these model classes. 
The null and
alternative sampling distributions of the statistic are intractable, but its
low dimensionality renders them amenable to Monte Carlo estimation. We
construct kernel density estimates of the sampling distributions based on
simulated data, and show that the resulting hypothesis test dramatically
improves on the statistical power of a current state-of-the-art method. A key
reason for this improvement is the use of multi-locus data, in particular
averaging observed site frequency spectra across unlinked loci to reduce
sampling variance. We also demonstrate the robustness of our method to nuisance
and tuning parameters. Finally, we show that the same kernel density estimates
can be used to conduct parameter estimation, and argue that our method is
readily generalisable for applications in model selection, parameter inference
and experimental design.
"
Ramsey interferometry of Rydberg ensembles inside microwave cavities," We study ensembles of Rydberg atoms in a confined electromagnetic environment
such as that provided by a microwave cavity. The competition between standard
free-space Ising-type and cavity-mediated interactions leads to the emergence
of different regimes where the particle-particle couplings range from the
typical van der Waals $r^{-6}$ behavior to $r^{-3}$ and to $r$-independence. We
apply a Ramsey spectroscopic technique to map the two-body interactions into
characteristic signals such as intensity and contrast decay curves. As opposed
to previous treatments requiring high densities for considerable contrast and
phase decay, the cavity scenario can exhibit similar behavior at much lower
densities.
"
Impurity self-energy in the strongly-correlated Bose systems," We propose a non-perturbative scheme for the calculation of the impurity
spectrum in a Bose system at zero temperature. 
The method is based on the
path-integral formulation and describes an impurity as a zero-density ideal
Fermi gas interacting with a Bose system for which the action is written in
terms of density fluctuations. For the example of a $^3$He atom immersed in
liquid helium-4, good agreement with experimental data and with the results of
Monte Carlo simulations is shown.
"
$k$-step correction for mixed integer linear programming: a new approach for instrumental variable quantile regressions and related problems," This paper proposes a new framework for estimating instrumental variable (IV)
quantile models. The first part of our proposal can be cast as a mixed integer
linear program (MILP), which allows us to capitalize on recent progress in
mixed integer optimization. The computational advantage of the proposed method
makes it an attractive alternative to existing estimators in the presence of
multiple endogenous regressors. This is a situation that arises naturally when
one endogenous variable is interacted with several other variables in a
regression equation. In our simulations, the proposed method using MILP with a
random starting point can reliably estimate regressions for a sample size of
500 with 20 endogenous variables in 5 seconds. Theoretical results for early
termination of MILP are also provided. The second part of our proposal is a
$k$-step correction framework, which is proved to be able to convert any point
within a small but fixed neighborhood of the true parameter value into an
estimate that is asymptotically equivalent to GMM. Our result does not require
the initial estimate to be consistent and only $2\log n$ iterations are needed.
Since the $k$-step correction does not require any optimization, applying the
$k$-step correction to the MILP estimate provides a computationally attractive
way of obtaining efficient estimators. 
When dealing with very large data sets, we
can run the MILP algorithm on only a small subsample, and our theoretical
results guarantee that the resulting estimator from the $k$-step correction is
equivalent to computing GMM on the full sample. As a result, we can handle
massive datasets of millions of observations within seconds. As an empirical
illustration, we examine the heterogeneous treatment effect of the Job Training
Partnership Act (JTPA) using a regression with 13 interaction terms of the
treatment variable.
"
An MCMC Algorithm for Estimating the Reduced RUM," The RRUM is a model that is frequently seen in language assessment studies.
The objective of this research is to advance an MCMC algorithm for the Bayesian
RRUM. The algorithm starts with estimating correlated attributes. Using a
saturated model and a binary decimal conversion, the algorithm transforms
possible attribute patterns to a Multinomial distribution. Along with the
likelihood of an attribute pattern, a Dirichlet distribution is used as the
prior to sample from the posterior. The Dirichlet distribution is constructed
using Gamma distributions. Correlated attributes of examinees are generated
using inverse transform sampling. Model parameters are estimated sequentially
using a Metropolis-within-Gibbs sampler. Two simulation studies are
conducted to evaluate the performance of the algorithm. The first simulation
uses a complete and balanced Q-matrix that measures 5 attributes. Comprising
28 items and 9 attributes, the Q-matrix for the second simulation is incomplete
and imbalanced. The empirical study uses the ECPE data obtained from the CDM R
package. Parameter estimates from the MCMC algorithm and from the CDM R package
are presented and compared. The algorithm developed in this research is
implemented in R. 
+
"
Stretching p-wave molecules by transverse confinements," We revisit the confinement-induced p-wave resonance in quasi-one-dimensional
(quasi-1D) atomic gases and study the induced molecules near resonance. We
derive the reduced 1D interaction parameters and show that they can well
predict the binding energy of shallow molecules in a quasi-1D system.
Importantly, these shallow molecules are found to be much more spatially
extended than those in three dimensions (3D) without transverse
confinement. Our results strongly indicate that a p-wave interacting atomic gas
can be much more stable in quasi-1D near the induced p-wave resonance, where
most of the molecule's weight lies outside the short-range regime and thus
atom loss could be suppressed.
"
A Conceptual Framework for Supporting a Rapid Design of Web Applications for Data Analysis of Electrical Quality Assurance Data for the LHC," The Large Hadron Collider (LHC) is one of the most complex machines ever
built. It is composed of many components which constitute a large system. The
tunnel and the accelerator are just one very critical fraction of the whole
LHC infrastructure. Hardware commissioning, one of the critical processes
before running the LHC, is carried out during the Long Shutdown (LS) states of
the machine, where Electrical Quality Assurance (ELQA) is one of its key
components. A huge amount of data is collected when performing the various
ELQA electrical tests. In this paper, we present a conceptual framework for
supporting a rapid design of web applications for ELQA data analysis. We show
the framework's main components, their possible integration with other systems
and machine learning algorithms, and a simple use case of prototyping an
application for Electrical Quality Assurance of the LHC. 
+" +The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations," The Parallel Meaning Bank is a corpus of translations annotated with shared, +formal meaning representations comprising over 11 million words divided over +four languages (English, German, Italian, and Dutch). Our approach is based on +cross-lingual projection: automatically produced (and manually corrected) +semantic annotations for English sentences are mapped onto their word-aligned +translations, assuming that the translations are meaning-preserving. The +semantic annotation consists of five main steps: (i) segmentation of the text +in sentences and lexical items; (ii) syntactic parsing with Combinatory +Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and +(v) compositional semantic analysis based on Discourse Representation Theory. +These steps are performed using statistical models trained in a semi-supervised +manner. The employed annotation models are all language-neutral. Our first +results are promising. +" +A nonparametric copula approach to conditional Value-at-Risk," Value-at-Risk and its conditional allegory, which takes into account the +available information about the economic environment, form the centrepiece of +the Basel framework for the evaluation of market risk in the banking sector. In +this paper, a new nonparametric framework for estimating this conditional +Value-at-Risk is presented. A nonparametric approach is particularly pertinent +as the traditionally used parametric distributions have been shown to be +insufficiently robust and flexible in most of the equity-return data sets +observed in practice. The method extracts the quantile of the conditional +distribution of interest, whose estimation is based on a novel estimator of the +density of the copula describing the dynamic dependence observed in the series +of returns. 
Real-world back-testing analyses demonstrate the potential of the
approach, whose performance may be superior to that of its industry
counterparts.
"
Towards High-quality Visualization of Superfluid Vortices," Superfluidity is a special state of matter exhibiting macroscopic quantum
phenomena and acting like a fluid with zero viscosity. In such a state,
superfluid vortices exist as phase singularities of the model equation with
unique distributions. This paper presents novel techniques to aid the visual
understanding of superfluid vortices based on the state-of-the-art non-linear
Klein-Gordon equation, which evolves a complex scalar field, giving rise to
special vortex lattice/ring structures with dynamic vortex formation,
reconnection, and Kelvin waves, etc. By formulating a numerical model with
theoretical physicists in superfluid research, we obtain high-quality
superfluid flow data sets without noise-like waves, suitable for vortex
visualization. By further exploring superfluid vortex properties, we develop a
new vortex identification and visualization method: a novel mechanism with
velocity circulation to overcome phase singularity and an orthogonal-plane
strategy to avoid ambiguity. Hence, our visualizations can help reveal various
superfluid vortex structures and enable domain experts to perform related
visual analyses of, for example, the steady vortex lattice/ring structures and
the dynamic vortex string interactions with reconnections and energy
radiations, where the famous Kelvin waves and decaying vortex tangle were
clearly observed. These visualizations have assisted physicists in verifying
the superfluid model and in exploring its dynamic behavior more intuitively.
"
Nonrepetitive edge-colorings of trees," A repetition is a sequence of symbols in which the first half is the same as
the second half. An edge-coloring of a graph is repetition-free or
nonrepetitive if there is no path with a color pattern that is a repetition. 
+
The minimum number of colors needed for a nonrepetitive edge-coloring of a
graph is called its Thue edge-chromatic number.
We improve on the best known general upper bound of $4\Delta-4$ for the Thue
edge-chromatic number of trees of maximum degree $\Delta$ due to Alon,
Grytczuk, Ha{\l}uszczak and Riordan (2002) by providing a simple nonrepetitive
edge-coloring with $3\Delta-2$ colors.
"
On the new wave behavior to the longitudinal wave equation in a magneto-electro-elastic circular rod," With the aid of the symbolic computation software Wolfram Mathematica 9,
the powerful sine-Gordon expansion method is used to examine the analytical
solution of the longitudinal wave equation in a magneto-electro-elastic
circular rod. The sine-Gordon expansion method is based on the well-known
sine-Gordon equation and a wave transformation. The longitudinal wave equation
is an equation that arises in mathematical physics with dispersion caused by
the transverse Poisson's effect in a magneto-electro-elastic circular rod. We
successfully obtain solutions with complex, trigonometric and hyperbolic
function structures. We present numerical simulations of all the obtained
solutions by choosing appropriate values of the parameters. We give the
physical meanings of some of the obtained analytical solutions, which
significantly explain some practical physical problems.
"
Formulation of Deep Reinforcement Learning Architecture Toward Autonomous Driving for On-Ramp Merge," Multiple automakers have in development or in production automated driving
systems (ADS) that offer freeway-pilot functions. This type of ADS is typically
limited to restricted-access freeways only; that is, the transition from manual
to automated modes takes place only after the ramp merging process is completed
manually. One major challenge in extending the automation to ramp merging is
that the automated vehicle needs to incorporate and optimize long-term
objectives (e.g. 
successful and smooth merge) when near-term actions must be safely
executed. Moreover, the merging process involves interactions with other
vehicles whose behaviors are sometimes hard to predict but may influence the
merging vehicle's optimal actions. To tackle such a complicated control
problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for
finding an optimal driving policy by maximizing the long-term reward in an
interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM)
architecture to model the interactive environment, from which an internal state
containing historical driving information is conveyed to a Deep Q-Network
(DQN). The DQN is used to approximate the Q-function, which takes the internal
state as input and generates Q-values as output for action selection. With this
DRL architecture, the historical impact of the interactive environment on the
long-term reward can be captured and taken into account for deciding the
optimal control policy. The proposed architecture has the potential to be
extended and applied to other autonomous driving scenarios such as driving
through a complex intersection or changing lanes under varying traffic flow
conditions.
"
CDS Rate Construction Methods by Machine Learning Techniques," Regulators require financial institutions to estimate counterparty default
risks from liquid CDS quotes for the valuation and risk management of OTC
derivatives. However, the vast majority of counterparties do not have liquid
CDS quotes and need proxy CDS rates. Existing methods cannot account for
counterparty-specific default risks; we propose to construct proxy CDS rates by
associating a liquid CDS proxy to each illiquid counterparty using Machine
Learning techniques. After testing 156 classifiers from the 8 most popular
classifier families, we found that some classifiers achieve highly satisfactory
accuracy rates. 
Furthermore, we have rank-ordered
the performances and investigated
performance variations amongst and within the 8 classifier families. This paper
is, to the best of our knowledge, the first systematic study of CDS Proxy
construction by Machine Learning techniques, and the first systematic
classifier comparison study based entirely on financial market data. Its
findings both confirm and contrast with the existing classifier performance
literature. Given the typically highly correlated nature of financial data, we
investigated the impact of correlation on classifier performance. The
techniques used in this paper should be of interest to financial institutions
seeking a CDS Proxy method, and can serve for proxy construction for other
financial variables. Some directions for future research are indicated.
"
Eigentriads and Eigenprogressions on the Tonnetz," We introduce a new multidimensional representation, named the eigenprogression
transform, that characterizes some essential patterns of Western tonal harmony
while being equivariant to time shifts and pitch transpositions. This
representation is deep, multiscale, and convolutional in the piano-roll domain,
yet incurs no prior training, and is thus suited to both supervised and
unsupervised MIR tasks. The eigenprogression transform combines ideas from the
spiral scattering transform, spectral graph theory, and wavelet shrinkage
denoising. We report state-of-the-art results on a task of supervised composer
recognition (Haydn vs. Mozart) from polyphonic music pieces in MIDI format.
"
A generalization of the Log Lindley distribution -- its properties and applications," An extension of the two-parameter Log-Lindley distribution of Gomez et al.
(2014) with support in (0, 1) is proposed. Its important properties, like the
cumulative distribution function, moments, survival function, hazard rate
function, Shannon entropy, stochastic ordering and convexity (concavity)
conditions, are derived. 
An application to the distorted premium principle is
outlined, and parameter estimation by the method of maximum likelihood is also
presented. We also consider the use of a re-parameterized form of the proposed
distribution in regression modeling for bounded responses, modeling
real-life data in comparison with the beta regression and log-Lindley
regression models.
"
A Flux Conserving Meshfree Method for Conservation Laws," Lack of conservation has been the biggest drawback of meshfree generalized
finite difference methods (GFDMs). In this paper, we present a novel
modification of classical meshfree GFDMs to include local balances which
produce an approximate conservation of numerical fluxes. This numerical flux
conservation is done within the usual moving least squares framework. Unlike
Finite Volume Methods, it is based on locally defined control cells, rather
than a globally defined mesh. We present the application of this method to an
advection-diffusion equation and the incompressible Navier-Stokes equations.
Our simulations show that the introduction of flux conservation significantly
reduces the conservation errors of meshfree GFDMs.
"
Anomaly detection in the dynamics of web and social networks," In this work, we propose a new, fast and scalable method for anomaly
detection in large time-evolving graphs. It may be a static graph with dynamic
node attributes (e.g. time series), or a graph evolving in time, such as a
temporal network. We define an anomaly as a localized increase in temporal
activity in a cluster of nodes. The algorithm is unsupervised. It is able to
detect and track anomalous activity in a dynamic network despite the noise from
multiple interfering sources. We use the Hopfield network model of memory to
combine the graph and time information. We show that anomalies can be spotted
with good precision using a memory network. 
The presented approach is +scalable and we provide a distributed implementation of the algorithm. To +demonstrate its efficiency, we apply it to two datasets: Enron Email dataset +and Wikipedia page views. We show that the anomalous spikes are triggered by +the real-world events that impact the network dynamics. Besides, the structure +of the clusters and the analysis of the time evolution associated with the +detected events reveals interesting facts on how humans interact, exchange and +search for information, opening the door to new quantitative studies on +collective and social behavior on large and dynamic datasets. +" +Local coefficients and the Herbert Formula," We discuss a generalisation of the Herbert formula for double points, when +the normal bundle of an immersion admits an additional structure, and an +application. +" +Emission line ratios of Fe III as astrophysical plasma diagnostics," Recent state-of-the-art calculations of A-values and electron impact +excitation rates for Fe III are used in conjunction with the Cloudy modeling +code to derive emission line intensity ratios for optical transitions among the +fine-structure levels of the 3d$^6$ configuration. A comparison of these with +high resolution, high signal-to-noise spectra of gaseous nebulae reveals that +previous discrepancies found between theory and observation are not fully +resolved by the latest atomic data. Blending is ruled out as a likely cause of +the discrepancies, because temperature- and density-independent ratios (arising +from lines with common upper levels) match well with those predicted by theory. 
+For a typical nebular plasma with electron temperature $T_{\rm e} = 9000$ K and +electron density $\rm N_{e}=10^4 \, cm^{-3}$, cascading of electrons from the +levels $\rm ^3G_5$, $\rm ^3G_4$ and $\rm ^3G_3$ plays an important role in +determining the populations of lower levels, such as $\rm ^3F_4$, which provide +the density diagnostic emission lines of Fe III, such as $\rm ^5D_4$ - $\rm +^3F_4$ at 4658 \AA. Hence further work on the A-values for these transitions is +recommended, ideally including measurements if possible. However, some Fe III +ratios do provide reliable $N_{\rm e}$-diagnostics, such as 4986/4658. The Fe +III cooling function calculated with Cloudy using the most recent atomic data +is found to be significantly greater at $T_e$ $\simeq$ 30000 K than predicted +with the existing Cloudy model. This is due to the presence of additional +emission lines with the new data, particularly in the 1000--4000 \AA\ +wavelength region. +" +"Warped cones, (non-)rigidity, and piecewise properties, with a joint appendix with Dawid Kielak"," We prove that if a quasi-isometry of warped cones is induced by a map between +the base spaces of the cones, the actions must be conjugate by this map. The +converse is false in general, conjugacy of actions is not sufficient for +quasi-isometry of the respective warped cones. For a general quasi-isometry of +warped cones, using the asymptotically faithful covering constructed in a +previous work with Jianchao Wu, we deduce that the two groups are +quasi-isometric after taking Cartesian products with suitable powers of the +integers. +Secondly, we characterise geometric properties of a group (coarse +embeddability into Banach spaces, asymptotic dimension, property A) by +properties of the warped cone over an action of this group. These results apply +to arbitrary asymptotically faithful coverings, in particular to box spaces. 
As +an application, we calculate the asymptotic dimension of a warped cone, improve +bounds by Szabó, Wu, and Zacharias and by Bartels on the amenability +dimension of actions of virtually nilpotent groups, and give a partial answer +to a question of Willett about dynamic asymptotic dimension. +In the appendix, we justify optimality of the aforementioned result on +general quasi-isometries by showing that quasi-isometric warped cones need not +come from quasi-isometric groups, contrary to the case of box spaces. +" +Enhancing MapReduce Fault Recovery Through Binocular Speculation," MapReduce speculation plays an important role in finding potential task +stragglers and failures. But a tacit dichotomy exists in MapReduce due to its +inherent two-phase (map and reduce) management scheme in which map tasks and +reduce tasks have distinctly different execution behaviors, yet reduce tasks +are dependent on the results of map tasks. We reveal that speculation policies +for fault handling in MapReduce do not recognize this dichotomy between map and +reduce tasks, which leads to an issue of speculation myopia for MapReduce fault +recovery. These issues cause significant performance degradation upon network +and node failures. To address the speculation myopia caused by MapReduce +dichotomy, we introduce a new scheme called binocular speculation to help +MapReduce increase its assessment scope for speculation. As part of the scheme, +we also design three component techniques including neighborhood glance, +collective speculation and speculative rollback. Our evaluation shows that, +with these techniques, binocular speculation can increase the coordination of +map and reduce phases, and enhance the efficiency of MapReduce fault recovery. 
+
"
Generalized classes of continuous symmetries in two-mode Dicke models," As recently realized experimentally [Léonard et al., Nature 543, 87
(2017)], one can engineer models with continuous symmetries by coupling two
cavity modes to trapped atoms via a Raman pumping geometry. Considering
specifically cases where internal states of the atoms couple to the cavity, we
show an extended range of parameters for which continuous symmetry breaking can
occur, and we classify the distinct steady states and time-dependent states
that arise for different points in this extended parameter regime.
"
The strong Prikry property," I isolate a combinatorial property of a poset $\mathbb{P}$ that I call the
strong Prikry property, which implies the existence of an ultrafilter $U$ on
the complete Boolean algebra $\mathbb{B}$ of $\mathbb{P}$ such that one
inclusion of the Boolean ultrapower version of the so-called Bukovský-Dehornoy
phenomenon holds with respect to $\mathbb{B}$ and $U$. I show that in all cases
that were previously studied, and for which it was shown that they come with a
canonical iterated ultrapower construction whose limit can be described as a
single Boolean ultrapower, the posets in question satisfy this property: Prikry
forcing, Magidor forcing and generalized Prikry forcing.
"
Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms," We consider the problem of optimization of cost functionals on the
infinite-dimensional manifold of diffeomorphisms. We present a new class of
optimization methods, valid for any optimization problem set up on the space of
diffeomorphisms, by generalizing Nesterov accelerated optimization to the
manifold of diffeomorphisms. While our framework is general for infinite
dimensional manifolds, we specifically treat the case of diffeomorphisms,
motivated by optical flow problems in computer vision. 
This is accomplished by +building on a recent variational approach to a general class of accelerated +optimization methods by Wibisono, Wilson and Jordan, which applies in finite +dimensions. We generalize that approach to infinite dimensional manifolds. We +derive the surprisingly simple continuum evolution equations, which are partial +differential equations, for accelerated gradient descent, and relate it to +simple mechanical principles from fluid mechanics. Our approach has natural +connections to the optimal mass transport problem. This is because one can +think of our approach as an evolution of an infinite number of particles +endowed with mass (represented with a mass density) that moves in an energy +landscape. The mass evolves with the optimization variable, and endows the +particles with dynamics. This is different than the finite dimensional case +where only a single particle moves and hence the dynamics does not depend on +the mass. We derive the theory, compute the PDEs for accelerated optimization, +and illustrate the behavior of these new accelerated optimization schemes. +" +Learning Generalizable Robot Skills from Demonstrations in Cluttered Environments," Learning from Demonstration (LfD) is a popular approach to endowing robots +with skills without having to program them by hand. Typically, LfD relies on +human demonstrations in clutter-free environments. This prevents the +demonstrations from being affected by irrelevant objects, whose influence can +obfuscate the true intention of the human or the constraints of the desired +skill. However, it is unrealistic to assume that the robot's environment can +always be restructured to remove clutter when capturing human demonstrations. +To contend with this problem, we develop an importance weighted batch and +incremental skill learning approach, building on a recent inference-based +technique for skill representation and reproduction. 
Our approach reduces +unwanted environmental influences on the learned skill, while still capturing +the salient human behavior. We provide both batch and incremental versions of +our approach and validate our algorithms on a 7-DOF JACO2 manipulator with +reaching and placing skills. +" +Women are slightly more cooperative than men (in one-shot Prisoner's dilemma games played online)," Differences between men and women have intrigued generations of social +scientists, who have found that the two sexes behave differently in settings +requiring competition, risk taking, altruism, honesty, as well as many others. +Yet, little is known about whether there are gender differences in cooperative +behavior. Previous evidence is mixed and inconclusive. Here I shed light on +this topic by analyzing the totality of studies that my research group has +conducted since 2013. This is a dataset of 10,951 observations coming from +7,322 men and women living in the US, recruited through Amazon Mechanical Turk, +and who passed four comprehension questions to make sure they understand the +cooperation problem (a one-shot prisoner's dilemma). The analysis demonstrates +that women are more cooperative than men. The effect size is small (about 4 +percentage points, and this might explain why previous studies failed to detect +it) but highly significant (p<.0001). +" +Relaxation of Radiation-Driven Two-Level Systems Interacting with a Bose-Einstein Condensate Bath," We develop a microscopic theory for the relaxation dynamics of an optically +pumped two-level system (TLS) coupled to a bath of weakly interacting Bose gas. +Using Keldysh formalism and diagrammatic perturbation theory, expressions for +the relaxation times of the TLS Rabi oscillations are derived when the boson +bath is in the normal state and the Bose-Einstein condensate (BEC) state. 
We +apply our general theory to consider an irradiated quantum dot coupled with a +boson bath consisting of a two-dimensional dipolar exciton gas. When the bath +is in the BEC regime, relaxation of the Rabi oscillations is due to both +condensate and non-condensate fractions of the bath bosons for weak TLS-light +coupling and dominantly due to the non-condensate fraction for strong TLS-light +coupling. Our theory also shows that a phase transition of the bath from the +normal to the BEC state strongly influences the relaxation rate of the TLS Rabi +oscillations. The TLS relaxation rate is approximately independent of the pump +field frequency and monotonically dependent on the field strength when the bath +is in the low-temperature regime of the normal phase. Phase transition of the +dipolar exciton gas leads to a non-monotonic dependence of the TLS relaxation +rate on both the pump field frequency and field strength, providing a +characteristic signature for the detection of BEC phase transition of the +coupled dipolar exciton gas. +" +On the approximation of convex bodies by ellipses with respect to the symmetric difference metric," Given a centrally symmetric convex body $K \subset \mathbb{R}^d$ and a +positive number $\lambda$, we consider, among all ellipsoids $E \subset +\mathbb{R}^d$ of volume $\lambda$, those that best approximate $K$ with respect +to the symmetric difference metric, or equivalently that maximize the volume of +$E\cap K$: these are the maximal intersection (MI) ellipsoids introduced by +Artstein-Avidan and Katzin. The question of uniqueness of MI ellipsoids (under +the obviously necessary assumption that $\lambda$ is between the volumes of the +John and the Loewner ellipsoids of $K$) is open in general. We provide a +positive answer to this question in dimension $d=2$. Therefore we obtain a +continuous $1$-parameter family of ellipses interpolating between the John and +the Loewner ellipses of $K$. 
In order to prove uniqueness, we show that the +area $I_K(E)$ of the intersection $K \cap E$ is a strictly quasiconcave +function of the ellipse $E$, with respect to the natural affine structure on +the set of ellipses of area $\lambda$. The proof relies on smoothening $K$, +putting it in general position, and obtaining uniform estimates for certain +derivatives of the function $I_K(.)$. Finally, we provide a characterization of +maximal intersection positions, that is, the situation where the MI ellipse of +$K$ is the unit disk, under the assumption that the two boundaries are +transverse. +" +POLAMI: Polarimetric Monitoring of Active Galactic Nuclei at Millimetre Wavelengths. II. Widespread circular polarisation," We analyse the circular polarisation data accumulated in the first 7 years of +the POLAMI project introduced in an accompanying paper (Agudo et al.). In the +3mm wavelength band, we acquired more than 2600 observations, and all but one +of our 37 sample sources were detected, most of them several times. For most +sources, the observed distribution of the degree of circular polarisation is +broader than that of unpolarised calibrators, indicating that weak (<0.5%) +circular polarisation is present most of the time. Our detection rate and the +maximum degree of polarisation found, 2.0%, are comparable to previous surveys, +all made at much longer wavelengths. We argue that the process generating +circular polarisation must not be strongly wavelength dependent, and we propose +that the widespread presence of circular polarisation in our short wavelength +sample dominated by blazars is mostly due to Faraday conversion of the linearly +polarised synchrotron radiation in the helical magnetic field of the jet. +Circular polarisation is variable, most notably on time scales comparable to or +shorter than our median sampling interval <1 month. Longer time scales of about +one year are occasionally detected, but severely limited by the weakness of the +signal. 
At variance with some longer wavelength investigations, we find that the
+sign of circular polarisation changes in most sources, while only 7 sources,
+including 3 already known, have a strong preference for one sign. The degrees
+of circular and linear polarisation do not show any systematic correlation. We
+do, however, find one particular event where the two polarisation degrees vary
+in synchronism during a time span of 0.9 years. The paper also describes a
+novel method for calibrating the sign of circular polarisation observations.
+"
+Single Classifier-based Passive System for Source Printer Classification using Local Texture Features," An important aspect of examining printed documents for potential forgeries
+and copyright infringement is the identification of the source printer, as it
+can be helpful for ascertaining the leak and detecting forged documents. This
+paper proposes a system for classification of the source printer from scanned
+images of printed documents using all the printed letters simultaneously. This
+system uses local texture pattern based features and a single classifier for
+classifying all the printed letters. Letters are extracted from scanned images
+using connected component analysis followed by morphological filtering without
+the need for an OCR. Each letter is sub-divided into a flat region and an
+edge region, and local tetra patterns are estimated separately for these two
+regions. A strategically constructed pooling technique is used to extract the
+final feature vectors. The proposed method has been tested on both a publicly
+available dataset of 10 printers and a new dataset of 18 printers scanned at
+resolutions of 600 dpi as well as 300 dpi, printed in four different fonts.
+The results indicate a shape independence property of the proposed method:
+using a single classifier, it outperforms existing handcrafted feature-based
+methods and needs a much smaller number of training pages by using all the
+printed letters.
+"
+Heat spreader with parallel microchannel configurations employing nanofluids for near active cooling of MEMS," While parallel microchannel based cooling systems (PMCS) have been around
+for quite some time, employing and incorporating them for near active cooling
+of microelectronic devices is yet to be implemented, and the implications for
+thermal mitigation are yet to be understood. The present article focuses on a
+specific design of the PMCS such that it can be implemented with ease on the
+heat spreader of a modern microprocessor to obtain near active cooling.
+Extensive experimental and numerical studies have been carried out, and three
+different flow configurations of the PMCS have been adopted for the present
+investigations. In addition to focusing on the thermofluidics due to the flow
+configuration, nanofluids have also been employed to mitigate overshoot
+temperatures and improve the uniformity of cooling. Two modelling methods,
+Discrete Phase Modelling (DPM) and Effective Property Modelling, have been
+employed in the numerical study to model nanofluids as the working fluid in
+micro flow paths, and the DPM predictions have been observed to match the
+experiments accurately. To quantify the thermal performance of the PMCS, an
+appropriate Figure of Merit (FoM) has been proposed. From the FoM it has been
+observed that the Z configuration employing nanofluid is the most suitable
+solution for uniform thermal loads, achieving uniform cooling as well as
+reducing the maximum temperature produced within the device. The present
+results offer a promising and viable approach for futuristic thermal
+mitigation of microprocessor systems.
+"
+A review of asymptotic theory of estimating functions," Asymptotic statistical theory for estimating functions is reviewed in a
+generality suitable for stochastic processes.
Conditions concerning existence +of a consistent estimator, uniqueness, rate of convergence, and the asymptotic +distribution are treated separately. Our conditions are not minimal, but can be +verified for many interesting stochastic process models. Several examples +illustrate the wide applicability of the theory and why the generality is +needed. +" +Multiview Deep Learning for Predicting Twitter Users' Location," The problem of predicting the location of users on large social networks like +Twitter has emerged from real-life applications such as social unrest detection +and online marketing. Twitter user geolocation is a difficult and active +research topic with a vast literature. Most of the proposed methods follow +either a content-based or a network-based approach. The former exploits +user-generated content while the latter utilizes the connection or interaction +between Twitter users. In this paper, we introduce a novel method combining the +strength of both approaches. Concretely, we propose a multi-entry neural +network architecture named MENET leveraging the advances in deep learning and +multiview learning. The generalizability of MENET enables the integration of +multiple data representations. In the context of Twitter user geolocation, we +realize MENET with textual, network, and metadata features. Considering the +natural distribution of Twitter users across the concerned geographical area, +we subdivide the surface of the earth into multi-scale cells and train MENET +with the labels of the cells. We show that our method outperforms the state of +the art by a large margin on three benchmark datasets. +" +"Designing the Optimal Bit: Balancing Energetic Cost, Speed and Reliability"," We consider the technologically relevant costs of operating a reliable bit +that can be erased rapidly. We find that both erasing and reliability times are +non-monotonic in the underlying friction, leading to a trade-off between +erasing speed and bit reliability. 
Fast erasure is possible at the expense of +low reliability at moderate friction, and high reliability comes at the expense +of slow erasure in the underdamped and overdamped limits. Within a given class +of bit parameters and control strategies, we define ""optimal"" designs of bits +that meet the desired reliability and erasing time requirements with the lowest +operational work cost. We find that optimal designs always saturate the bound +on the erasing time requirement, but can exceed the required reliability time +if critically damped. The non-trivial geometry of the reliability and erasing +time-scales allows us to exclude large regions of parameter space as +sub-optimal. We find that optimal designs are either critically damped or close +to critical damping under the erasing procedure. +" +Economic Factors of Vulnerability Trade and Exploitation," Cybercrime markets support the development and diffusion of new attack +technologies, vulnerability exploits, and malware. Whereas the revenue streams +of cyber attackers have been studied multiple times in the literature, no +quantitative account currently exists on the economics of attack acquisition +and deployment. Yet, this understanding is critical to characterize the +production of (traded) exploits, the economy that drives it, and its effects on +the overall attack scenario. In this paper we provide an empirical +investigation of the economics of vulnerability exploitation, and the effects +of market factors on likelihood of exploit. Our data is collected +first-handedly from a prominent Russian cybercrime market where the trading of +the most active attack tools reported by the security industry happens. Our +findings reveal that exploits in the underground are priced similarly or above +vulnerabilities in legitimate bug-hunting programs, and that the refresh cycle +of exploits is slower than currently often assumed. 
On the other hand, +cybercriminals are becoming faster at introducing selected vulnerabilities, and +the market is in clear expansion both in terms of players, traded exploits, and +exploit pricing. We then evaluate the effects of these market variables on +likelihood of attack realization, and find strong evidence of the correlation +between market activity and exploit deployment. We discuss implications on +vulnerability metrics, economics, and exploit measurement. +" +Complete classification of generalized crossing changes between GOF-knots," We show that the monodromy for a genus one, fibered knot can have at most two +monodromy equivalence classes of once-unclean arcs. We use this to classify all +monodromies of genus one, fibered knots that possess once-unclean arcs, all +manifolds containing genus one fibered knots with generalized crossing changes +resulting in another genus one fibered knot, and all generalized crossing +changes between two genus one, fibered knots. +" +Strain Mode of General Flow: Characterization and Implications for Flow Pattern Structures," Understanding the mixing capability of mixing devices based on their +geometric shape is an important issue both for predicting mixing processes and +for designing new mixers. The flow patterns in mixers are directly connected +with the modes of the local strain rate, which is generally a combination of +elongational flow and planar shear flow. We develop a measure to characterize +the modes of the strain rate for general flow occurring in mixers. The spatial +distribution of the volumetric strain rate (or non-planar strain rate) in +connection with the flow pattern plays an essential role in understanding +distributive mixing. With our measure, flows with different types of screw +elements in a twin-screw extruder are numerically analyzed. 
The difference in
+flow pattern structure between conveying screws and kneading disks is
+successfully characterized by the distribution of the volumetric strain rate.
+The results suggest that the distribution of the strain rate mode offers an
+essential and convenient way to characterize the relation between flow
+pattern structure and the mixer geometry.
+"
+$\mathbb{Z}^2$-algebras as noncommutative blow-ups," The goal of this note is to first prove that for a well-behaved
+$\mathbb{Z}^2$-algebra $R$, the category $QGr(R) := Gr(R)/Tors(R)$ is
+equivalent to $QGr(R_\Delta)$ where $R_\Delta$ is a diagonal-like
+sub-$\mathbb{Z}$-algebra of $R$. Afterwards we use this result to prove that
+the $\mathbb{Z}^2$-algebras as introduced in [arXiv:1607.08383] are
+QGr-equivalent to a diagonal-like sub-$\mathbb{Z}$-algebra which is a
+simultaneous noncommutative blow-up of a quadratic and a cubic Sklyanin
+algebra. As such we link the noncommutative birational transformation and the
+associated $\mathbb{Z}^2$-algebras as appearing in the work of Van den Bergh
+and Presotto with the noncommutative blowups appearing in the work of Rogalski,
+Sierra and Stafford.
+"
+Deep Convolutional Neural Networks for Anomaly Event Classification on Distributed Systems," The increasing popularity of server usage has brought with it a plethora of
+anomalous log events, which threaten a vast collection of machines.
+Recognizing and categorizing these anomalous events is therefore salient work
+for our systems, especially those that generate massive amounts of data and
+harness it for technology value creation and business development. To assist
+in focusing on the classification and the prediction of anomaly events, and
+gaining critical insights from system event records, we propose a novel log
+preprocessing method which is very effective at filtering abundant information
+and retaining critical characteristics.
Additionally, this paper proposes a competitive approach for
+automated classification of anomalous events detected from distributed
+system logs with state-of-the-art deep Convolutional Neural Network (CNN)
+architectures. We evaluate a series of deep CNN algorithms with varied
+hyper-parameter combinations using standard evaluation metrics; the results of
+our study reveal the advantages and potential capabilities of the proposed
+deep CNN models for anomaly event classification tasks on real-world systems.
+The optimal classification precision of our approach is 98.14%, which
+surpasses the popular traditional machine learning methods.
+"
+Group-sparse block PCA and explained variance," The paper addresses the simultaneous determination of group-sparse loadings
+by block optimization, and the correlated problem of defining explained
+variance for a set of non-orthogonal components. We give in both cases a
+comprehensive mathematical presentation of the problem, which leads us to
+propose i) a new formulation/algorithm for group-sparse block PCA and ii) a
+framework for the definition of explained variance with the analysis of five
+definitions. The numerical results i) confirm the superiority of block
+optimization over deflation for the determination of group-sparse loadings,
+and the importance of group information when available, and ii) show that the
+ranking of algorithms according to explained variance is essentially
+independent of the definition of explained variance. These results lead us to
+propose a new optimal variance as the definition of choice for explained
+variance.
+"
+Multi-Frequency Phase Synchronization," We propose a novel formulation for phase synchronization -- the statistical
+problem of jointly estimating alignment angles from noisy pairwise comparisons
+-- as a nonconvex optimization problem that enforces consistency among the
+pairwise comparisons in multiple frequency channels.
Inspired by harmonic
+retrieval in signal processing, we develop a simple yet efficient two-stage
+algorithm that leverages the multi-frequency information. We demonstrate in
+theory and practice that the proposed algorithm significantly outperforms
+state-of-the-art phase synchronization algorithms, at a mild computational
+cost incurred by using the extra frequency channels. We also extend our
+algorithmic framework to general synchronization problems over compact Lie
+groups.
+"
+A Bennequin-type inequality and combinatorial bounds," In this paper we provide a new Bennequin-type inequality for the Rasmussen-
+Beliakova-Wehrli invariant, featuring the numerical transverse braid invariants
+(the c-invariants) introduced by the author. From the Bennequin-type
+inequality, and a combinatorial bound on the value of the c-invariants, we
+deduce a new computable bound on the Rasmussen invariant.
+"
+Thermal field theory of bosonic gases with finite-range effective interaction," We study a dilute and ultracold Bose gas of interacting atoms by using an
+effective field theory which takes into account finite-range effects of the
+inter-atomic potential. Within the formalism of functional integration from the
+grand canonical partition function we derive beyond-mean-field analytical
+results which depend on both the scattering length and the effective range of
+the interaction. In particular, we calculate the equation of state of the
+bosonic system as a function of these interaction parameters both at zero and
+finite temperature, including one-loop Gaussian fluctuations. In the case of a
+zero-range effective interaction we explicitly show that, due to quantum
+fluctuations, the bosonic system is thermodynamically stable only for very
+small values of the gas parameter. We find that a positive effective range
+above a critical threshold is necessary to remove the thermodynamical
+instability of the uniform configuration.
Remarkably, even for relatively large values of the gas
+parameter, our finite-range results are in quite good agreement with recent
+zero-temperature Monte Carlo calculations obtained with hard-sphere bosons.
+"
+LtFi: Cross-technology Communication for RRM between LTE-U and IEEE 802.11," Cross-technology communication (CTC) was proposed in recent literature as a
+way to exploit the opportunities of collaboration between heterogeneous
+wireless technologies. This paper presents LtFi, a system which enables the
+set-up of a CTC between nodes of co-located LTE-U and WiFi networks. LtFi
+follows a two-step approach: over the air interface, LTE-U BSs broadcast
+connection and identification data to adjacent WiFi nodes, which is used to
+create a bi-directional control channel over the wired Internet. This way LtFi
+enables the development of advanced cross-technology interference and radio
+resource management schemes between heterogeneous WiFi and LTE-U networks.
+LtFi is of low complexity and fully compliant with LTE-U technology, and works
+on the WiFi side with COTS hardware. It was prototypically implemented and
+evaluated. Experimental results reveal that LtFi is able to reliably decode
+the data transmitted over the LtFi air-interface in a crowded wireless
+environment at even very low LTE-U receive power levels of -92dBm. Moreover,
+results from system-level simulations show that LtFi is able to accurately
+estimate the set of interfering LTE-U BSs in a typical LTE-U multi-cell
+environment.
+"
+Multi-step Off-policy Learning Without Importance Sampling Ratios," To estimate the value functions of policies from exploratory data, most
+model-free off-policy algorithms rely on importance sampling, where the use of
+importance sampling ratios often leads to estimates with severe variance. It is
+thus desirable to learn off-policy without using the ratios. However, such an
+algorithm does not exist for multi-step learning with function approximation.
+
+In this paper, we introduce the first such algorithm based on
+temporal-difference (TD) learning updates. We show that an explicit use of
+importance sampling ratios can be eliminated by varying the amount of
+bootstrapping in TD updates in an action-dependent manner. Our new algorithm
+achieves stability using a two-timescale gradient-based TD update. A prior
+algorithm based on a lookup table representation, called Tree Backup, can also
+be retrieved using action-dependent bootstrapping, becoming a special case of
+our algorithm. In two challenging off-policy tasks, we demonstrate that our
+algorithm is stable, effectively avoids the large variance issue, and can
+perform substantially better than its state-of-the-art counterpart.
+"
+Atomic Ferris wheel beams," We study the generation of atom vortex beams in the case where an atomic
+wave-packet, moving in free space, is diffracted from a properly tailored light
+mask with a spiral transverse profile. We show how such a diffraction scheme
+could lead to the production of an atomic Ferris wheel beam.
+"
+On the Limits of Learning Representations with Label-Based Supervision," Advances in neural network based classifiers have transformed automatic
+feature learning from a pipe dream of stronger AI to a routine and expected
+property of practical systems. Since the emergence of AlexNet, every winning
+submission of the ImageNet challenge has employed end-to-end representation
+learning, and due to the utility of good representations for transfer learning,
+representation learning has become an important task, distinct from
+supervised learning. At present, this distinction is inconsequential, as
+supervised methods are state-of-the-art in learning transferable
+representations. But recent work has shown that generative models can also be
+powerful agents of representation learning. Will the representations learned
+from these generative methods ever rival the quality of those from their
+supervised competitors?
In this work, we argue in the affirmative that, from an
+information theoretic perspective, generative models have greater potential for
+representation learning. Based on several experimentally validated assumptions,
+we show that supervised learning is upper bounded in its capacity for
+representation learning in ways that certain generative models, such as
+Generative Adversarial Networks (GANs), are not. We hope that our analysis will
+provide a rigorous motivation for further exploration of generative
+representation learning.
+"
+Coset space construction and inverse Higgs phenomenon for the conformal group," It is shown that conformally invariant theories can be obtained within the
+framework of the coset space construction. The corresponding technique is
+applicable for the construction of representations of the unbroken conformal
+group, as well as of a spontaneously broken one. A special role of the
+""Nambu-Goldstone fields"" for special conformal transformations is clarified -
+they ensure self-consistency of a theory by guaranteeing that discrete
+symmetries are indeed symmetries of the theory. A generalization of the
+developed construction to a special class of symmetry groups with a non-linear
+realization of their discrete elements is given. Based on these results, the
+usage of the inverse Higgs constraints for the conformal group undergoing
+spontaneous symmetry breaking is questioned.
+"
+"A spatial predictive model for Malaria resurgence in central Greece integrating entomological, environmental and Social data"," Malaria constitutes an important cause of human mortality. After 2009 Greece
+experienced a resurgence of malaria. Here, we develop a model-based framework
+that integrates entomological, geographical, social and environmental evidence
+in order to guide the mosquito control efforts and apply this framework to data
+from an entomological survey study conducted in Central Greece.
Our results
+indicate that malaria transmission risk in Greece is potentially substantial.
+In addition, specific districts such as seaside, lakeside and rice field
+regions appear to represent potential malaria hotspots in Central Greece. We
+found that appropriate maps depicting the basic reproduction number, R0, are
+useful tools for informing policy makers on the risk of malaria resurgence and
+can serve as a guide to inform recommendations regarding control measures.
+"
+Large-scale analysis of user exposure to online advertising in Facebook," Online advertising is the major source of income for a large portion of
+Internet Services. There exists a body of literature aiming at optimizing ad
+engagement, understanding the privacy and ethical implications of online
+advertising, etc. However, to the best of our knowledge, no previous work
+analyses at large scale the exposure of real users to online advertising. This
+paper performs a comprehensive analysis of the exposure of users to ads and
+advertisers using a dataset including more than 7M ads from 140K unique
+advertisers delivered to more than 5K users that was collected between October
+2016 and May 2018. The study focuses on Facebook, which is the second largest
+advertising platform after Google in terms of revenue, and accounts for more
+than 2.2B monthly active users. Our analysis reveals that Facebook users are
+exposed (in median) to 70 ads per week, which come from 12 advertisers. Ads
+represent between 10% and 15% of all the information received in users'
+newsfeed. A small increment of 1% in the portion of ads in the newsfeed could
+roughly represent a revenue increase of 8.17M USD per week for Facebook.
+Finally, we also reveal that Facebook users are over-profiled, since in the
+best case only 22.76% of the interests Facebook assigns to users for
+advertising purposes are actually related to the ads those users receive.
+
+"
+Concept Formation and Dynamics of Repeated Inference in Deep Generative Models," Deep generative models are reported to be useful in broad applications
+including image generation. Repeated inference between data space and latent
+space in these models can denoise cluttered images and improve the quality of
+inferred results. However, previous studies only qualitatively evaluated image
+outputs in data space, and the mechanism behind the inference has not been
+investigated. The purpose of the current study is to numerically analyze
+changes in activity patterns of neurons in the latent space of a deep
+generative model called a ""variational auto-encoder"" (VAE). We identify
+what kinds of inference dynamics the VAE demonstrates when noise is added to
+the input data. The VAE embeds a dataset with clear cluster structures in the
+latent space, and the center of each cluster of multiple correlated data
+points (memories) is referred to as the concept. Our study demonstrated that
+transient dynamics of inference first approaches a concept, and then moves
+close to a memory. Moreover, the VAE revealed that the inference dynamics
+approaches a more abstract concept to the extent that the uncertainty of the
+input data increases due to noise. It was demonstrated that by increasing the
+number of latent variables, the trend of the inference dynamics to approach a
+concept can be enhanced, and the generalization ability of the VAE can be
+improved.
+"
+Lp-estimates for the square root of elliptic systems with mixed boundary conditions," This article focuses on Lp-estimates for the square root of elliptic systems
+of second order in divergence form on a bounded domain. We treat complex
+bounded measurable coefficients and allow for mixed Dirichlet/Neumann boundary
+conditions on domains beyond the Lipschitz class.
If there is an associated +bounded semigroup on Lp0 , then we prove that the square root extends for all p +$\in$ (p0, 2) to an isomorphism between a closed subspace of W1p carrying the +boundary conditions and Lp. This result is sharp and extrapolates to exponents +slightly above 2. As a byproduct, we obtain an optimal p-interval for the +bounded H$\infty$-calculus on Lp. Estimates depend holomorphically on the +coefficients, thereby making them applicable to questions of non-autonomous +maximal regularity and optimal control. For completeness we also provide a +short summary on the Kato square root problem in L2 for systems with lower +order terms in our setting. +" +Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach," Thermostatically Controlled Loads (TCL), e.g. air-conditioners and heaters, +are by far the most wide-spread consumers of electricity. Normally the devices +are calibrated to provide the so-called bang-bang control of temperature -- +changing from on to off, and vice versa, depending on temperature. Aggregation +of a large group of similar devices into a statistical ensemble is considered, +where the devices operate following the same dynamics subject to stochastic +perturbations and randomized, Poisson on/off switching policy. We analyze, +using theoretical and computational tools of statistical physics, how the +ensemble relaxes to a stationary distribution and establish relation between +the relaxation and statistics of the probability flux, associated with devices' +cycling in the mixed (discrete, switch on/off, and continuous, temperature) +phase space. This allowed us to derive and analyze spectrum of the +non-equilibrium (detailed balance broken) statistical system and uncover how +switching policy affects oscillatory trend and speed of the relaxation. +Relaxation of the ensemble is of a practical interest because it describes how +the ensemble recovers from significant perturbations, e.g. 
forceful temporary +switching off aimed at utilizing flexibility of the ensemble in providing +""demand response"" services relieving consumption temporarily to balance larger +power grid. We discuss how the statistical analysis can guide further +development of the emerging demand response technology. +" +Automatic Document Image Binarization using Bayesian Optimization," Document image binarization is often a challenging task due to various forms +of degradation. Although there exist several binarization techniques in +literature, the binarized image is typically sensitive to control parameter +settings of the employed technique. This paper presents an automatic document +image binarization algorithm to segment the text from heavily degraded document +images. The proposed technique uses a two band-pass filtering approach for +background noise removal, and Bayesian optimization for automatic +hyperparameter selection for optimal results. The effectiveness of the proposed +binarization technique is empirically demonstrated on the Document Image +Binarization Competition (DIBCO) and the Handwritten Document Image +Binarization Competition (H-DIBCO) datasets. +" +Vibrationally resolved electronic spectra including vibrational pre-excitation: Theory and application to VIPER spectroscopy," Vibrationally resolved electronic absorption spectra including the effect of +vibrational pre-excitation are computed in order to interpret and predict +vibronic transitions that are probed in the Vibrationally Promoted Electronic +Resonance (VIPER) experiment [L. J. G. W. van Wilderen et al., Angew. Chem. +Int. Ed. 53, 2667 (2014)]. To this end, we employ time-independent and +time-dependent methods based on the evaluation of Franck-Condon overlap +integrals and Fourier transformation of time-domain wavepacket autocorrelation +functions, respectively. The time-independent approach uses a generalized +version of the FCclasses method [F. Santoro et al., J. Chem. Phys. 
126, 084509 +(2007)]. In the time-dependent approach, autocorrelation functions are obtained +by wavepacket propagation and by evaluation of analytic expressions, within the +harmonic approximation including Duschinsky rotation effects. For several +medium-sized polyatomic systems, it is shown that selective pre-excitation of +particular vibrational modes leads to a red-shift of the low-frequency edge of +the electronic absorption spectrum, which is a prerequisite for the VIPER +experiment. This effect is typically most pronounced upon excitation of ring +distortion modes within an aromatic pi-system. Theoretical predictions as to +which modes show the strongest VIPER effect are found to be in excellent +agreement with experiment. +" +Modular Multi-Objective Deep Reinforcement Learning with Decision Values," In this work we present a method for using Deep Q-Networks (DQNs) in +multi-objective environments. Deep Q-Networks provide remarkable performance in +single objective problems learning from high-level visual state +representations. However, in many scenarios (e.g in robotics, games), the agent +needs to pursue multiple objectives simultaneously. We propose an architecture +in which separate DQNs are used to control the agent's behaviour with respect +to particular objectives. In this architecture we introduce decision values to +improve the scalarization of multiple DQNs into a single action. Our +architecture enables the decomposition of the agent's behaviour into +controllable and replaceable sub-behaviours learned by distinct modules. +Moreover, it allows to change the priorities of particular objectives +post-learning, while preserving the overall performance of the agent. To +evaluate our solution we used a game-like simulator in which an agent - +provided with high-level visual input - pursues multiple objectives in a 2D +world. 
+" +Coupled conditional backward sampling particle filter," Unbiased estimation for hidden Markov models has been recently proposed by +Jacob et al (to appear), using a coupling of two conditional particle filters +(CPFs). Unbiased estimation has many advantages, such as enabling the +construction of asymptotically exact confidence intervals and straightforward +parallelisation. In this work we propose a new coupling of two CPFs, for +unbiased estimation, that uses backward sampling steps, which is an important +efficiency enhancing technique in particle filtering. We show that this coupled +conditional backward sampling particle filter (CCBPF) algorithm has better +stability properties, in the sense that with fixed number of particles, the +coupling time in terms of iterations increases only linearly with respect to +the time horizon under a general (strong mixing) condition. In contrast, +current coupled CPFs require the particle number to increase with the horizon +length. An important corollary of our results is a new quantitative bound for +the convergence rate of the popular backward sampling conditional particle +filter. Previous theoretical results have not been able to demonstrate the +improvement brought by backward sampling to the CPF, whereas we provide rates +showing that backward sampling ensures that the CPF can remain effective with a +fixed number of particles independent of the time horizon. +" +A Bayesian Evidence Synthesis Approach to Estimate Disease Prevalence in Hard-To-Reach Populations: Hepatitis C in New York City," Existing methods to estimate the prevalence of chronic hepatitis C (HCV) in +New York City (NYC) are limited in scope and fail to assess hard-to-reach +subpopulations with highest risk such as injecting drug users (IDUs). 
To +address these limitations, we employ a Bayesian multi-parameter evidence +synthesis model to systematically combine multiple sources of data, account for +bias in certain data sources, and provide unbiased HCV prevalence estimates +with associated uncertainty. Our approach improves on previous estimates by +explicitly accounting for injecting drug use and including data from high-risk +subpopulations such as the incarcerated, and is more inclusive, utilizing ten +NYC data sources. In addition, we derive two new equations to allow age at +first injecting drug use data for former and current IDUs to be incorporated +into the Bayesian evidence synthesis, a first for this type of model. Our +estimated overall HCV prevalence as of 2012 among NYC adults aged 20-59 years +is 2.78% (95% CI 2.61-2.94%), which represents between 124,900 and 140,000 +chronic HCV cases. These estimates suggest that HCV prevalence in NYC is higher +than previously indicated from household surveys (2.2%) and the surveillance +system (2.37%), and that HCV transmission is increasing among young injecting +adults in NYC. An ancillary benefit from our results is an estimate of current +IDUs aged 20-59 in NYC: 0.58% or 27,600 individuals. +" +"Galaxies in the Illustris simulation as seen by the Sloan Digital Sky Survey - I: Bulge+disc decompositions, methods, and biases"," We present an image-based method for comparing the structural properties of +galaxies produced in hydrodynamical simulations to real galaxies in the Sloan +Digital Sky Survey. The key feature of our work is the introduction of +extensive observational realism, such as object crowding, noise and viewing +angle, to the synthetic images of simulated galaxies, so that they can be +fairly compared to real galaxy catalogs. We apply our methodology to the +dust-free synthetic image catalog of galaxies from the Illustris simulation at +$z=0$, which are then fit with bulge+disc models to obtain morphological +parameters. 
In this first paper in a series, we detail our methods, quantify +observational biases, and present publicly available bulge+disc decomposition +catalogs. We find that our bulge+disc decompositions are largely robust to the +observational biases that affect decompositions of real galaxies. However, we +identify a significant population of galaxies (roughly 30\% of the full sample) +in Illustris that are prone to internal segmentation, leading to systematically +reduced flux estimates by up to a factor of 6, smaller half-light radii by up +to a factor of $\sim$ 2, and generally erroneous bulge-to-total fractions of +(B/T)=0. +" +PubTree: A Hierarchical Search Tool for the MEDLINE Database," Keeping track of the ever-increasing body of scientific literature is an +escalating challenge. We present PubTree a hierarchical search tool that +efficiently searches the PubMed/MEDLINE dataset based upon a decision tree +constructed using >26 million abstracts. The tool is implemented as a webpage, +where users are asked a series of eighteen questions to locate pertinent +articles. The implementation of this hierarchical search tool highlights issues +endemic with document retrieval. However, the construction of this tree +indicates that with future developments hierarchical search could become an +effective tool (or adjunct) in the mining of biological literature. +" +Dimensional crossover and incipient quantum size effects in superconducting niobium nanofilms," Superconducting and normal state properties of sputtered Niobium nanofilms +have been systematically investigated, as a function of film thickness in a +d=9-90 nm range, on different substrates. The width of the +superconducting-to-normal transition for all films remained in few tens of mK, +thus remarkably narrow, confirming their high quality. 
We found that the +superconducting critical current density exhibits a pronounced maximum, three +times larger than its bulk value, for film thickness around 25 nm, marking the +3D-to-2D crossover. The extracted magnetic penetration depth shows a sizeable +enhancement for the thinnest films, aside the usual demagnetization effects. +Additional amplification effects of the superconducting properties have been +obtained in the case of sapphire substrates or squeezing the lateral size of +the nanofilms. For thickness close to 20 nm we also measured a doubled +perpendicular critical magnetic field compared to its saturation value for d>33 +nm, indicating shortening of the correlation length and the formation of small +Cooper pairs in the condensate. Our data analysis evidences an exciting +interplay between quantum-size and proximity effects together with +strong-coupling effects and importance of disorder in the thinnest films, +locating the ones with optimally enhanced critical properties close to the +BCS-BEC crossover regime. +" +Phase Diagram of Restricted Boltzmann Machines and Generalised Hopfield Networks with Arbitrary Priors," Restricted Boltzmann Machines are described by the Gibbs measure of a +bipartite spin glass, which in turn corresponds to the one of a generalised +Hopfield network. This equivalence allows us to characterise the state of these +systems in terms of retrieval capabilities, both at low and high load. We study +the paramagnetic-spin glass and the spin glass-retrieval phase transitions, as +the pattern (i.e. weight) distribution and spin (i.e. unit) priors vary +smoothly from Gaussian real variables to Boolean discrete variables. Our +analysis shows that the presence of a retrieval phase is robust and not +peculiar to the standard Hopfield model with Boolean patterns. 
The retrieval +region is larger when the pattern entries and retrieval units get more peaked +and, conversely, when the hidden units acquire a broader prior and therefore +have a stronger response to high fields. Moreover, at low load retrieval always +exists below some critical temperature, for every pattern distribution ranging +from the Boolean to the Gaussian case. +" +Causal nearest neighbor rules for optimal treatment regimes," The estimation of optimal treatment regimes is of considerable interest to +precision medicine. In this work, we propose a causal $k$-nearest neighbor +method to estimate the optimal treatment regime. The method roots in the +framework of causal inference, and estimates the causal treatment effects +within the nearest neighborhood. Although the method is simple, it possesses +nice theoretical properties. We show that the causal $k$-nearest neighbor +regime is universally consistent. That is, the causal $k$-nearest neighbor +regime will eventually learn the optimal treatment regime as the sample size +increases. We also establish its convergence rate. However, the causal +$k$-nearest neighbor regime may suffer from the curse of dimensionality, i.e. +performance deteriorates as dimensionality increases. To alleviate this +problem, we develop an adaptive causal $k$-nearest neighbor method to perform +metric selection and variable selection simultaneously. The performance of the +proposed methods is illustrated in simulation studies and in an analysis of a +chronic depression clinical trial. +" +Definitions of solutions to the IBVP for multiD scalar balance laws," We consider four definitions of solution to the initial-boundary value +problem for a scalar balance laws in several space dimensions. These +definitions are generalised to the same most general framework and then +compared. The first aim of this paper is to detail differences and analogies +among them. 
We focus then on the ways the boundary conditions are fulfilled +according to each definition, providing also connections among these various +modes. The main result is the proof of the equivalence among the presented +definitions of solution. +" +Post-selection estimation and testing following aggregated association tests," The practice of pooling several individual test statistics to form aggregate +tests is common in many statistical application where individual tests may be +underpowered. While selection by aggregate tests can serve to increase power, +the selection process invalidates the individual test-statistics, making it +difficult to identify the ones that drive the signal in follow-up inference. +Here, we develop a general approach for valid inference following selection by +aggregate testing. We present novel powerful post-selection tests for the +individual null hypotheses which are exact for the normal model and +asymptotically justified otherwise. Our approach relies on the ability to +characterize the distribution of the individual test statistics after +conditioning on the event of selection. We provide efficient algorithms for +estimation of the post-selection maximum-likelihood estimates and suggest +confidence intervals which rely on a novel switching regime for good coverage +guarantees. We validate our methods via comprehensive simulation studies and +apply them to data from the Dallas Heart Study, demonstrating that single +variant association discovery following selection by an aggregated test is +indeed possible in practice. +" +GANs for Medical Image Analysis," Generative Adversarial Networks (GANs) and their extensions have carved open +many exciting ways to tackle well known and challenging medical image analysis +problems such as medical image de-noising, reconstruction, segmentation, data +simulation, detection or classification. 
Furthermore, their ability to +synthesize images at unprecedented levels of realism also gives hope that the +chronic scarcity of labeled data in the medical field can be resolved with the +help of these generative models. In this review paper, a broad overview of +recent literature on GANs for medical applications is given, the shortcomings +and opportunities of the proposed methods are thoroughly discussed and +potential future work is elaborated. We review the most relevant papers +published until the submission date. For quick access, important details such +as the underlying method, datasets and performance are tabulated. An +interactive visualization categorizes all papers to keep the review alive. +" +NFL Injuries Before and After the 2011 Collective Bargaining Agreement (CBA)," The National Football League's (NFL) 2011 collective bargaining agreement +(CBA) with its players placed a number of contact and quantity limitations on +practices and workouts. Some coaches and others have expressed a concern that +this has led to poor conditioning and a subsequent increase in injuries. We +sought to assess whether the 2011 CBA's practice restrictions affected the +number of overall, conditioning-dependent, and/or non-conditioning-dependent +injuries in the NFL or the number of games missed due to those injuries. The +study population was player-seasons from 2007-2016. We included regular season, +non-illness, non-head, game-loss injuries. Injuries were identified using a +database from Football Outsiders. The primary outcomes were overall, +conditioning-dependent and non-conditioning-dependent injury counts by season. +We examined time trends in injury counts before (2007-2010) and after +(2011-2016) the CBA using a Poisson interrupted time series model. The number +of game-loss regular season, non-head, non-illness injuries grew from 701 in +2007 to 804 in 2016 (15% increase). The number of regular season weeks missed +exhibited a similar increase. 
Conditioning-dependent injuries increased from +197 in 2007 to 271 in 2011 (38% rise), but were lower and remained relatively +unchanged at 220-240 injuries per season thereafter. Non-conditioning injuries +decreased by 37% in the first three years of the new CBA before returning to +historic levels in 2014-2016. Poisson models for all, conditioning-dependent, +and non-conditioning-dependent game-loss injury counts did not show +statistically significant or meaningful detrimental changes associated with the +CBA. We did not observe an increase in injuries following the 2011 CBA. Other +concurrent injury-related rule and regulation changes limit specific causal +inferences about the practice restrictions, however. +" +"Structured and Unstructured Outlier Identification for Robust PCA: A Non iterative, Parameter free Algorithm"," Robust PCA, the problem of PCA in the presence of outliers has been +extensively investigated in the last few years. Here we focus on Robust PCA in +the outlier model where each column of the data matrix is either an inlier or +an outlier. Most of the existing methods for this model assumes either the +knowledge of the dimension of the lower dimensional subspace or the fraction of +outliers in the system. However in many applications knowledge of these +parameters is not available. Motivated by this we propose a parameter free +outlier identification method for robust PCA which a) does not require the +knowledge of outlier fraction, b) does not require the knowledge of the +dimension of the underlying subspace, c) is computationally simple and fast d) +can handle structured and unstructured outliers. Further, analytical guarantees +are derived for outlier identification and the performance of the algorithm is +compared with the existing state of the art methods in both real and synthetic +data for various outlier structures. 
+" +Foveated Video Streaming for Cloud Gaming," Good user experience with interactive cloud-based multimedia applications, +such as cloud gaming and cloud-based VR, requires low end-to-end latency and +large amounts of downstream network bandwidth at the same time. In this paper, +we present a foveated video streaming system for cloud gaming. The system +adapts video stream quality by adjusting the encoding parameters on the fly to +match the player's gaze position. We conduct measurements with a prototype that +we developed for a cloud gaming system in conjunction with eye tracker +hardware. Evaluation results suggest that such foveated streaming can reduce +bandwidth requirements by even more than 50% depending on parametrization of +the foveated video coding and that it is feasible from the latency perspective. +" +Data-driven Analysis of Complex Networks and their Model-generated Counterparts," Data-driven analysis of complex networks has been in the focus of research +for decades. An important question is to discover the relation between various +network characteristics in real-world networks and how these relationships vary +across network domains. A related research question is to study how well the +network models can capture the observed relations between the graph metrics. In +this paper, we apply statistical and machine learning techniques to answer the +aforementioned questions. We study 400 real-world networks along with 2400 +networks generated by five frequently used network models with previously +fitted parameters to make the generated graphs as similar to the real network +as possible. We find that the correlation profiles of the structural measures +significantly differ across network domains and the domain can be efficiently +determined using a small selection of graph metrics. The goodness-of-fit of the +network models and the best performing models themselves highly depend on the +domains. 
Using machine learning techniques, it turned out to be relatively easy +to decide if a network is real or model-generated. We also investigate what +structural properties make it possible to achieve a good accuracy, i.e. what +features the network models cannot capture. +" +A Short Note on Collecting Dependently Typed Values," Within dependently typed languages, such as Idris, types can depend on +values. This dependency, however, can limit the collection of items in standard +containers: all elements must have the same type, and as such their types must +contain the same values. We present two dependently typed data structures for +collecting dependent types: \texttt{DList} and \texttt{PList}. Use of these new +data structures allow for the creation of single succinct inductive ADT whose +constructions were previously verbose and split across many data structures. +" +Dynamic Bridge-Finding in $\tilde{O}(\log ^2 n)$ Amortized Time," We present a deterministic fully-dynamic data structure for maintaining +information about the bridges in a graph. We support updates in +$\tilde{O}((\log n)^2)$ amortized time, and can find a bridge in the component +of any given vertex, or a bridge separating any two given vertices, in $O(\log +n / \log \log n)$ worst case time. Our bounds match the current best for bounds +for deterministic fully-dynamic connectivity up to $\log\log n$ factors. The +previous best dynamic bridge finding was an $\tilde{O}((\log n)^3)$ amortized +time algorithm by Thorup [STOC2000], which was a bittrick-based improvement on +the $O((\log n)^4)$ amortized time algorithm by Holm et al.[STOC98, JACM2001]. +Our approach is based on a different and purely combinatorial improvement of +the algorithm of Holm et al., which by itself gives a new combinatorial +$\tilde{O}((\log n)^3)$ amortized time algorithm. Combining it with Thorup's +bittrick, we get down to the claimed $\tilde{O}((\log n)^2)$ amortized time. 
+Essentially the same new trick can be applied to the biconnectivity data +structure from [STOC98, JACM2001], improving the amortized update time to +$\tilde{O}((\log n)^3)$. +We also offer improvements in space. We describe a general trick which +applies to both of our new algorithms, and to the old ones, to get down to +linear space, where the previous best use $O(m + n\log n\log\log n)$. Finally, +we show how to obtain $O(\log n/\log \log n)$ query time, matching the optimal +trade-off between update and query time. +Our result yields an improved running time for deciding whether a unique +perfect matching exists in a static graph. +" +Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning," Bayesian neural networks with latent variables are scalable and flexible +probabilistic models: They account for uncertainty in the estimation of the +network weights and, by making use of latent variables, can capture complex +noise patterns in the data. We show how to extract and decompose uncertainty +into epistemic and aleatoric components for decision-making purposes. This +allows us to successfully identify informative points for active learning of +functions with heteroscedastic and bimodal noise. Using the decomposition we +further define a novel risk-sensitive criterion for reinforcement learning to +identify policies that balance expected cost, model-bias and noise aversion. +" +Standard errors for regression on relational data with exchangeable errors," Relational arrays represent interactions or associations between pairs of +actors, often in varied contexts or over time. Such data appear as, for +example, trade flows between countries, financial transactions between +individuals, contact frequencies between school children in classrooms, and +dynamic protein-protein interactions. 
This paper proposes and evaluates a new +class of parameter standard errors for models that represent elements of a +relational array as a linear function of observable covariates. Uncertainty +estimates for regression coefficients must account for both heterogeneity +across actors and dependence arising from relations involving the same actor. +Existing estimators of parameter standard errors that recognize such relational +dependence rely on estimating extremely complex, heterogeneous structure across +actors. Leveraging an exchangeability assumption, we derive parsimonious +standard error estimators that pool information across actors and are +substantially more accurate than existing estimators in a variety of settings. +This exchangeability assumption is pervasive in network and array models in the +statistics literature, but not previously considered when adjusting for +dependence in a regression setting with relational data. We show that our +estimator is consistent and demonstrate improvements in inference through +simulation and a data set involving international trade. +" +Calculating normal tissue complication probabilities and probabilities of complication-free tumour control from stochastic models of population dynamics," We use a stochastic birth-death model for a population of cells to estimate +the normal tissue complication probability (NTCP) under a particular +radiotherapy protocol. We specifically allow for interaction between cells, via +a nonlinear logistic growth model. To capture some of the effects of intrinsic +noise in the population we develop several approximations of NTCP, using +Kramers-Moyal expansion techniques. These approaches provide an approximation +to the first and second moments of a general first-passage time problem in the +limit of large, but finite populations. We use this method to study NTCP in a +simple model of normal cells and in a model of normal and damaged cells. 
We +also study a combined model of normal tissue cells and tumour cells. Based on +existing methods to calculate tumour control probabilities, and our procedure +to approximate NTCP, we estimate the probability of complication free tumour +control. +" +Pathwise Derivatives Beyond the Reparameterization Trick," We observe that gradients computed via the reparameterization trick are in +direct correspondence with solutions of the transport equation in the formalism +of optimal transport. We use this perspective to compute (approximate) pathwise +gradients for probability distributions not directly amenable to the +reparameterization trick: Gamma, Beta, and Dirichlet. We further observe that +when the reparameterization trick is applied to the Cholesky-factorized +multivariate Normal distribution, the resulting gradients are suboptimal in the +sense of optimal transport. We derive the optimal gradients and show that they +have reduced variance in a Gaussian Process regression task. We demonstrate +with a variety of synthetic experiments and stochastic variational inference +tasks that our pathwise gradients are competitive with other methods. +" +Network-size independent covering number bounds for deep networks," We give a covering number bound for deep learning networks that is +independent of the size of the network. The key for the simple analysis is that +for linear classifiers, rotating the data doesn't affect the covering number. +Thus, we can ignore the rotation part of each layer's linear transformation, +and get the covering number bound by concentrating on the scaling part. +" +NIF: A Framework for Quantifying Neural Information Flow in Deep Networks," In this paper, we present a new approach to interpreting deep learning +models. More precisely, by coupling mutual information with network science, we +explore how information flows through feed forward networks. 
We show that +efficiently approximating mutual information via the dual representation of +Kullback-Leibler divergence allows us to create an information measure that +quantifies how much information flows between any two neurons of a deep +learning model. To that end, we propose NIF, Neural Information Flow, a new +metric for codifying information flow which exposes the internals of a deep +learning model while providing feature attributions. +" +On functions given by algebraic power series over Henselian valued fields," This paper provides, over Henselian valued fields, some theorems on implicit +function and of Artin--Mazur on algebraic power series. Also discussed are +certain versions of the theorems of Abhyankar--Jung and Newton--Puiseux. The +latter is used in analysis of functions of one variable, definable in the +language of Denef--Pas, to obtain a theorem on existence of the limit, proven +over rank one valued fields in one of our recent papers. This result along with +the technique of fiber shrinking (developed there over rank one valued fields) +were, in turn, two basic tools in the proof of the closedness theorem. +" +What can one learn about material structure given a single first-principles calculation?," We extract a variable $X$ from electron orbitals $\Psi_{n\bf{k}}$ and +energies $E_{n\bf{k}}$ in the parent high-symmetry structure of a wide range of +complex oxides: perovskites, rutiles, pyrochlores, and cristobalites. Even +though calculation was done only in the parent structure, with no distortions, +we show that $X$ dictates material's true ground state structure. We propose +using Wannier functions to extract concealed variables such as $X$ both for +material structure prediction and for high-throughput approaches. +" +xUnit: Learning a Spatial Activation Function for Efficient Image Restoration," In recent years, deep neural networks (DNNs) achieved unprecedented +performance in many low-level vision tasks. 
However, state-of-the-art results +are typically achieved by very deep networks, which can reach tens of layers +with tens of millions of parameters. To make DNNs implementable on platforms +with limited resources, it is necessary to weaken the tradeoff between +performance and efficiency. In this paper, we propose a new activation unit, +which is particularly suitable for image restoration problems. In contrast to +the widespread per-pixel activation units, like ReLUs and sigmoids, our unit +implements a learnable nonlinear function with spatial connections. This +enables the net to capture much more complex features, thus requiring a +significantly smaller number of layers in order to reach the same performance. +We illustrate the effectiveness of our units through experiments with +state-of-the-art nets for denoising, de-raining, and super resolution, which +are already considered to be very small. With our approach, we are able to +further reduce these models by nearly 50% without incurring any degradation in +performance. +" +"Multiwavelength study of VHE emission from Markarian 501 using TACTIC observations during April-May, 2012"," We have observed Markarian 501 in Very High Energy (VHE) gamma-ray wavelength +band for 70.6 hours from 15 April to 30 May, 2012 using TACTIC telescope. +Detailed analysis of $\sim$66.3 hours of clean data revealed the presence of a +TeV $\gamma$-ray signal (686$\pm$77 $\gamma$-ray events) from the source +direction with a statistical significance of 8.89$\sigma$ above 850 GeV. +Further, a total of 375 $\pm$ 47 $\gamma$-ray like events were detected in 25.2 +hours of observation from 22 - 27 May, 2012 with a statistical significance of +8.05$\sigma$ indicating that the source has possibly switched over to a +relatively high gamma-ray emission state. 
We have derived the time-averaged
+differential energy spectrum of this state in the energy range 850 GeV - 17.24
+TeV, which fits well with a power-law function of the form
+$dF/dE=f_{0}E^{-\Gamma}$ with $f_{0}= (2.27 \pm 0.38) \times 10 ^{-11} $
+photons cm$^{-2}$ s$^{-1}$ TeV$^{-1}$ and $\Gamma=2.57 \pm 0.15$. In order to
+investigate the source state, we have also used almost simultaneous
+multiwavelength observations, viz. high-energy data collected by
+$\it{Fermi}$-LAT, X-ray data collected by $\it{Swift}$-XRT and MAXI, optical
+and UV data collected by $\it{Swift}$-UVOT, and radio data collected by OVRO,
+and reconstructed the broad-band Spectral Energy Distribution (SED). The
+obtained SED supports a leptonic model (homogeneous single zone) for the VHE
+gamma-ray emission involving synchrotron and synchrotron self-Compton (SSC)
+processes.
+"
+"Sparsity enforcing priors in inverse problems via Normal variance mixtures: model selection, algorithms and applications"," The sparse structure of the solution for an inverse problem can be modelled
+using different sparsity enforcing priors when the Bayesian approach is
+considered. Analytical expressions for the unknowns of the model can be
+obtained by building hierarchical models based on sparsity enforcing
+distributions expressed via conjugate priors. We consider heavy-tailed
+distributions with this property: the Student-t distribution, which is
+expressed as a Normal scale mixture with the Inverse Gamma as mixing
+distribution; the Laplace distribution, which can be expressed either as a
+Normal scale mixture with the Exponential as mixing distribution, or as a
+Normal inverse scale mixture with the Inverse Gamma as mixing distribution;
+and the Hyperbolic distribution, the Variance-Gamma distribution, and the
+Normal-Inverse Gaussian distribution, all three expressed via conjugate
+distributions using the Generalized Hyperbolic distribution. 
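The Normal scale mixture representation of the Student-t just described can be checked numerically: draw a variance from an Inverse Gamma distribution with both parameters equal to half the degrees of freedom, then draw a Normal with that variance. A minimal sketch (the degrees of freedom and sample size are illustrative, not from the paper):

```python
import numpy as np

def student_t_via_scale_mixture(df, size, rng):
    """Draw Student-t samples as a Normal scale mixture:
    tau ~ Inverse-Gamma(df/2, df/2), then x | tau ~ N(0, tau)."""
    # 1 / Gamma(a, rate=b) is Inverse-Gamma(a, scale=b); here a = b = df/2.
    tau = (df / 2.0) / rng.gamma(shape=df / 2.0, scale=1.0, size=size)
    return rng.normal(0.0, np.sqrt(tau))

rng = np.random.default_rng(0)
df = 5.0
x = student_t_via_scale_mixture(df, 200_000, rng)
# A standard Student-t with df > 2 has variance df / (df - 2).
print(x.var())  # close to 5/3
```

The heavy tails of the marginal come entirely from the Inverse Gamma mixing, which is what makes this family convenient for conjugate hierarchical constructions.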
For all
+distributions, iterative algorithms are derived based on hierarchical models
+that account for the uncertainties of the forward model. For estimation,
+Maximum A Posteriori (MAP) and Posterior Mean (PM) estimation via variational
+Bayesian approximation (VBA) are used. The performances of the resulting
+algorithms are compared in applications to 3D computed tomography (3D-CT) and
+chronobiology. Finally, a theoretical study is developed to compare the
+sparsity enforcing algorithms obtained via the Bayesian approach with the
+sparsity enforcing algorithms derived from regularization techniques, such as
+LASSO.
+"
+Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition," Spectral decomposition of the Koopman operator is attracting attention as a
+tool for the analysis of nonlinear dynamical systems. Dynamic mode
+decomposition is a popular numerical algorithm for Koopman spectral analysis;
+however, we often need to prepare nonlinear observables manually according to
+the underlying dynamics, which is not always possible since we may not have any
+a priori knowledge about them. In this paper, we propose a fully data-driven
+method for Koopman spectral analysis based on the principle of learning Koopman
+invariant subspaces from observed data. To this end, we propose minimization of
+the residual sum of squares of linear least-squares regression to estimate a
+set of functions that transforms data into a form in which the linear
+regression fits well. We introduce an implementation with neural networks and
+evaluate performance empirically using nonlinear dynamical systems and
+applications.
+"
+Structure and Content of the Visible Darknet," In this paper, we analyze the topology and the content found on the
+""darknet"", the set of websites accessible via Tor. We created a darknet spider
+and crawled the darknet starting from a bootstrap list by recursively following
+links. 
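The crawl just described reduces to a breadth-first traversal that starts from a bootstrap list, follows links, and records every service reached exactly once. A minimal sketch over an in-memory link graph (the page names are invented for illustration; a real spider would fetch and parse pages over Tor):

```python
from collections import deque

# Toy link graph standing in for fetched pages; names are illustrative only.
LINKS = {
    "bootstrap": ["wiki", "forum"],
    "wiki": ["forum", "market"],
    "forum": ["wiki"],
    "market": [],
}

def crawl(bootstrap):
    """Visit every page reachable from the bootstrap list, each exactly once."""
    seen, frontier = set(bootstrap), deque(bootstrap)
    while frontier:
        page = frontier.popleft()
        for link in LINKS.get(page, []):  # in a real spider: fetch + parse here
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

print(sorted(crawl(["bootstrap"])))  # the whole connected component
```

The `seen` set is what makes the recursion terminate on a cyclic link graph such as the wiki/forum loop above.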
We explored the whole connected component of more than 34,000 hidden +services, of which we found 10,000 to be online. Contrary to folklore belief, +the visible part of the darknet is surprisingly well-connected through hub +websites such as wikis and forums. We performed a comprehensive categorization +of the content using supervised machine learning. We observe that about half of +the visible dark web content is related to apparently licit activities based on +our classifier. A significant amount of content pertains to software +repositories, blogs, and activism-related websites. Among unlawful hidden +services, most pertain to fraudulent websites, services selling counterfeit +goods, and drug markets. +" +Pro-arrhythmogenic effects of heterogeneous tissue curvature: A suggestion for role of left atrial appendage in atrial fibrillation," Background: The arrhythmogenic role of atrial complex morphology has not yet +been clearly elucidated. We hypothesized that bumpy tissue geometry can induce +action potential duration (APD) dispersion and wavebreak in atrial fibrillation +(AF). +Methods and Results: We simulated 2D-bumpy atrial model by varying the degree +of bumpiness, and 3D-left atrial (LA) models integrated by LA computed +tomographic (CT) images taken from 14 patients with persistent AF. We also +analyzed wave-dynamic parameters with bipolar electrograms during AF and +compared them with LA-CT geometry in 30 patients with persistent AF. In +2D-bumpy model, APD dispersion increased (p<0.001) and wavebreak occurred +spontaneously when the surface bumpiness was higher, showing phase +transition-like behavior (p<0.001). Bumpiness gradient 2D-model showed that +spiral wave drifted in the direction of higher bumpiness, and phase singularity +(PS) points were mostly located in areas with higher bumpiness. In 3D-LA model, +PS density was higher in LA appendage (LAA) compared to other LA parts +(p<0.05). 
In 30 persistent AF patients, the surface bumpiness of LAA was
+5.8 times that of other LA parts (p<0.001), and exceeded the critical
+bumpiness needed to induce wavebreak. Wave dynamics complexity parameters were
+consistently dominant in LAA (p<0.001).
+Conclusion: The bumpy tissue geometry promotes APD dispersion, wavebreak, and
+spiral wave drift in in silico human atrial tissue, and corresponds to clinical
+electro-anatomical maps.
+"
+Fragmentation of phase-fluctuating condensates," We study zero-temperature quantum phase fluctuations in harmonically trapped
+one-dimensional interacting Bose gases, using the self-consistent
+multiconfigurational time-dependent Hartree method. In a regime of mesoscopic
+particle numbers and moderate contact couplings, it is shown that the
+phase-fluctuating condensate is properly described as a fragmented condensate.
+In addition, we demonstrate that the spatial dependence of the amplitude of
+phase fluctuations significantly deviates from what is obtained in Bogoliubov
+theory. Our results can be verified in currently available experiments. They
+therefore provide an opportunity both to experimentally benchmark the
+multiconfigurational time-dependent Hartree method and to directly observe,
+for the first time, the quantum many-body phenomenon of fragmentation
+in single traps.
+"
+Thermoelectric transport parallel to the planes in a multilayered Mott-Hubbard heterostructure," We present a theory for charge and heat transport parallel to the interfaces
+of a multilayer (ML) in which the interfacing gives rise to the redistribution
+of the electronic charges. The ensuing electric field couples self-consistently
+to the itinerant electrons, so that the properties of the ML crucially depend
+on an interplay between the on-site Coulomb forces and the long-range
+electrostatic forces. 
The ML is described by the Falicov-Kimball model and the +self-consistent solution is obtained by iterating simultaneously the DMFT and +the Poisson equations. This yields the reconstructed charge profile, the +electrical potential, the planar density of states, the transport function, and +the transport coefficients of the device. +We find that a heterostructure built of two Mott-Hubbard insulators exhibits, +in a large temperature interval, a linear conductivity and a large +temperature-independent thermopower. The charge and energy currents are +confined to the central part of the ML. Our results indicate that correlated +multilayers have the potential for applications; by tuning the band shift and +the Coulomb correlation on the central planes, we can bring the chemical +potential in the immediate proximity of the Mott-Hubbard gap edge and optimize +the transport properties of the device. In such a heterostructure, a small gate +voltage can easily induce a MI transition. This switching does not involve the +diffusion of electrons over macroscopic distances and it is much faster than in +ordinary semiconductors. Furthermore, the right combination of strongly +correlated materials with small ZT can produce, theoretically at least, a +heterostructure with a large ZT. +" +Mining Device-Specific Apps Usage Patterns from Large-Scale Android Users," When smartphones, applications (a.k.a, apps), and app stores have been widely +adopted by the billions, an interesting debate emerges: whether and to what +extent do device models influence the behaviors of their users? The answer to +this question is critical to almost every stakeholder in the smartphone app +ecosystem, including app store operators, developers, end-users, and network +providers. To approach this question, we collect a longitudinal data set of app +usage through a leading Android app store in China, called Wandoujia. 
The data +set covers the detailed behavioral profiles of 0.7 million (761,262) unique +users who use 500 popular types of Android devices and about 0.2 million +(228,144) apps, including their app management activities, daily network access +time, and network traffic of apps. We present a comprehensive study on +investigating how the choices of device models affect user behaviors such as +the adoption of app stores, app selection and abandonment, data plan usage, +online time length, the tendency to use paid/free apps, and the preferences to +choosing competing apps. Some significant correlations between device models +and app usage are derived, leading to important findings on the various user +behaviors. For example, users owning different device models have a substantial +diversity of selecting competing apps, and users owning lower-end devices spend +more money to purchase apps and spend more time under cellular network. +" +Homotopical Stable Ranks for Certain C*-algebras," We study the general and connected stable ranks for C*-algebras. We estimate +these ranks for pullbacks of C*-algebras, and for tensor products by +commutative C*-algebras. Finally, we apply these results to determine these +ranks for certain commutative C*-algebras, and non-commutative CW-complexes. +" +Covariantly functorial wrapped Floer theory on Liouville sectors," We introduce a class of Liouville manifolds with boundary which we call +Liouville sectors. We define the wrapped Fukaya category, symplectic +cohomology, and the open-closed map for Liouville sectors, and we show that +these invariants are covariantly functorial with respect to inclusions of +Liouville sectors. From this foundational setup, a local-to-global principle +for Abouzaid's generation criterion follows. 
+" +Robust Sparse Covariance Estimation by Thresholding Tyler's M-Estimator," Estimating a high-dimensional sparse covariance matrix from a limited number +of samples is a fundamental problem in contemporary data analysis. Most +proposals to date, however, are not robust to outliers or heavy tails. Towards +bridging this gap, in this work we consider estimating a sparse shape matrix +from $n$ samples following a possibly heavy tailed elliptical distribution. We +propose estimators based on thresholding either Tyler's M-estimator or its +regularized variant. We derive bounds on the difference in spectral norm +between our estimators and the shape matrix in the joint limit as the dimension +$p$ and sample size $n$ tend to infinity with $p/n\to\gamma>0$. These bounds +are minimax rate-optimal. Results on simulated data support our theoretical +analysis. +" +Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction," As algorithms are increasingly used to make important decisions that affect +human lives, ranging from social benefit assignment to predicting risk of +criminal recidivism, concerns have been raised about the fairness of +algorithmic decision making. Most prior works on algorithmic fairness +normatively prescribe how fair decisions ought to be made. In contrast, here, +we descriptively survey users for how they perceive and reason about fairness +in algorithmic decision making. +A key contribution of this work is the framework we propose to understand why +people perceive certain features as fair or unfair to be used in algorithms. +Our framework identifies eight properties of features, such as relevance, +volitionality and reliability, as latent considerations that inform people's +moral judgments about the fairness of feature use in decision-making +algorithms. We validate our framework through a series of scenario-based +surveys with 576 people. 
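The thresholded Tyler's M-estimator described in the covariance abstract above can be sketched directly: compute Tyler's estimator by its standard fixed-point iteration (trace-normalized to fix the scale ambiguity of a shape matrix), then hard-threshold the off-diagonal entries. The data, threshold, and iteration count below are illustrative, not the paper's tuning:

```python
import numpy as np

def tyler_shape(X, iters=50):
    """Tyler's M-estimator of the shape matrix via fixed-point iteration."""
    n, p = X.shape
    S = np.eye(p)
    for _ in range(iters):
        Sinv = np.linalg.inv(S)
        d = np.einsum("ij,jk,ik->i", X, Sinv, X)  # x_i^T S^{-1} x_i
        S = (p / n) * (X.T / d) @ X               # sum_i x_i x_i^T / d_i
        S *= p / np.trace(S)                      # trace normalization
    return S

def hard_threshold(S, t):
    """Zero out off-diagonal entries smaller than t in absolute value."""
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5))  # true shape matrix: identity (sparse)
S_hat = hard_threshold(tyler_shape(X), t=0.15)
```

Because Tyler's estimator depends on the data only through the directions x_i/||x_i||, the same sketch applies unchanged to heavy-tailed elliptical samples, which is the point of using it instead of the sample covariance.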
We find that, based on a person's assessment of the +eight latent properties of a feature in our exemplar scenario, we can +accurately (> 85%) predict if the person will judge the use of the feature as +fair. +Our findings have important implications. At a high-level, we show that +people's unfairness concerns are multi-dimensional and argue that future +studies need to address unfairness concerns beyond discrimination. At a +low-level, we find considerable disagreements in people's fairness judgments. +We identify root causes of the disagreements, and note possible pathways to +resolve them. +" +Genetic Algorithm Based Floor Planning System," Genetic Algorithms are widely used in many different optimization problems +including layout design. The layout of the shelves play an important role in +the total sales metrics for superstores since this affects the customers' +shopping behaviour. This paper employed a genetic algorithm based approach to +design shelf layout of superstores. The layout design problem was tackled by +using a novel chromosome representation which takes many different parameters +to prevent dead-ends and improve shelf visibility into consideration. Results +show that the approach can produce reasonably good layout designs in very short +amounts of time. +" +"On SDEs with Lipschitz coefficients, driven by continuous, model-free price paths"," Using similar assumptions as in Revuz and Yor's book we prove the existence +and uniqueness of the solutions of SDEs with Lipschitz coefficients, driven by +continuous, model-free price paths. The main tool in our reasonings is a +model-free version of the Burkholder-Davis-Gundy inequality for integrals +driven by model-free, continuous price paths. 
+" +Stability of receding traveling waves for a fourth order degenerate parabolic free boundary problem," Consider the thin-film equation $h_t + \left(h h_{yyy}\right)_y = 0$ with a +zero contact angle at the free boundary, that is, at the triple junction where +liquid, gas, and solid meet. Previous results on stability and well-posedness +of this equation have focused on perturbations of equilibrium-stationary or +self-similar profiles, the latter eventually wetting the whole surface. These +solutions have their counterparts for the second-order porous-medium equation +$h_t - (h^m)_{yy} = 0$, where $m > 1$ is a free parameter. Both porous-medium +and thin-film equation degenerate as $h \searrow 0$, but the porous-medium +equation additionally fulfills a comparison principle while the thin-film +equation does not. +In this note, we consider traveling waves $h = \frac V 6 x^3 + \nu x^2$ for +$x \ge 0$, where $x = y-V t$ and $V, \nu \ge 0$ are free parameters. These +traveling waves are receding and therefore describe de-wetting, a phenomenon +genuinely linked to the fourth-order nature of the thin-film equation and not +encountered in the porous-medium case as it violates the comparison principle. +The linear stability analysis leads to a linear fourth-order +degenerate-parabolic operator for which we prove maximal-regularity estimates +to arbitrary orders of the expansion in $x$ in a right-neighborhood of the +contact line $x = 0$. This leads to a well-posedness and stability result for +the corresponding nonlinear equation. As the linearized evolution has different +scaling as $x \searrow 0$ and $x \to \infty$, the analysis is more intricate +than in related previous works. We anticipate that our approach is a natural +step towards investigating other situations in which the comparison principle +is violated, such as droplet rupture. 
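The traveling-wave claim above can be verified mechanically: in the moving frame $x = y - Vt$ the thin-film equation $h_t + (h h_{yyy})_y = 0$ becomes $-V h' + (h h''')' = 0$, and for $h = \frac{V}{6}x^3 + \nu x^2$ one has $h''' = V$, so the two terms cancel for every $V$ and $\nu$. A numeric polynomial check with illustrative values $V = 2$, $\nu = 3$:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

V, nu = 2.0, 3.0                 # illustrative; the cancellation holds for any V, nu
h = P([0.0, 0.0, nu, V / 6.0])   # h(x) = (V/6) x^3 + nu x^2
# Traveling-wave residual: -V h' + (h h''')'
residual = -V * h.deriv() + (h * h.deriv(3)).deriv()
print(np.allclose(residual.coef, 0.0))  # True: h is an exact traveling wave
```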
+" +Group Importance Sampling for Particle Filtering and MCMC," Bayesian methods and their implementations by means of sophisticated Monte +Carlo techniques have become very popular in signal processing over the last +years. Importance Sampling (IS) is a well-known Monte Carlo technique that +approximates integrals involving a posterior distribution by means of weighted +samples. In this work, we study the assignation of a single weighted sample +which compresses the information contained in a population of weighted samples. +Part of the theory that we present as Group Importance Sampling (GIS) has been +employed implicitly in different works in the literature. The provided analysis +yields several theoretical and practical consequences. For instance, we discuss +the application of GIS into the Sequential Importance Resampling framework and +show that Independent Multiple Try Metropolis schemes can be interpreted as a +standard Metropolis-Hastings algorithm, following the GIS approach. We also +introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS. +The first one, named Group Metropolis Sampling method, produces a Markov chain +of sets of weighted samples. All these sets are then employed for obtaining a +unique global estimator. The second one is the Distributed Particle +Metropolis-Hastings technique, where different parallel particle filters are +jointly used to drive an MCMC algorithm. Different resampled trajectories are +compared and then tested with a proper acceptance probability. The novel +schemes are tested in different numerical experiments such as learning the +hyperparameters of Gaussian Processes, two localization problems in a wireless +sensor network (with synthetic and real data) and the tracking of vegetation +parameters given satellite observations, where they are compared with several +benchmark Monte Carlo techniques. Three illustrative Matlab demos are also +provided. 
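The single-weighted-sample compression at the heart of the GIS abstract above can be pictured with a toy example: each group of importance samples is summarized by one particle resampled within the group plus a group weight equal to the group's average unnormalized weight, and the summaries are then combined as ordinary weighted samples. The Gaussian target and proposal below are illustrative choices, not from the paper:

```python
import numpy as np

def compress_group(samples, weights, rng):
    """Summarize a group of weighted samples by one particle resampled
    within the group and a single group weight (the mean unnormalized weight)."""
    probs = weights / weights.sum()
    rep = samples[rng.choice(len(samples), p=probs)]
    return rep, weights.mean()

def target_over_proposal(x):
    # Unnormalized N(0,1) target density over a N(0, 2^2) proposal density.
    return np.exp(-x**2 / 2) / np.exp(-x**2 / 8)

rng = np.random.default_rng(2)
reps, group_ws = [], []
for _ in range(200):                       # 200 groups of 100 samples each
    xs = rng.normal(0.0, 2.0, 100)
    rep, W = compress_group(xs, target_over_proposal(xs), rng)
    reps.append(rep)
    group_ws.append(W)
reps, group_ws = np.array(reps), np.array(group_ws)
est = np.sum(group_ws * reps) / group_ws.sum()  # estimate of E[x] under N(0,1)
print(est)  # near 0
```

After compression each group behaves like one weighted sample, which is what lets whole particle-filter populations be compared inside a Metropolis-type acceptance step.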
+" +Double Temporal Sparsity Based Accelerated Reconstruction in Compressed Sensing fMRI," A number of reconstruction methods have been proposed recently for +accelerated functional Magnetic Resonance Imaging (fMRI) data collection. +However, existing methods suffer with the challenge of greater artifacts at +high acceleration factors. This paper addresses the issue of accelerating fMRI +collection via undersampled k-space measurements combined with the proposed +Double Temporal Sparsity based Reconstruction (DTSR) method with the l1 -l1 +norm constraint. The robustness of the proposed DTSR method has been thoroughly +evaluated both at the subject level and at the group level on real fMRI data. +Results are presented at various acceleration factors. Quantitative analysis in +terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative +analysis in terms of reproducibility of brain Resting State Networks (RSNs) +demonstrate that the proposed method is accurate and robust. In addition, the +proposed DTSR method preserves brain networks that are important for studying +fMRI data. Compared to the existing accelerated fMRI reconstruction methods, +the DTSR method shows promising potential with an improvement of 10-12dB in +PSNR with acceleration factors upto 3.5. Simulation results on real data +demonstrate that DTSR method can be used to acquire accelerated fMRI with +accurate detection of RSNs. +" +A continuum model for distributions of dislocations incorporating short-range interactions," Dislocations are the main carriers of the permanent deformation of crystals. +For simulations of engineering applications, continuum models where material +microstructures are represented by continuous density distributions of +dislocations are preferred. It is challenging to capture in the continuum model +the short-range dislocation interactions, which vanish after the standard +averaging procedure from discrete dislocation models. 
In this study, we +consider systems of parallel straight dislocation walls and develop continuum +descriptions for the short-range interactions of dislocations by using +asymptotic analysis. The obtained continuum short-range interaction formulas +are incorporated in the continuum model for dislocation dynamics based on a +pair of dislocation density potential functions that represent continuous +distributions of dislocations. This derived continuum model is able to describe +the anisotropic dislocation interaction and motion. Mathematically, these +short-range interaction terms ensure strong stability property of the continuum +model that is possessed by the discrete dislocation dynamics model. The derived +continuum model is validated by comparisons with the discrete dislocation +dynamical simulation results. +" +Phase transition and power-law coarsening in Ising-doped voter model," We examine an opinion formation model, which is a mixture of Voter and Ising +agents. Numerical simulations show that even a very small fraction ($\sim 1\%$) +of the Ising agents drastically changes the behaviour of the Voter model. The +Voter agents act as a medium, which correlates sparsely dispersed Ising agents, +and the resulting ferromagnetic ordering persists up to a certain temperature. +Upon addition of the Ising agents, a logarithmically slow coarsening of the +Voter model ($d=2$), or its active steady state ($d=3$), change into an +Ising-type power-law coarsening. +" +A probabilistic data-driven model for planar pushing," This paper presents a data-driven approach to model planar pushing +interaction to predict both the most likely outcome of a push and its expected +variability. The learned models rely on a variation of Gaussian processes with +input-dependent noise called Variational Heteroscedastic Gaussian processes +(VHGP) that capture the mean and variance of a stochastic function. 
We show +that we can learn accurate models that outperform analytical models after less +than 100 samples and saturate in performance with less than 1000 samples. We +validate the results against a collected dataset of repeated trajectories, and +use the learned models to study questions such as the nature of the variability +in pushing, and the validity of the quasi-static assumption. +" +Dual constant-flux energy cascades to both large scales and small scales," In this paper, we present an overview of concepts and data concerning inverse +cascades of excitation towards scales larger than the forcing scale in a +variety of contexts, from two-dimensional fluids and wave turbulence, to +geophysical flows in the presence of rotation and stratification. We briefly +discuss the role of anisotropy in the occurrence and properties of such +cascades. We then show that the cascade of some invariant, for example the +total energy, may be transferred through nonlinear interactions to both the +small scales and the large scales, with in each case a constant flux. This is +in contrast to the classical picture, and we illustrate such a dual cascade in +the context of atmospheric and oceanic observations, direct numerical +simulations and modeling. We also show that this dual cascade of total energy +can in fact be decomposed in some cases into separate cascades of the kinetic +and potential energies, provided the Froude and Rossby numbers are small +enough. In all cases, the potential energy flux remains small, of the order of +10% or less relative to the kinetic energy flux. 
Finally, we demonstrate that, +in the small-scale inertial range, approximate equipartition between potential +and kinetic modes is obtained, leading to an energy ratio close to one, with +strong departure at large scales due to the dominant kinetic energy inverse +cascade and piling-up at the lowest spatial frequency, and at small scales due +to unbalanced dissipation processes, even though the Prandtl number is equal to +one. +" +Worst-case Optimal Submodular Extensions for Marginal Estimation," Submodular extensions of an energy function can be used to efficiently +compute approximate marginals via variational inference. The accuracy of the +marginals depends crucially on the quality of the submodular extension. To +identify the best possible extension, we show an equivalence between the +submodular extensions of the energy and the objective functions of linear +programming (LP) relaxations for the corresponding MAP estimation problem. This +allows us to (i) establish the worst-case optimality of the submodular +extension for Potts model used in the literature; (ii) identify the worst-case +optimal submodular extension for the more general class of metric labeling; and +(iii) efficiently compute the marginals for the widely used dense CRF model +with the help of a recently proposed Gaussian filtering method. Using synthetic +and real data, we show that our approach provides comparable upper bounds on +the log-partition function to those obtained using tree-reweighted message +passing (TRW) in cases where the latter is computationally feasible. +Importantly, unlike TRW, our approach provides the first practical algorithm to +compute an upper bound on the dense CRF model. +" +Superconducting energy gap in $\rm Ba_{1-x}K_xBiO_3$: Temperature dependence," The superconducting energy gap of $\rm Ba_{1-x}K_xBiO_3$ has been measured by +tunneling. 
Despite the fact that the sample was macroscopically single phase
+with a very sharp superconducting transition $T_c$ at 32~$K$, some of the
+measured tunnel junctions made by point contacts between a silver tip and a
+single crystal of $\rm Ba_{1-x}K_xBiO_3$ had a lower transition at 20~$K$.
+Local variation of the potassium concentration, as well as oxygen deficiency
+in $\rm Ba_{1-x}K_xBiO_3$ at the place where the point contact is made, can
+account for the change of $T_c$. The conductance curves of the tunnel
+junctions reveal BCS behavior with a small broadening of the
+superconducting-gap structure. The value of the energy gap scales with $T_c$.
+The reduced gap amounts to $2\Delta/kT_c = 4$-$4.3$, indicating a medium
+coupling strength. The temperature dependence of the energy gap follows the
+BCS prediction.
+"
+Square functions and the Hamming cube: Duality," For $110^{9.5}M_\odot$. About $40\%$ of the CANDELS galaxies
+have SFHs whose maximum occurs at or near the epoch of observation. The Dense
+Basis method is scalable and offers a general approach to a broad class of
+data-science problems.
+"
+Two-stage multipolar ordering in Pr(TM)$_2$Al$_{20}$ Kondo materials," Among heavy fermion materials, there is a set of rare-earth intermetallics
+with non-Kramers Pr$^{3+}$ $4f^2$ moments which exhibit a rich phase diagram
+with intertwined quadrupolar orders, superconductivity, and non-Fermi liquid
+behavior. However, more subtle broken symmetries such as multipolar orders in
+these Kondo materials remain poorly studied. Here, we argue that multi-spin
+interactions between local moments beyond the conventional two-spin exchange
+must play an important role in Kondo materials near the ordered to heavy Fermi
+liquid transition. We show that this drives a plethora of phases with
+coexisting multipolar orders and multiple thermal phase transitions, providing
+a natural framework for interpreting experiments on the Pr(TM)$_2$Al$_{20}$
+class of compounds. 
+" +Spin-Momentum Locking in the Near Field of Metal Nanoparticles," Light carries both spin and momentum. Spin-orbit interactions of light come +into play at the subwavelength scale of nano-optics and nano-photonics, where +they determine the behaviour of light. These phenomena, in which the spin +affects and controls the spatial degrees of freedom of light, are attracting +rapidly growing interest. Here we present results on the spin-momentum locking +in the near field of metal nanostructures supporting localized surface +resonances. These systems can confine light to very small dimensions below the +diffraction limit, leading to a striking near-field enhancement. In contrast to +the propagating evanescent waves of surface plasmon-polariton modes, the +electromagnetic near-field of localized surface resonances does not exhibit a +definite position-independent momentum or polarization. Our results can be +useful to investigate the spin-orbit interactions of light for complex +evanescent fields. Note that the spin of the incident light can control the +rotation direction of the canonical momentum. +" +Persistence and extinction for stochastic ecological difference equations with feedbacks," Species' densities experience both internal frequency-dependent feedbacks due +to population structure and external feedbacks from abiotic factors. These +feedbacks can determine whether populations persist or go extinct, and may be +subject to stochastic fluctuations. To provide a general mathematical framework +for studying these effects, we develop theorems for stochastic persistence and +exclusion for stochastic ecological difference equations accounting for +feedbacks. Specifically, we use the stochastic analog of average Lyapunov +functions to develop sufficient and necessary conditions for (i) all population +densities spending little time at low densities i.e. 
stochastic persistence, +and (ii) population trajectories asymptotically approaching the extinction set +with positive probability. For (i) and (ii), respectively, we provide +quantitative estimates on the fraction of time that the system is near the +extinction set, and the probability of asymptotic extinction as a function of +the initial state of the system. Furthermore, in the case of persistence, we +provide lower bounds for the time to escape neighborhoods of the extinction +set. To illustrate the applicability of our results, we analyze models of +evolutionary games, stochastic Lotka-Volterra difference equations, trait +evolution, and spatially structured disease dynamics. Our analysis of these +models demonstrates environmental stochasticity facilitates coexistence in the +hawk-dove game, but inhibits coexistence in the rock-paper-scissors game and +Lotka-Volterra predator-prey model. Furthermore, environmental fluctuations +with positive auto-correlations can promote persistence of evolving populations +and disease persistence in patchy landscapes. While these results help close +the gap between persistence theories for deterministic and stochastic systems, +we conclude by highlighting several challenges for future research. +" +Stochastic reachability of a target tube: Theory and computation," Given a discrete-time stochastic system and a time-varying sequence of target +sets, we consider the problem of maximizing the probability of the state +evolving within this tube under bounded control authority. This problem +subsumes existing work on stochastic viability and terminal hitting-time +stochastic reach-avoid problems. Of special interest is the stochastic reach +set, the set of all initial states from which the probability of staying in the +target tube is above a desired threshold. This set provides non-trivial +information about the safety and the performance of the system. 
In this paper, +we provide sufficient conditions under which the stochastic reach set is +closed, compact, and convex. We also discuss an underapproximative +interpolation technique for stochastic reach sets. Finally, we propose a +scalable, grid-free, and anytime algorithm that computes a polytopic +underapproximation of the stochastic reach set and synthesizes an open-loop +controller using convex optimization. We demonstrate the efficacy and +scalability of our approach over existing techniques using three numerical +simulations --- stochastic viability of a chain of integrators, stochastic +reach-avoid computation for a satellite rendezvous and docking problem, and +stochastic reachability of a target tube for a Dubin's car with a known turning +rate sequence. +" +Bright soliton to quantum droplet transition in a mixture of Bose-Einstein condensates," Attractive Bose-Einstein condensates can host two types of macroscopic +self-bound states of different nature: bright solitons and quantum liquid +droplets. Here, we investigate the connection between them with a Bose-Bose +mixture confined in an optical waveguide. We develop a simple theoretical model +to show that, depending on atom number and interaction strength, solitons and +droplets can be smoothly connected or remain distinct states coexisting only in +a bi-stable region. We experimentally measure their spin composition, extract +their density for a broad range of parameters and map out the boundary of the +region separating solitons from droplets. +" +Upper bounds for the spectral function on homogeneous spaces via volume growth," We use spectral embeddings to give upper bounds on the spectral function of +the Laplace--Beltrami operator on homogeneous spaces in terms of the volume +growth of balls. In the case of compact manifolds, our bounds extend the 1980 +lower bound of Peter Li for the smallest positive eigenvalue to all +eigenvalues. We also improve Li's bound itself. 
Our bounds translate to +explicit upper bounds on the heat kernel for both compact and noncompact +homogeneous spaces. +" +An impossibility theorem for gerrymandering," The U.S. Supreme Court is currently deliberating over whether a proposed +mathematical formula should be used to detect unconstitutional partisan +gerrymandering. We show that in some cases, this formula will only flag +bizarrely shaped districts as potentially constitutional. +" +Cooling-Rate Effects in Sodium Silicate Glasses: Bridging the Gap between Molecular Dynamics Simulations and Experiments," Although molecular dynamics (MD) simulations are commonly used to predict the +structure and properties of glasses, they are intrinsically limited to short +time scales, necessitating the use of fast cooling rates. It is therefore +challenging to compare results from MD simulations to experimental results for +glasses cooled on typical laboratory time scales. Based on MD simulations of a +sodium silicate glass with varying cooling rate (from 0.01 to 100 K/ps), here +we show that thermal history primarily affects the medium-range order +structure, while the short-range order is largely unaffected over the range of +cooling rates simulated. This results in a decoupling between the enthalpy and +volume relaxation functions, where the enthalpy quickly plateaus as the cooling +rate decreases, whereas density exhibits a slower relaxation. Finally, we +demonstrate that the outcomes of MD simulations can be meaningfully compared to +experimental values if properly extrapolated to slower cooling rates. +" +Representation Learning for Scale-free Networks," Network embedding aims to learn the low-dimensional representations of +vertexes in a network, while the structure and inherent properties of the network +are preserved. 
Existing network embedding works primarily focus on preserving +the microscopic structure, such as the first- and second-order proximity of +vertexes, while the macroscopic scale-free property is largely ignored. +The scale-free property depicts the fact that vertex degrees follow a heavy-tailed +distribution (i.e., only a few vertexes have high degrees) and is a critical +property of real-world networks, such as social networks. In this paper, we +study the problem of learning representations for scale-free networks. We first +theoretically analyze the difficulty of embedding and reconstructing a +scale-free network in the Euclidean space, by converting our problem to the +sphere packing problem. Then, we propose the ""degree penalty"" principle for +designing scale-free property preserving network embedding algorithms: punishing +the proximity between high-degree vertexes. We introduce two implementations of +our principle by utilizing spectral techniques and a skip-gram model, +respectively. Extensive experiments on six datasets show that our algorithms +are able to not only reconstruct the heavy-tailed degree distribution, +but also outperform state-of-the-art embedding models in various network mining +tasks, such as vertex classification and link prediction. +" +"Extractive Summarization: Limits, Compression, Generalized Model and Heuristics"," Due to its promise to alleviate information overload, text summarization has +attracted the attention of many researchers. However, it has remained a serious +challenge. Here, we first prove empirical limits on the recall (and F1-scores) +of extractive summarizers on the DUC datasets under ROUGE evaluation for both +the single-document and multi-document summarization tasks. 
Next we define the +concept of compressibility of a document and present a new model of +summarization, which generalizes existing models in the literature and +integrates several dimensions of the summarization, viz., abstractive versus +extractive, single versus multi-document, and syntactic versus semantic. +Finally, we examine some new and existing single-document summarization +algorithms in a single framework and compare with state of the art summarizers +on DUC data. +" +Backprop as Functor: A compositional perspective on supervised learning," A supervised learning algorithm searches over a set of functions $A \to B$ +parametrised by a space $P$ to find the best approximation to some ideal +function $f\colon A \to B$. It does this by taking examples $(a,f(a)) \in +A\times B$, and updating the parameter according to some rule. We define a +category where these update rules may be composed, and show that gradient +descent---with respect to a fixed step size and an error function satisfying a +certain property---defines a monoidal functor from a category of parametrised +functions to this category of update rules. This provides a structural +perspective on backpropagation, as well as a broad generalisation of neural +networks. +" +Entropic selection of concepts unveils hidden topics in documents corpora," The organization and evolution of science has recently become itself an +object of scientific quantitative investigation, thanks to the wealth of +information that can be extracted from scientific documents, such as citations +between papers and co-authorship between researchers. However, only few studies +have focused on the concepts that characterize full documents and that can be +extracted and analyzed, revealing the deeper organization of scientific +knowledge. 
Unfortunately, several concepts can be so common across documents +that they hinder the emergence of the underlying topical structure of the +document corpus, because they give rise to a large number of spurious and +trivial relations among documents. To identify and remove common concepts, we +introduce a method to gauge their relevance according to an objective +information-theoretic measure related to the statistics of their occurrence +across the document corpus. After progressively removing concepts that, +according to this metric, can be considered as generic, we find that the topic +organization displays a correspondingly more refined structure. +" +An Optimal Multi-layer Reinsurance Policy under Conditional Tail Expectation," A usual reinsurance policy for insurance companies admits one or two layers +of the payment deductions. Under the optimal criterion of minimizing the +conditional tail expectation (CTE) risk measure of the insurer's total risk, +this article generalizes an optimal stop-loss reinsurance policy to an optimal +multi-layer reinsurance policy. To achieve such an optimal multi-layer reinsurance +policy, this article starts from a given optimal stop-loss reinsurance policy +$f(\cdot).$ In the first step, it cuts the interval $[0,\infty)$ into two +intervals $[0,M_1)$ and $[M_1,\infty).$ It then shifts the origin of the Cartesian +coordinate system to $(M_{1},f(M_{1}))$ and shows that under the $CTE$ +criteria $f(x)I_{[0, M_1)}(x)+(f(M_1)+f(x-M_1))I_{[M_1,\infty)}(x)$ is, again, +an optimal policy. This extension procedure can be repeated to obtain an +optimal k-layer reinsurance policy. Finally, unknown parameters of the optimal +multi-layer reinsurance policy are estimated using some additional appropriate +criteria. 
Three simulation-based studies have been conducted to demonstrate: +({\bf 1}) The practical applications of our findings and ({\bf 2}) How one may +employ other appropriate criteria to estimate unknown parameters of an optimal +multi-layer contract. The multi-layer reinsurance policy, like the original +stop-loss reinsurance policy, is optimal in the same sense. Moreover, it +satisfies some other optimality criteria which the original policy does not. Under +the optimal criterion of minimizing a general translative and monotone risk measure +$\rho(\cdot)$ of {\it either} the insurer's total risk {\it or} both the +insurer's and the reinsurer's total risks, this article (in its discussion) +also extends a given optimal reinsurance contract $f(\cdot)$ to a multi-layer +and continuous reinsurance policy. +" +"Tusnády's problem, the transference principle, and non-uniform QMC sampling"," It is well-known that for every $N \geq 1$ and $d \geq 1$ there exist point +sets $x_1, \dots, x_N \in [0,1]^d$ whose discrepancy with respect to the +Lebesgue measure is of order at most $(\log N)^{d-1} N^{-1}$. In a more general +setting, the first author proved together with Josef Dick that for any +normalized measure $\mu$ on $[0,1]^d$ there exist points $x_1, \dots, x_N$ +whose discrepancy with respect to $\mu$ is of order at most $(\log +N)^{(3d+1)/2} N^{-1}$. The proof used methods from combinatorial mathematics, +and in particular a result of Banaszczyk on balancings of vectors. In the +present note we use a version of the so-called transference principle together +with recent results on the discrepancy of red-blue colorings to show that for +any $\mu$ there even exist points having discrepancy of order at most $(\log +N)^{d-\frac12} N^{-1}$, which is almost as good as the discrepancy bound in the +case of the Lebesgue measure. 
+" +HBT+: an improved code for finding subhalos and building merger trees in cosmological simulations," Dark matter subhalos are the remnants of (incomplete) halo mergers. +Identifying them and establishing their evolutionary links in the form of +merger trees is one of the most important applications of cosmological +simulations. The Hierarchical Bound-Tracing (HBT) code identifies halos as they +form and tracks their evolution as they merge, simultaneously detecting +subhalos and building their merger trees. Here we present a new implementation +of this approach, HBT+, that is much faster, more user-friendly, and more +physically complete than the original code. Applying HBT+ to cosmological +simulations, we show that both the subhalo mass function and the peak-mass +function are well fit by similar double-Schechter functions. The ratio between +the two is highest at the high-mass end, reflecting the resilience of massive +subhalos that experience substantial dynamical friction but limited tidal +stripping. The radial distribution of the most massive subhalos is more +concentrated than the universal radial distribution of lower-mass subhalos. +Subhalo finders that work in configuration space tend to underestimate the +masses of massive subhalos, an effect that is stronger in the host centre. This +may explain, at least in part, the excess of massive subhalos in galaxy cluster +centres inferred from recent lensing observations. We demonstrate that the +peak-mass function is a powerful diagnostic of merger tree defects, and the +merger trees constructed using HBT+ do not suffer from the missing or switched +links that tend to afflict merger trees constructed from more conventional halo +finders. We make the HBT+ code publicly available. +" +On Statistical Non-Significance," Significance tests are probably the most widespread form of inference in +empirical research, and significance is often interpreted as providing greater +informational content than non-significance. 
In this article we show, however, +that rejection of a point null often carries very little information, while +failure to reject may be highly informative. This is particularly true in +empirical contexts where data sets are large and where there are rarely reasons +to put substantial prior probability on a point null. Our results challenge the +usual practice of conferring point null rejections a higher level of scientific +significance than non-rejections. In consequence, we advocate a visible +reporting and discussion of non-significant results in empirical practice. +" +On the Interplay between Strong Regularity and Graph Densification," In this paper we analyze the practical implications of Szemerédi's +regularity lemma in the preservation of metric information contained in large +graphs. To this end, we present a heuristic algorithm to find regular +partitions. Our experiments show that this method is quite robust to the +natural sparsification of proximity graphs. In addition, this robustness can be +enforced by graph densification. +" +Nonabelian Landau-Ginzburg orbifolds and Calabi-Yau/Landau-Ginzburg correspondence," In this paper, we study the bigraded vector space structure of +Landau-Ginzburg orbifolds. We prove the formula for the generating function of +the Hodge numbers of possibly nonabelian Landau-Ginzburg orbifolds. As an +application, we calculate the Hodge numbers for all nondegenerate quintic +homogeneous polynomials with five variables. These results yield an evidence +for the Calabi-Yau/Landau-Ginzburg correspondence between the Calabi-Yau +geometries and the Landau-Ginzburg B-models. +" +Setpoint Tracking with Partially Observed Loads," We use online convex optimization (OCO) for setpoint tracking with uncertain, +flexible loads. 
We consider full feedback from the loads, bandit feedback, and +two intermediate types of feedback: partial bandit where a subset of the loads +are individually observed and the rest are observed in aggregate, and Bernoulli +feedback where in each round the aggregator receives either full or bandit +feedback according to a known probability. We give sublinear regret bounds in +all cases. We numerically evaluate our algorithms on examples with +thermostatically controlled loads and electric vehicles. +" +Size effect in the ionization energy of PAH clusters," We report the first experimental measurement of the near-threshold +photo-ionization spectra of polycyclic aromatic hydrocarbon clusters made of +pyrene C16H10 and coronene C24H12, obtained using imaging photoelectron +photoion coincidence spectrometry with a VUV synchrotron beamline. The +experimental ionization energies are compared with calculated ones +obtained from simulations using a dedicated electronic structure treatment for +large ionized molecular clusters. Experiment and theory consistently find a +decrease of the ionization energy with cluster size. The inclusion of +temperature effects in the simulations leads to a lowering of this energy and +to a quantitative agreement with the experiment. In the case of pyrene, both +theory and experiment show a discontinuity in the IE trend for the hexamer. +" +Runaway Feedback Loops in Predictive Policing," Predictive policing systems are increasingly used to determine how to +allocate police across a city in order to best prevent crime. Discovered crime +data (e.g., arrest counts) are used to help update the model, and the process +is repeated. Such systems have been empirically shown to be susceptible to +runaway feedback loops, where police are repeatedly sent back to the same +neighborhoods regardless of the true crime rate. 
+In response, we develop a mathematical model of predictive policing that +proves why this feedback loop occurs, show empirically that this model exhibits +such problems, and demonstrate how to change the inputs to a predictive +policing system (in a black-box manner) so the runaway feedback loop does not +occur, allowing the true crime rate to be learned. Our results are +quantitative: we can establish a link (in our model) between the degree to +which runaway feedback causes problems and the disparity in crime rates between +areas. Moreover, we can also demonstrate the way in which \emph{reported} +incidents of crime (those reported by residents) and \emph{discovered} +incidents of crime (i.e. those directly observed by police officers dispatched +as a result of the predictive policing algorithm) interact: in brief, while +reported incidents can attenuate the degree of runaway feedback, they cannot +entirely remove it without the interventions we suggest. +" +AWAKE-related benchmarking tests for simulation codes," Two tests are described that were developed for benchmarking and comparison +of numerical codes in the context of AWAKE experiment. +" +Ricci-flat metrics on the cone over $\mathbb{CP}^2 \# \overline{\mathbb{CP}^2}$," We describe a framework for constructing the Ricci-flat metrics on the total +space of the canonical bundle over $\mathbb{CP}^2 \# \overline{\mathbb{CP}^2}$ +(the del Pezzo surface of rank one). We construct explicitly the first-order +deformation of the so-called `orthotoric metric' on this manifold. We also show +that the deformation of the corresponding conformal Killing-Yano form does not +exist. +" +Solving Zero-sum Games using Best Response Oracles with Applications to Search Games," We present efficient algorithms for computing optimal or approximately +optimal strategies in a zero-sum game for which Player I has n pure strategies +and Player II has an arbitrary number of pure strategies. 
We assume that for +any given mixed strategy of Player I, a best response or ""approximate"" best +response of Player II can be found by an oracle in time polynomial in n. We +then show how our algorithms may be applied to several search games with +applications to security and counter-terrorism. We evaluate our main algorithm +experimentally on a prototypical search game. Our results show it performs well +compared to an existing, well-known algorithm for solving zero-sum games that +can also be used to solve search games, given a best response oracle. +" +On the idea of a new artificial intelligence based optimization algorithm inspired from the nature of vortex," In this paper, the idea of a new artificial intelligence based optimization +algorithm, which is inspired by the nature of the vortex, is presented briefly. +As a bio-inspired computation algorithm, the idea is generally +focused on a typical vortex flow / behavior in nature and draws on some +dynamics that occur in vortices. Briefly, the +algorithm is also a swarm-oriented evolutionary problem-solving approach, +because it includes many methods related to the elimination of weak swarm members +and to improving the solution process by supporting the solution space via +new swarm members. In order to have a better idea about the success of the +algorithm, it has been tested via some benchmark functions. The obtained +results show that the algorithm can be an alternative to the existing literature +for single-objective optimization. Vortex Optimization +Algorithm (VOA) is the name suggested by the authors for this new idea of an +intelligent optimization approach. +" +Sulfur Hazes in Giant Exoplanet Atmospheres: Impacts on Reflected Light Spectra," Recent work has shown that sulfur hazes may arise in the atmospheres of some +giant exoplanets due to the photolysis of H$_{2}$S. 
We investigate the impact +such a haze would have on an exoplanet's geometric albedo spectrum and how it +may affect the direct imaging results of WFIRST, a planned NASA space +telescope. For temperate (250 K $<$ T$_{\rm eq}$ $<$ 700 K) Jupiter--mass +planets, photochemical destruction of H$_{2}$S results in the production of +$\sim$1 ppmv of S$_8$ between 100 and 0.1 mbar, which, if cool enough, will +condense to form a haze. Nominal haze masses are found to drastically alter a +planet's geometric albedo spectrum: whereas a clear atmosphere is dark at +wavelengths between 0.5 and 1 $\mu$m due to molecular absorption, the addition +of a sulfur haze boosts the albedo there to $\sim$0.7 due to scattering. Strong +absorption by the haze shortward of 0.4 $\mu$m results in albedos $<$0.1, in +contrast to the high albedos produced by Rayleigh scattering in a clear +atmosphere. As a result, the color of the planet shifts from blue to orange. +The existence of a sulfur haze masks the molecular signatures of methane and +water, thereby complicating the characterization of atmospheric composition. +Detection of such a haze by WFIRST is possible, though discriminating between a +sulfur haze and any other highly reflective, high altitude scatterer will +require observations shortward of 0.4 $\mu$m, which is currently beyond +WFIRST's design. +" +Real-time fMRI neurofeedback training of the amygdala activity with simultaneous EEG in veterans with combat-related PTSD," Posttraumatic stress disorder (PTSD) is a chronic and disabling +neuropsychiatric disorder characterized by insufficient top-down modulation of +the amygdala activity by the prefrontal cortex. Real-time fMRI neurofeedback +(rtfMRI-nf) is an emerging method with potential for modifying the +amygdala-prefrontal interactions. We report the first controlled emotion +self-regulation study in veterans with combat-related PTSD utilizing rtfMRI-nf +of the amygdala activity. 
PTSD patients in the experimental group (EG, n=20) +learned to upregulate BOLD activity of the left amygdala (LA) using rtfMRI-nf +during a happy emotion induction task. PTSD patients in the control group (CG, +n=11) were provided with a sham rtfMRI-nf. The study included three rtfMRI-nf +training sessions, and EEG recordings were performed simultaneously with fMRI. +PTSD severity was assessed using the Clinician-Administered PTSD Scale (CAPS). +The EG participants showed a significant reduction in total CAPS ratings, +including significant reductions in avoidance and hyperarousal symptoms. +Overall, 80% of the EG participants demonstrated clinically meaningful +reductions in CAPS ratings, compared to 38% in the CG. During the first +session, fMRI connectivity of the LA with the orbitofrontal cortex and the +dorsolateral prefrontal cortex (DLPFC) was progressively enhanced, and this +enhancement significantly and positively correlated with initial CAPS ratings. +Left-lateralized enhancement in upper alpha EEG coherence also exhibited a +significant positive correlation with the initial CAPS. Reduction in PTSD +severity between the first and last rtfMRI-nf sessions significantly correlated +with enhancement in functional connectivity between the LA and the left DLPFC. +Our results demonstrate that the rtfMRI-nf of the amygdala activity has the +potential to correct the amygdala-prefrontal functional connectivity +deficiencies specific to PTSD. +" +Approximations from Anywhere and General Rough Sets," Not all approximations arise from information systems. The problem of fitting +approximations, subjected to some rules (and related data), to information +systems in a rough scheme of things is known as the \emph{inverse problem}. The +inverse problem is more general than the duality (or abstract representation) +problems and was introduced by the present author in her earlier papers. 
From +the practical perspective, a few (as opposed to one) theoretical frameworks may +be suitable for formulating the problem itself. \emph{Granular operator spaces} +have been recently introduced and investigated by the present author in her +recent work in the context of antichain based and dialectical semantics for +general rough sets. The nature of the inverse problem is examined from +number-theoretic and combinatorial perspectives in a higher order variant of +granular operator spaces and some necessary conditions are proved. The results +and the novel approach would be useful in a number of unsupervised and semi +supervised learning contexts and algorithms. +" +The minimal volume of simplices containing a convex body," Let $K \subset \mathbb R^n$ be a convex body with barycenter at the origin. +We show there is a simplex $S \subset K$ having also barycenter at the origin +such that $\left(\frac{vol(S)}{vol(K)}\right)^{1/n} \geq \frac{c}{\sqrt{n}},$ +where $c>0$ is an absolute constant. This is achieved using stochastic +geometric techniques. Precisely, if $K$ is in isotropic position, we present a +method to find centered simplices verifying the above bound that works with +very high probability. +As a consequence, we provide correct asymptotic estimates on an old problem +in convex geometry. Namely, we show that the simplex $S_{min}(K)$ of minimal +volume enclosing a given convex body $K \subset \mathbb R^n$, fulfills the +following inequality $$\left(\frac{vol(S_{min}(K))}{vol(K)}\right)^{1/n} \leq d +\sqrt{n},$$ for some absolute constant $d>0$. Up to the constant, the estimate +cannot be lessened. +" +Socioeconomic bias in influenza surveillance," Individuals in low socioeconomic brackets are considered at-risk for +developing influenza-related complications and often exhibit higher than +average influenza-related hospitalization rates. 
This disparity has been +attributed to various factors, including restricted access to preventative and +therapeutic health care, limited sick leave, and household structure. Adequate +influenza surveillance in these at-risk populations is a critical precursor to +accurate risk assessments and effective intervention. However, the United +States of America's primary national influenza surveillance system (ILINet) +monitors outpatient healthcare providers, which may be largely inaccessible to +lower socioeconomic populations. Recent initiatives to incorporate +internet-source and hospital electronic medical records data into surveillance +systems seek to improve the timeliness, coverage, and accuracy of outbreak +detection and situational awareness. Here, we use a flexible statistical +framework for integrating multiple surveillance data sources to evaluate the +adequacy of traditional (ILINet) and next generation (BioSense 2.0 and Google +Flu Trends) data for situational awareness of influenza across poverty levels. +We find that zip codes in the highest poverty quartile are a critical +blind-spot for ILINet that the integration of next generation data fails to +ameliorate. +" +Java Code Analysis and Transformation into AWS Lambda Functions," Software developers are faced with the issue of either adapting their +programming model to the execution model (e.g. cloud platforms) or finding +appropriate tools to adapt the model and code automatically. A recent execution +model which would benefit from automated enablement is Function-as-a-Service. +Automating this process requires a pipeline which includes steps for code +analysis, transformation and deployment. In this paper, we outline the design +and runtime characteristics of Podilizer, a tool which implements the pipeline +specifically for Java source code as input and AWS Lambda as output. 
We +contribute technical and economic metrics about this concrete 'FaaSification' +process by observing the behaviour of Podilizer with two representative Java +software projects. +" +Criteria for Finite Difference Groebner Bases of Normal Binomial Difference Ideals," In this paper, we give decision criteria for normal binomial difference +polynomial ideals in the univariate difference polynomial ring F{y} to have +finite difference Groebner bases and an algorithm to compute the finite +difference Groebner bases if these criteria are satisfied. The novelty of these +criteria lies in the fact that complicated properties about difference +polynomial ideals are reduced to elementary properties of univariate +polynomials in Z[x]. +" +$p$-Euler equations and $p$-Navier-Stokes equations," We propose in this work new systems of equations which we call $p$-Euler +equations and $p$-Navier-Stokes equations. $p$-Euler equations are derived as +the Euler-Lagrange equations for the action represented by the Benamou-Brenier +characterization of Wasserstein-$p$ distances, with incompressibility +constraint. $p$-Euler equations have similar structures with the usual Euler +equations but the `momentum' is the signed ($p-1$)-th power of the velocity. In +the 2D case, the $p$-Euler equations have streamfunction-vorticity formulation, +where the vorticity is given by the $p$-Laplacian of the streamfunction. By +adding diffusion presented by $\gamma$-Laplacian of the velocity, we obtain +what we call $p$-Navier-Stokes equations. If $\gamma=p$, the {\it a priori} +energy estimates for the velocity and momentum have dual symmetries. Using +these energy estimates and a time-shift estimate, we show the global existence +of weak solutions for the $p$-Navier-Stokes equations in $\mathbb{R}^d$ for +$\gamma=p$ and $p\ge d\ge 2$ through a compactness criterion. 
+" +Unconventional minimal subtraction and Bogoliubov-Parasyuk-Hepp-Zimmermann: massive scalar theory and critical exponents," We introduce a simpler although unconventional minimal subtraction +renormalization procedure in the case of a massive scalar $\lambda \phi^{4}$ +theory in Euclidean space using dimensional regularization. We show that this +method is very similar to its counterpart in massless field theory. In +particular, the choice of using the bare mass at higher perturbative order +instead of employing its tree-level counterpart eliminates all tadpole +insertions at that order. As an application, we compute diagrammatically the +critical exponents $\eta$ and $\nu$ at least up to two loops. We perform an +explicit comparison with the Bogoliubov-Parasyuk-Hepp-Zimmermann ($BPHZ$) +method at the same loop order, show that the proposed method requires fewer +diagrams and establish a connection between the two approaches. +" +A covering theorem for singular measures in the Euclidean space," We prove that for any singular measure $\mu$ on $\mathbb{R}^n$ it is possible +to cover $\mu$-almost every point with $n$ families of Lipschitz slabs of +arbitrarily small total width. 
More precisely, up to a rotation, for every +$\delta>0$ there are $n$ countable families of $1$-Lipschitz functions +$\{f_i^1\}_{i\in\mathbb{N}},\ldots, \{f_i^n\}_{i\in\mathbb{N}},$ +$f_i^j:\{x_j=0\}\subset\mathbb{R}^n\to\mathbb{R}$, and $n$ sequences of +positive real numbers $\{\varepsilon_i^1\}_{i\in\mathbb{N}},\ldots, +\{\varepsilon_i^n\}_{i\in\mathbb{N}}$ such that, denoting $\hat x_j$ the +orthogonal projection of the point $x$ onto $\{x_j=0\}$ and +$$I_i^j:=\{x=(x_1,\ldots,x_n)\in \mathbb{R}^n:f_i^j(\hat x_j)-\varepsilon_i^j< +x_j< f_i^j(\hat x_j)+\varepsilon_i^j\},$$ it holds +$\sum_{i,j}\varepsilon_i^j\leq \delta$ and +$\mu(\mathbb{R}^n\setminus\bigcup_{i,j}I_i^j)=0.$ +We apply this result to show that, if $\mu$ is not absolutely continuous, it +is possible to approximate the identity with a sequence $g_h$ of smooth +equi-Lipschitz maps satisfying +$$\limsup_{h\to\infty}\int_{\mathbb{R}^n}{\rm{det}}(\nabla g_h) +d\mu<\mu(\mathbb{R}^n).$$ From this, we deduce a simple proof of the fact that +every top-dimensional Ambrosio-Kirchheim metric current in $\mathbb{R}^n$ is a +Federer-Fleming flat chain. +" +Linear-Time Algorithms for Maximum-Weight Induced Matchings and Minimum Chain Covers in Convex Bipartite Graphs," A bipartite graph $G=(U,V,E)$ is convex if the vertices in $V$ can be +linearly ordered such that for each vertex $u\in U$, the neighbors of $u$ are +consecutive in the ordering of $V$. An induced matching $H$ of $G$ is a +matching such that no edge of $E$ connects endpoints of two different edges of +$H$. We show that in a convex bipartite graph with $n$ vertices and $m$ +weighted edges, an induced matching of maximum total weight can be computed in +$O(n+m)$ time. An unweighted convex bipartite graph has a representation of +size $O(n)$ that records for each vertex $u\in U$ the first and last neighbor +in the ordering of $V$. Given such a compact representation, we compute an +induced matching of maximum cardinality in $O(n)$ time. 
+In convex bipartite graphs, maximum-cardinality induced matchings are dual to +minimum chain covers. A chain cover is a covering of the edge set by chain +subgraphs, that is, subgraphs that do not contain induced matchings of more +than one edge. Given a compact representation, we compute a representation of a +minimum chain cover in $O(n)$ time. If no compact representation is given, the +cover can be computed in $O(n+m)$ time. +All of our algorithms achieve optimal running time for the respective problem +and model. Previous algorithms considered only the unweighted case, and the +best algorithm for computing a maximum-cardinality induced matching or a +minimum chain cover in a convex bipartite graph had a running time of $O(n^2)$. +" +Enhanced steady-state dissolution flux in reactive convective dissolution," Chemical reactions can accelerate, slow down or even be at the very origin of +the development of dissolution-driven convection in partially miscible +stratifications, when they impact the density profile in the host fluid phase. +We numerically analyze the dynamics of this reactive convective dissolution in +the fully developed non-linear regime for a phase A dissolving into a host +layer containing a dissolved reactant B. We show that for a general +A+B$\rightarrow$C reaction in solution, the dynamics vary with the Rayleigh +numbers of the chemical species, i.e. with the nature of the chemicals in the +host phase. Depending on whether the reaction slows down, accelerates or is at +the origin of the development of convection, the spatial distributions of +species A, B or C, the dissolution flux and the reaction rate are different. We +show that chemical reactions enhance the steady-state flux as they consume A +and can induce more intense convection than in the absence of reactions. 
This +result is important in the context of CO$_2$ geological sequestration where +quantifying the storage rate of CO$_2$ dissolving into the host oil or aqueous +phase is crucial to assess the efficiency and the safety of the project. +" +Vapor Condensed and Supercooled Glassy Nanoclusters," We use molecular simulation to study the structural and dynamic properties of +glassy nanoclusters formed both through the direct condensation of the vapor +below the glass transition temperature, without the presence of a substrate, +and \textit{via} the slow supercooling of unsupported liquid nanodroplets. An +analysis of local structure using Voronoi polyhedra shows that the energetic +stability of the clusters is characterized by a large, increasing fraction of +bicapped square antiprism motifs. We also show that nanoclusters with similar +inherent structure energies are structurally similar, independent of their +history, which suggests the supercooled clusters access the same low energy +regions of the potential energy landscape as the vapor condensed clusters +despite their different methods of formation. By measuring the intermediate +scattering function at different radii from the cluster center, we find that +the relaxation dynamics of the clusters are inhomogeneous, with the core +becoming glassy above the glass transition temperature while the surface +remains mobile at low temperatures. This helps the clusters sample the highly +stable, low energy structures on the potential energy surface. Our work +suggests the nanocluster systems are structurally more stable than the +ultra-stable glassy thin films, formed through vapor deposition onto a cold +substrate, but the nanoclusters do not exhibit the superheating effects +characteristic of the ultra-stable glass states. 
+" +The Dayenu Boolean Function Is Almost Always True!," The Boolean function implicit in the famous Dayenu song, sung at the Passover +meal, is expressed in full conjunctive normal form, and it is proved that if +there are n miracles the number of truth-vectors satisfying it is $2^n -(n+1)$. +" +Autocalibrating and Calibrationless Parallel Magnetic Resonance Imaging as a Bilinear Inverse Problem," Modern reconstruction methods for magnetic resonance imaging (MRI) exploit +the spatially varying sensitivity profiles of receive-coil arrays as additional +source of information. This allows to reduce the number of time-consuming +Fourier-encoding steps by undersampling. The receive sensitivities are a priori +unknown and influenced by geometry and electric properties of the (moving) +subject. For optimal results, they need to be estimated jointly with the image +from the same undersampled measurement data. Formulated as an inverse problem, +this leads to a bilinear reconstruction problem related to multi-channel blind +deconvolution. In this work, we will discuss some recently developed approaches +for the solution of this problem. +" +"Friend, Collaborator, Student, Manager: How Design of an AI-Driven Game Level Editor Affects Creators"," Machine learning advances have afforded an increase in algorithms capable of +creating art, music, stories, games, and more. However, it is not yet +well-understood how machine learning algorithms might best collaborate with +people to support creative expression. To investigate how practicing designers +perceive the role of AI in the creative process, we developed a game level +design tool for Super Mario Bros.-style games with a built-in AI level +designer. In this paper we discuss our design of the Morai Maker intelligent +tool through two mixed-methods studies with a total of over one-hundred +participants. 
Our findings are as follows: (1) level designers vary in their +desired interactions with, and role of, the AI, (2) the AI prompted the level +designers to alter their design practices, and (3) the level designers +perceived the AI as having potential value in their design practice, varying +based on their desired role for the AI. +" +Approximation Algorithms for Barrier Sweep Coverage," Time-varying coverage, namely sweep coverage, is a recent development in the +area of wireless sensor networks, where a small number of mobile sensors sweep +or monitor a comparatively large number of locations periodically. In this +article we study barrier sweep coverage with mobile sensors where the barrier +is considered as a finite length continuous curve on a plane. The coverage at +every point on the curve is time-variant. We propose an optimal solution for +sweep coverage of a finite length continuous curve. Usually the energy source of a +mobile sensor is a battery with limited power, so energy restricted sweep +coverage is a challenging problem for long running applications. We propose an +energy restricted sweep coverage problem where every mobile sensor must visit +an energy source frequently to recharge or replace its battery. We propose a +$\frac{13}{3}$-approximation algorithm for this problem. The proposed algorithm +for multiple curves achieves the best possible approximation factor 2 for a +special case. We propose a 5-approximation algorithm for the general problem. +As an application of the barrier sweep coverage problem for a set of line +segments, we formulate a data gathering problem. In this problem a set of +mobile sensors arbitrarily monitors the line segments, one sensor for each. A set +of data mules periodically collects the monitoring data from the set of mobile +sensors. We prove that finding the minimum number of data mules to collect data +periodically from every mobile sensor is NP-hard and propose a 3-approximation +algorithm to solve it. 
+" +Reinforcement Learning via Recurrent Convolutional Neural Networks," Deep Reinforcement Learning has enabled the learning of policies for complex +tasks in partially observable environments, without explicitly learning the +underlying model of the tasks. While such model-free methods achieve +considerable performance, they often ignore the structure of task. We present a +natural representation of to Reinforcement Learning (RL) problems using +Recurrent Convolutional Neural Networks (RCNNs), to better exploit this +inherent structure. We define 3 such RCNNs, whose forward passes execute an +efficient Value Iteration, propagate beliefs of state in partially observable +environments, and choose optimal actions respectively. Backpropagating +gradients through these RCNNs allows the system to explicitly learn the +Transition Model and Reward Function associated with the underlying MDP, +serving as an elegant alternative to classical model-based RL. We evaluate the +proposed algorithms in simulation, considering a robot planning problem. We +demonstrate the capability of our framework to reduce the cost of replanning, +learn accurate MDP models, and finally re-plan with learnt models to achieve +near-optimal policies. +" +Evaluating Deep Convolutional Neural Networks for Material Classification," Determining the material category of a surface from an image is a demanding +task in perception that is drawing increasing attention. Following the recent +remarkable results achieved for image classification and object detection +utilising Convolutional Neural Networks (CNNs), we empirically study material +classification of everyday objects employing these techniques. More +specifically, we conduct a rigorous evaluation of how state-of-the art CNN +architectures compare on a common ground over widely used material databases. 
+Experimental results on three challenging material databases show that the best +performing CNN architectures can achieve up to 94.99\% mean average precision +when classifying materials. +" +Localization of JPEG double compression through multi-domain convolutional neural networks," When an attacker wants to falsify an image, in most cases she/he will +perform a JPEG recompression. Different techniques have been developed based on +diverse theoretical assumptions but very effective solutions have not been +developed yet. Recently, machine learning based approaches have started to +appear in the field of image forensics to solve diverse tasks such as +acquisition source identification and forgery detection. In this last case, the +aim ahead would be to get a trained neural network able, given a to-be-checked +image, to reliably localize the forged areas. With this in mind, our paper +proposes a step forward in this direction by analyzing how a single or double +JPEG compression can be revealed and localized using convolutional neural +networks (CNNs). Different kinds of input to the CNN have been taken into +consideration, and various experiments have been carried out, while also trying to +highlight potential issues to be further investigated. +" +From CDF to PDF --- A Density Estimation Method for High Dimensional Data," CDF2PDF is a method of PDF estimation by approximating CDF. The original idea +was previously proposed in [1] under the name SIC. However, SIC requires +additional hyper-parameter tuning, and no algorithms for computing higher +order derivatives from a trained NN are provided in [1]. CDF2PDF improves SIC by +avoiding the time-consuming hyper-parameter tuning part and enabling higher +order derivative computation to be done in polynomial time. Experiments of this +method for one-dimensional data show promising results. 
+" +Spectrally-normalized margin bounds for neural networks," This paper presents a margin-based multiclass generalization bound for neural +networks that scales with their margin-normalized ""spectral complexity"": their +Lipschitz constant, meaning the product of the spectral norms of the weight +matrices, times a certain correction factor. This bound is empirically +investigated for a standard AlexNet network trained with SGD on the mnist and +cifar10 datasets, with both original and random labels; the bound, the +Lipschitz constants, and the excess risks are all in direct correlation, +suggesting both that SGD selects predictors whose complexity scales with the +difficulty of the learning task, and secondly that the presented bound is +sensitive to this complexity. +" +Fully Coupled Simulation of Cosmic Reionization. III. Stochastic Early Reionization by the Smallest Galaxies," Previously we identified a new class of early galaxy that we estimate +contributes up to 30\% of the ionizing photons responsible for reionization. +These are low mass halos in the range $M_h =10^{6.5}-10^{8} M_{\odot}$ that +have been chemically enriched by supernova ejecta from prior Pop III star +formation. Despite their low star formation rates, these Metal Cooling halos +(MCs) are significant sources of ionizing radiation, especially at the onset of +reionization, due to their high number density and ionizing escape fractions. +Here we present a fully-coupled radiation hydrodynamic simulation of +reionization that includes these MCs as well the more massive hydrogen atomic +line cooling halos. Our method is novel: we perform halo finding inline with +the radiation hydrodynamical simulation, and assign escaping ionizing fluxes to +halos using a probability distribution function (PDF) measured from the +galaxy-resolving Renaissance Simulations. The PDF captures the mass dependence +of the ionizing escape fraction as well as the probability that a halo is +actively forming stars. 
With MCs, reionization starts earlier than if only +halos of $10^8 M_{\odot}$ and above are included; however, the redshift when +reionization completes is only marginally affected as this is driven by more +massive galaxies. Because star formation is intermittent in MCs, the earliest +phase of reionization exhibits a stochastic nature, with small H II regions +forming and recombining. Only later, once halos of mass $\sim 10^9 M_{\odot}$ +and above begin to dominate the ionizing emissivity, does reionization proceed +smoothly in the usual manner deduced from previous studies. This occurs at +$z\approx 10$ in our simulation. +" +Open Source Dataset and Deep Learning Models for Online Digit Gesture Recognition on Touchscreens," This paper presents an evaluation of deep neural networks for recognition of +digits entered by users on a smartphone touchscreen. A new large dataset of +Arabic numerals was collected for training and evaluation of the network. The +dataset consists of spatial and temporal touch data recorded for 80 digits +entered by 260 users. Two neural network models were investigated. The first +model was a 2D convolutional neural network (ConvNet) applied to bitmaps of the +glyphs created by interpolation of the sensed screen touches, and its topology +is similar to that of previously published models for offline handwriting +recognition from scanned images. The second model used a 1D ConvNet +architecture but was applied to the sequence of polar vectors connecting the +touch points. The models were found to provide accuracies of 98.50% and 95.86%, +respectively. The second model was much simpler, providing a reduction in the +number of parameters from 1,663,370 to 287,690. The dataset has been made +available to the community as an open source resource. 
The permanent, corresponding to bipartite graphs, was shown +to be #P-complete to compute exactly by Valiant (1979), and a fully polynomial +randomized approximation scheme (FPRAS) was presented by Jerrum, Sinclair, and +Vigoda (2004) using a Markov chain Monte Carlo (MCMC) approach. However, it has +remained an open question whether there exists an FPRAS for counting perfect +matchings in general graphs. In fact, it was unresolved whether the same Markov +chain defined by JSV is rapidly mixing in general. In this paper, we show that +it is not. We prove torpid mixing for any weighting scheme on hole patterns in +the JSV chain. As a first step toward overcoming this obstacle, we introduce a +new algorithm for counting matchings based on the Gallai-Edmonds decomposition +of a graph, and give an FPRAS for counting matchings in graphs that are +sufficiently close to bipartite. In particular, we obtain a fixed-parameter +tractable algorithm for counting matchings in general graphs, parameterized by +the greatest ""order"" of a factor-critical subgraph. +" +Density and spin modes in imbalanced normal Fermi gases from collisionless to hydrodynamic regime," We study mass and population imbalance effect on density (in-phase) and spin +(out-of-phase) collective modes in a two-component normal Fermi gas. By +calculating eigenmodes of the linearized Boltzmann equation as well as the +density/spin dynamic structure factor, we show that mass and population +imbalance effects offer a variety of collective mode crossover behaviors from +collisionless to hydrodynamic regimes. The mass imbalance effect shifts the +crossover regime to the higher-temperature, and a significant peak of the spin +dynamic structure factor emerges only in the collisionless regime. This is in +contrast to the case of mass and population balanced normal Fermi gases, where +the spin dynamic response is always absent. 
Although the population imbalance +effect does not shift the crossover regime, the spin dynamic structure factor +survives in both the collisionless and hydrodynamic regimes. +" +Bounds on parameters of minimally non-linear patterns," Let $ex(n, P)$ be the maximum possible number of ones in any 0-1 matrix of +dimensions $n \times n$ that avoids $P$. Matrix $P$ is called minimally +non-linear if $ex(n, P) = \omega(n)$ but $ex(n, P') = O(n)$ for every strict +subpattern $P'$ of $P$. We prove that the ratio between the length and width of +any minimally non-linear 0-1 matrix is at most $4$, and that a minimally +non-linear 0-1 matrix with $k$ rows has at most $5k-3$ ones. We also obtain an +upper bound on the number of minimally non-linear 0-1 matrices with $k$ rows. +In addition, we prove corresponding bounds for minimally non-linear ordered +graphs. The minimal non-linearity that we investigate for ordered graphs is for +the extremal function $ex_{<}(n, G)$, which is the maximum possible number of +edges in any ordered graph on $n$ vertices with no ordered subgraph isomorphic +to $G$. +" +The sum of multidimensional divisor function over values of quadratic polynomial," Let $F({\bf x})={\bf x}^tQ_m{\bf x}+\mathbf{b}^t{\bf x}+c\in\mathbb{Z}[{\bf +x}]$ be a quadratic polynomial in $\ell (\ge 3 )$ variables ${\bf x} +=(x_{1},...,x_{\ell})$, where $F({\bf x})$ is positive when ${\bf +x}\in\mathbb{R}_{\ge 1}^{\ell}$, $Q_m\in {\rm M}_{\ell}(\mathbb{Z})$ is an +$\ell\times\ell$ matrix and its discriminant $\det\left(Q_m^t+Q_m\right)\neq +0$. We give explicit asymptotic formulas for the following sum \[ +T_{k,F}(X)=\sum_{{\bf x}\in +[1,X]^{\ell}\cap\mathbb{Z}^{\ell}}\tau_{k}\left(F({\bf x})\right) \] with the +help of the circle method. Here +$\tau_{k}(n)=\#\{(x_1,x_2,...,x_{k})\in\mathbb{N}^{k}: n=x_1x_2...x_{k}\}$ with +$k\in\mathbb{Z}_{\ge 2}$ is the multidimensional divisor function. 
+" +Blazhko effect in the Galactic bulge fundamental mode RR~Lyrae stars I: Incidence rate and differences between modulated and non-modulated stars," We present the first paper of a series focused on the Blazhko effect in RR +Lyrae type stars pulsating in the fundamental mode, that are located in the +Galactic bulge. A~comprehensive overview about the incidence rate and +light-curve characteristics of the Blazhko stars is given. We analysed 8\,282 +stars having the best quality data in the OGLE-IV survey, and found that at +least $40.3$\,\% of stars show modulation of their light curves. The number of +Blazhko stars we identified is 3\,341, which is the largest sample ever studied +implying the most relevant statistical results currently available. Using +combined data sets with OGLE-III observations, we found that 50\,\% of stars +that show unresolved close peaks to the main component in OGLE-IV are actually +Blazhko stars with extremely long periods. Blazhko stars with modulation occur +preferentially among RR Lyrae stars with shorter pulsation periods in the +Galactic bulge. Fourier amplitude and phase coefficients based on the mean +light curves appear to be substantially lower for Blazhko stars than for stars +with unmodulated light curve in average. We derived new relations for the +compatibility parameter $D_{m}$ in $I$ passband and relations that allow for +differentiating modulated and non-modulated stars easily on the basis of +$R_{31}$, $\phi_{21}$ and $\phi_{31}$. Photometric metallicities, intrinsic +colours and absolute magnitudes computed using empirical relations are the same +for Blazhko and non-modulated stars in the Galactic bulge suggesting no +correlation between the occurrence of the Blazhko effect and these parameters. 
+" +Radiation damage and thermal shock response of carbon-fiber-reinforced materials to intense high-energy proton beams," A comprehensive study on the effects of energetic protons on carbon-fiber +composites and compounds under consideration for use as low-Z pion production +targets in future high-power accelerators and low-impedance collimating +elements for intercepting TeV-level protons at the Large Hadron Collider has +been undertaken addressing two key areas, namely, thermal shock absorption and +resistance to irradiation damage. +" +Common greedy wiring and rewiring heuristics do not guarantee maximum assortative graphs of given degree," We examine two greedy heuristics - wiring and rewiring - for constructing +maximum assortative graphs over all simple connected graphs with a target +degree sequence. Counterexamples show that natural greedy rewiring heuristics +do not necessarily return a maximum assortative graph, even though it is known +that the meta-graph of all simple connected graphs with given degree is +connected under rewiring. Counterexamples show an elegant greedy graph wiring +heuristic from the literature may fail to achieve the target degree sequence or +may fail to wire a maximally assortative graph. +" +Electromagnetically Induced Transparency with Superradiant and Subradiant states," We construct the electromagnetically induced transparency (EIT) by +dynamically coupling a superradiant state with a subradiant state. The +superradiant and subradiant states with enhanced and inhibited decay rates act +as the excited and metastable states in EIT, respectively. Their energy +difference determined by the distance between the atoms can be measured by the +EIT spectra, which renders this method useful in subwavelength metrology. The +scheme can also be applied to many atoms in nuclear quantum optics, where the +transparency point due to counter-rotating wave terms can be observed. 
+" +Highly efficient angularly resolving x-ray spectrometer optimized for absorption measurements with collimated sources," Highly collimated betatron radiation from a laser wakefield accelerator is a +promising tool for spectroscopic measurements. Therefore there is a requirement +to create spectrometers suited to the unique properties of such a source. We +demonstrate a spectrometer which achieves an energy resolution of < 5 eV at 9 +keV and is angularly resolving the x-ray emission allowing the reference and +spectrum to be recorded at the same time. The single photon analysis is used to +significantly reduce the background noise. Theoretical performance of various +configurations of the spectrometer is calculated by a ray-tracing algorithm. +The properties and performance of the spectrometer including the angular and +spectral resolution are demonstrated experimentally on absorption above the +K-edge of a Cu foil backlit by laser-produced betatron radiation x-ray beam. +" +Assessing the impact of non-vaccinators: quantifying the average length of infection chains in outbreaks of vaccine-preventable disease," Analytical expressions for the basic reproduction number, R0, have been +obtained in the past for a wide variety of mathematical models for infectious +disease spread, along with expressions for the expected final size of an +outbreak. However, what has so far not been studied is the average number of +infections that descend down the chains of infection begun by each of the +individuals infected in an outbreak (we refer to this quantity as the ""average +number of descendant infections"" per infectious individual, or ANDI). ANDI +includes not only the number of people that an individual directly contacts and +infects, but also the number of people that those go on to infect, and so on +until that particular chain of infection dies out. 
Quantification of ANDI has +relevance to the vaccination debate, since with ANDI one can calculate the +probability that one or more people are hospitalised (or die) from a disease +down an average chain of infection descending from an infected un-vaccinated +individual. Here we obtain estimates of ANDI using both deterministic and +stochastic modelling formalisms. With both formalisms we find that even for +relatively small community sizes and under most scenarios for R0 and initial +fraction vaccinated, ANDI can be surprisingly large when the effective +reproduction number is >1, leading to high probabilities of adverse outcomes +for one or more people down an average chain of infection in outbreaks of +diseases like measles. +" +A Framework and Comparative Analysis of Control Plane Security of SDN and Conventional Networks," Software defined networking implements the network control plane in an +external entity, rather than in each individual device as in conventional +networks. This architectural difference implies a different design for control +functions necessary for essential network properties, e.g., loop prevention and +link redundancy. We explore how such differences redefine the security +weaknesses in the SDN control plane and provide a framework for comparative +analysis which focuses on essential network properties required by typical +production networks. This enables analysis of how these properties are +delivered by the control planes of SDN and conventional networks, and to +compare security risks and mitigations. Despite the architectural difference, +we find similar, but not identical, exposures in control plane security if both +network paradigms provide the same network properties and are analyzed under +the same threat model. However, defenses vary; SDN cannot depend on edge based +filtering to protect its control plane, while this is arguably the primary +defense in conventional networks. 
Our concrete security analysis suggests that +a distributed SDN architecture that supports fault tolerance and consistency +checks is important for SDN control plane security. Our analysis methodology +may be of independent interest for future security analysis of SDN and +conventional networks. +" +Relative homological algebra via truncations," To do homological algebra with unbounded chain complexes one needs to first +find a way of constructing resolutions. Spaltenstein solved this problem for +chain complexes of R-modules by truncating further and further to the left, +resolving the pieces, and gluing back the partial resolutions. Our aim is to +give a homotopy theoretical interpretation of this procedure, which may be +extended to a relative setting. We work in an arbitrary abelian category A and +fix a class I of ""injective objects"". We show that Spaltenstein's construction +can be captured by a pair of adjoint functors between unbounded chain complexes +and towers of non-positively graded ones. This pair of adjoint functors forms +what we call a Quillen pair and the above process of truncations, partial +resolutions, and gluing, gives a meaningful way to resolve complexes in a +relative setting up to a split error term. In order to do homotopy theory, and +in particular to construct a well behaved relative derived category D(A; I), we +need more: the split error term must vanish. This is the case when I is the +class of all injective R-modules but not in general, not even for certain +classes of injectives modules over a Noetherian ring. The key property is a +relative analogue of Roos's AB4*-n axiom for abelian categories. Various +concrete examples such as Gorenstein homological algebra and purity are also +discussed. 
+" +Note: A pairwise form of the Ewald sum for non-neutral systems," Using an example of a mixed discrete-continuum representation of charges +under the periodic boundary condition, we show that the exact pairwise form of +the Ewald sum, which is well-defined even if the system is non-neutral, +provides a natural starting point for deriving unambiguous Coulomb energies +that must remove all spurious dependence on the choice of the Ewald screening +factor. +" +Model selection and model averaging in MACML-estimated MNP models," This paper provides a review of model selection and model averaging methods +for multinomial probit models estimated using the MACML approach. The proposed +approaches are partitioned into test based methods (mostly derived from the +likelihood ratio paradigm), methods based on information criteria and model +averaging methods. +Many of the approaches first have been derived for models estimated using +maximum likelihood and later adapted to the composite marginal likelihood +framework. In this paper all approaches are applied to the MACML approach for +estimation. The investigation lists advantages and disadvantages of the various +methods in terms of asymptotic properties as well as computational aspects. We +find that likelihood-ratio-type tests and information criteria have a spotty +performance when applied to MACML models and instead propose the use of an +empirical likelihood test. +Furthermore, we show that model averaging is easily adaptable to CML +estimation and has promising performance w.r.t to parameter recovery. Finally +model averaging is applied to a real world example in order to demonstrate the +feasibility of the method in real world sized problems. +" +Fuzzy Ontology-Based Sentiment Analysis of Transportation and City Feature Reviews for Safe Traveling," Traffic congestion is rapidly increasing in urban areas, particularly in mega +cities. To date, there exist a few sensor network based systems to address this +problem. 
However, these techniques are not sufficient for +monitoring an entire transportation system and delivering emergency services +when needed. These techniques require real-time data and intelligent ways to +quickly determine traffic activity from useful information. In addition, these +existing systems and websites on city transportation and travel rely on rating +scores for different factors (e.g., safety, low crime rate, cleanliness, etc.). +These rating scores are not efficient enough to deliver precise information, +whereas reviews or tweets are significant, because they help travelers and +transportation administrators to know about each aspect of the city. However, +it is difficult for travelers to read, and for transportation systems to +process, all reviews and tweets to obtain expressive sentiments regarding the +needs of the city. The optimum solution for this kind of problem is analyzing +the information available on social network platforms and performing sentiment +analysis. On the other hand, crisp ontology-based frameworks cannot extract +blurred information from tweets and reviews; therefore, they produce inadequate +results. In this regard, this paper proposes fuzzy ontology-based sentiment +analysis and SWRL rule-based decision-making to monitor transportation +activities and to make a city-feature polarity map for travelers. This system +retrieves reviews and tweets related to city features and transportation +activities. The feature opinions are extracted from these retrieved data, and +then fuzzy ontology is used to determine the transportation and city-feature +polarity. A fuzzy ontology and an intelligent system prototype are developed by +using Protégé OWL and Java, respectively. +" +On the Gap Between Strict-Saddles and True Convexity: An Omega(log d) Lower Bound for Eigenvector Approximation," We prove a \emph{query complexity} lower bound on rank-one principal +component analysis (PCA). 
We consider an oracle model where, given a symmetric
+matrix $M \in \mathbb{R}^{d \times d}$, an algorithm is allowed to make $T$
+\emph{exact} queries of the form $w^{(i)} = Mv^{(i)}$ for $i \in
+\{1,\dots,T\}$, where $v^{(i)}$ is drawn from a distribution which depends
+arbitrarily on the past queries and measurements $\{v^{(j)},w^{(j)}\}_{1 \le j
+\le i-1}$. We show that for a small constant $\epsilon$, any adaptive,
+randomized algorithm which can find a unit vector $\widehat{v}$ for which
+$\widehat{v}^{\top}M\widehat{v} \ge (1-\epsilon)\|M\|$, with even small
+probability, must make $T = \Omega(\log d)$ queries. In addition to settling a
+widely-held folk conjecture, this bound demonstrates a fundamental gap between
+convex optimization and ""strict-saddle"" non-convex optimization of which PCA is
+a canonical example: in the former, first-order methods can have dimension-free
+iteration complexity, whereas in PCA, the iteration complexity of
+gradient-based methods must necessarily grow with the dimension. Our argument
+proceeds via a reduction to estimating the rank-one spike in a deformed Wigner
+model. We establish lower bounds for this model by developing a ""truncated""
+analogue of the $\chi^2$ Bayes-risk lower bound of Chen et al.
+"
+A Predictive Approach Using Deep Feature Learning for Electronic Medical Records: A Comparative Study," The massive amount of electronic medical records accumulating from patients
+and populations motivates clinicians and data scientists to collaborate on
+advanced analytics to extract knowledge that is essential for delivering the
+personalized insights needed by patients, clinicians, providers, scientists,
+and health policy makers. In this paper, we propose a new
+predictive approach based on feature representation using deep feature learning
+and word embedding techniques. 
Our method uses different deep architectures for
+feature representation in higher-level abstraction to obtain effective and more
+robust features from EMRs, and then builds prediction models on top of them.
+Our approach is particularly useful when unlabeled data is abundant whereas
+labeled data is scarce. We investigate the performance of representation
+learning through a supervised approach. First, we apply our method on a small
+dataset related to a specific precision medicine problem, which focuses on
+prediction of left ventricular mass indexed to body surface area (LVMI) as an
+indicator of heart damage risk in a vulnerable demographic subgroup
+(African-Americans). Then we use two large datasets from the eICU collaborative
+research database to predict the length of stay in Cardiac-ICU and Neuro-ICU
+based on high dimensional features. Finally, we provide a comparative study and
+show that our predictive approach leads to better results in comparison with
+others.
+"
+Gelfand numbers related to structured sparsity and Besov space embeddings with small mixed smoothness," We consider the problem of determining the asymptotic order of the Gelfand
+numbers of mixed-(quasi-)norm embeddings $\ell^b_p(\ell^d_q) \hookrightarrow
+\ell^b_r(\ell^d_u)$ given that $p \leq r$ and $q \leq u$, with emphasis on
+cases with $p\leq 1$ and/or $q\leq 1$. These cases turn out to be related to
+structured sparsity. We obtain sharp bounds in a number of interesting
+parameter constellations. Our new matching bounds for the Gelfand numbers of
+the embeddings of $\ell_1^b(\ell_2^d)$ and $\ell_2^b(\ell_1^d)$ into
+$\ell_2^b(\ell_2^d)$ imply optimality assertions for the recovery of
+block-sparse and sparse-in-levels vectors, respectively. In addition, we apply
+the sharp estimates for $\ell^b_p(\ell^d_q)$-spaces to obtain new two-sided
+estimates for the Gelfand numbers of multivariate Besov space embeddings in
+regimes of small mixed smoothness. 
It turns out that in some particular cases +these estimates show the same asymptotic behaviour as in the univariate +situation. In the remaining cases they differ at most by a $\log\log$ factor +from the univariate bound. +" +"Dust attenuation, bulge formation and inside-out cessation of star-formation in Star-Forming Main Sequence galaxies at z~2"," We derive two-dimensional dust attenuation maps at $\sim1~\mathrm{kpc}$ +resolution from the UV continuum for ten galaxies on the $z\sim2$ Star-Forming +Main Sequence (SFMS). Comparison with IR data shows that 9 out of 10 galaxies +do not require further obscuration in addition to the UV-based correction, +though our sample does not include the most heavily obscured, massive galaxies. +The individual rest-frame $V$-band dust attenuation (A$_{\rm V}$) radial +profiles scatter around an average profile that gently decreases from $\sim1.8$ +mag in the center down to $\sim0.6$ mag at $\sim3-4$ half-mass radii. We use +these maps to correct UV- and H$\alpha$-based star-formation rates (SFRs), +which agree with each other. At masses $<10^{11}~M_{\rm sun}$, the +dust-corrected specific SFR (sSFR) profiles are on average radially constant at +a mass-doubling timescale of $\sim300~\mathrm{Myr}$, pointing at a synchronous +growth of bulge and disk components. At masses $>10^{11}~M_{\rm sun}$, the sSFR +profiles are typically centrally-suppressed by a factor of $\sim10$ relative to +the galaxy outskirts. With total central obscuration disfavored, this indicates +that at least a fraction of massive $z\sim2$ SFMS galaxies have started their +inside-out star-formation quenching that will move them to the quenched +sequence. 
In combination with other observations, galaxies above and below the
+ridge of the SFMS relation have respectively centrally-enhanced and
+centrally-suppressed sSFRs relative to their outskirts, supporting a picture
+where bulges are built due to gas `compaction' that leads to a high central SFR
+as galaxies move towards the upper envelope of SFMS.
+"
+Shannon Shakes Hands with Chernoff: Big Data Viewpoint On Channel Information Measures," Shannon entropy is the most crucial foundation of Information Theory, which
+has been proven to be effective in many fields such as communications. Renyi
+entropy and Chernoff information are two other popular measures of information
+with wide applications. The mutual information is effective for measuring the
+channel information because it reflects the relation between output variables
+and input variables. In this paper, we reexamine these channel information
+measures from a big data viewpoint by means of the ACE algorithm. The
+simulation results show that the decomposition results of Shannon and Chernoff
+mutual information with respect to channel parameters are almost the same. In
+this sense, Shannon shakes hands with Chernoff since they are different
+measures of the same information quantity. We also propose a conjecture that
+there is a nature of channel information which is decided only by the channel
+parameters.
+"
+A Morse theoretic description of the Goresky-Hingston product," The Goresky-Hingston coproduct was first introduced by D. Sullivan and later
+extended by M. Goresky and N. Hingston. In this article we give a Morse
+theoretic description of the coproduct. Using this description we prove the
+homotopy invariance property of the coproduct. We describe a connection between
+our Morse theoretic coproduct and a coproduct on the Floer homology of the
+cotangent bundle. 
+" +SoAx: A generic C++ Structure of Arrays for handling Particles in HPC Codes," The numerical study of physical problems often require integrating the +dynamics of a large number of particles evolving according to a given set of +equations. Particles are characterized by the information they are carrying +such as an identity, a position other. There are generally speaking two +different possibilities for handling particles in high performance computing +(HPC) codes. The concept of an Array of Structures (AoS) is in the spirit of +the object-oriented programming (OOP) paradigm in that the particle information +is implemented as a structure. Here, an object (realization of the structure) +represents one particle and a set of many particles is stored in an array. In +contrast, using the concept of a Structure of Arrays (SoA), a single structure +holds several arrays each representing one property (such as the identity) of +the whole set of particles. +The AoS approach is often implemented in HPC codes due to its handiness and +flexibility. For a class of problems, however, it is know that the performance +of SoA is much better than that of AoS. We confirm this observation for our +particle problem. Using a benchmark we show that on modern Intel Xeon +processors the SoA implementation is typically several times faster than the +AoS one. On Intel's MIC co-processors the performance gap even attains a factor +of ten. The same is true for GPU computing, using both computational and +multi-purpose GPUs. +Combining performance and handiness, we present the library SoAx that has +optimal performance (on CPUs, MICs, and GPUs) while providing the same +handiness as AoS. For this, SoAx uses modern C++ design techniques such +template meta programming that allows to automatically generate code for user +defined heterogeneous data structures. 
+" +Multi-scenario deep learning for multi-speaker source separation," Research in deep learning for multi-speaker source separation has received a +boost in the last years. However, most studies are restricted to mixtures of a +specific number of speakers, called a specific scenario. While some works +included experiments for different scenarios, research towards combining data +of different scenarios or creating a single model for multiple scenarios have +been very rare. In this work it is shown that data of a specific scenario is +relevant for solving another scenario. Furthermore, it is concluded that a +single model, trained on different scenarios is capable of matching performance +of scenario specific models. +" +Strongly mixed random errors in Mann's iteration algorithm for a contractive real function," This work deals with the Mann's stochastic iteration algorithm under strong +mixing random errors. We establish the Fuk-Nagaev's inequalities that enable us +to prove the almost complete convergence with its corresponding rate of +convergence. Moreover, these inequalities give us the possibility of +constructing a confidence interval for the unique fixed point. Finally, to +check the feasibility and validity of our theoretical results, we consider some +numerical examples, namely a classical example from astronomy. +" +Attribution Modeling Increases Efficiency of Bidding in Display Advertising," Predicting click and conversion probabilities when bidding on ad exchanges is +at the core of the programmatic advertising industry. Two separated lines of +previous works respectively address i) the prediction of user conversion +probability and ii) the attribution of these conversions to advertising events +(such as clicks) after the fact. We argue that attribution modeling improves +the efficiency of the bidding policy in the context of performance advertising. +Firstly we explain the inefficiency of the standard bidding policy with respect +to attribution. 
Secondly we learn and utilize an attribution model in the +bidder itself and show how it modifies the average bid after a click. Finally +we produce evidence of the effectiveness of the proposed method on both offline +and online experiments with data spanning several weeks of real traffic from +Criteo, a leader in performance advertising. +" +Identifying high betweenness centrality nodes in large social networks," This paper proposes an alternative way to identify nodes with high +betweenness centrality. It introduces a new metric, k-path centrality, and a +randomized algorithm for estimating it, and shows empirically that nodes with +high k-path centrality have high node betweenness centrality. The randomized +algorithm runs in time $O(\kappa^{3}n^{2-2\alpha}\log n)$ and outputs, for each +vertex v, an estimate of its k-path centrality up to additive error of $\pm +n^{1/2+ \alpha}$ with probability $1-1/n^2$. Experimental evaluations on real +and synthetic social networks show improved accuracy in detecting high +betweenness centrality nodes and significantly reduced execution time when +compared with existing randomized algorithms. +" +A functional limit theorem for the sine-process," The main result of this paper is a functional limit theorem for the +sine-process. In particular, we study the limit distribution, in the space of +trajectories, for the number of particles in a growing interval. The +sine-process has the Kolmogorov property and satisfies the Central Limit +Theorem, but our functional limit theorem is very different from the Donsker +Invariance Principle. We show that the time integral of our process can be +approximated by the sum of a linear Gaussian process and independent Gaussian +fluctuations whose covariance matrix is computed explicitly. We interpret these +results in terms of the Gaussian Free Field convergence for the random matrix +models. 
The proof relies on a general form of the multidimensional Central
+Limit Theorem under the sine-process for linear statistics of two types: those
+having growing variance and those with bounded variance corresponding to
+observables of Sobolev regularity $1/2$.
+"
+Multi-scale Lipschitz percolation of increasing events for Poisson random walks," Consider the graph induced by $\mathbb{Z}^d$, equipped with uniformly
+elliptic random conductances. At time $0$, place a Poisson point process of
+particles on $\mathbb{Z}^d$ and let them perform independent simple random
+walks. Tessellate the graph into cubes indexed by $i\in\mathbb{Z}^d$ and
+tessellate time into intervals indexed by $\tau$. Given a local event
+$E(i,\tau)$ that depends only on the particles inside the space time region
+given by the cube $i$ and the time interval $\tau$, we prove the existence of a
+Lipschitz connected surface of cells $(i,\tau)$ that separates the origin from
+infinity on which $E(i,\tau)$ holds. This gives a directly applicable and
+robust framework for proving results in this setting that need a multi-scale
+argument. For example, this allows us to prove that an infection spreads with
+positive speed among the particles.
+"
+The Planck numbers and the essence of gravitation: phenomenology," We introduce a phenomenological understanding of the electromagnetic
+component of the physical vacuum, the EM vacuum, as a basic medium for all
+masses of the expanding Universe, and ""Casimir polarization"" of this medium
+arising in the vicinity of any material object in the Universe as a result of
+conjugation of the electric field components of the EM vacuum on both sides
+(""external"" and ""internal"") of the atomic nucleus boundary of each mass with
+the vacuum. 
It is
+shown that the gravitational attraction of two material objects in accordance
+with Newton's law of gravity arises as a result of overlapping of the domains
+of the EM vacuum Casimir polarization created by atomic nuclei of the objects,
+taking into account the long-range gravitational influence of all masses of the
+Universe on each nucleus of these objects (Mach's idea). Newton's law of
+gravitational attraction between two bodies is generalized to the case of
+gravitational interaction of a system of bodies when the center of mass of the
+pair of bodies is shifted relative to the center of mass of the system. The
+unique smallness of gravitational interactions as compared with the fundamental
+nuclear (strong and weak) and electromagnetic ones is determined by the ratio
+of the characteristic size of the domain of EM vacuum Casimir polarization in
+the vicinity of atomic nuclei to the Hubble radius of the Universe.
+"
+Low-Dimensional Spatial Embedding Method for Shape Uncertainty Quantification in Acoustic Scattering," This paper introduces a novel boundary integral approach to shape uncertainty
+quantification for the Helmholtz scattering problem in the framework of the
+so-called parametric method. The key idea is to construct an integration grid
+whose associated weight function encompasses the irregularities and
+nonsmoothness imposed by the random boundary. Thus, the solution can be
+evaluated accurately with a relatively low number of grid points. The
+integration grid is obtained by employing a low-dimensional spatial embedding
+using the coarea formula. The proposed method can handle large variation as
+well as non-smoothness of the random boundary. For ease of presentation the
+theory is restricted to star-shaped obstacles in a low-dimensional setting.
+Higher spatial and parametric dimensional cases are discussed, though not
+extensively explored, in the current study. 
+" +Updates on the background estimates for the X-IFU instrument onboard of the ATHENA mission," ATHENA, with a launch foreseen in 2028 towards the L2 orbit, addresses the +science theme ""The Hot and Energetic Universe"", coupling a high-performance +X-ray Telescope with two complementary focal-plane instruments. One of these, +the X-ray Integral Field Unit (X-IFU) is a TES based kilo-pixel array providing +spatially resolved high-resolution spectroscopy (2.5 eV at 6 keV) over a 5 +arcmin FoV. The background for this kind of detectors accounts for several +components: the diffuse Cosmic X-ray Background, the low energy particles +(<~100 keV) focalized by the mirrors and reaching the detector from inside the +field of view, and the high energy particles (>~100 MeV) crossing the +spacecraft and reaching the focal plane from every direction. Each one of these +components is under study to reduce their impact on the instrumental +performances. This task is particularly challenging, given the lack of data on +the background of X-ray detectors in L2, the uncertainties on the particle +environment to be expected in such orbit, and the reliability of the models +used in the Monte Carlo background computations. As a consequence, the +activities addressed by the group range from the reanalysis of the data of +previous missions like XMM-Newton, to the characterization of the L2 +environment by data analysis of the particle monitors onboard of satellites +present in the Earth magnetotail, to the characterization of solar events and +their occurrence, and to the validation of the physical models involved in the +Monte Carlo simulations. All these activities will allow to develop a set of +reliable simulations to predict, analyze and find effective solutions to reduce +the particle background experienced by the X-IFU, ultimately satisfying the +scientific requirement that enables the science of ATHENA. 
While the activities +are still ongoing, we present here some preliminary results already obtained by +the group. +" +Galaxy Zoo: star-formation versus spiral arm number," Spiral arms are common features in low-redshift disc galaxies, and are +prominent sites of star-formation and dust obscuration. However, spiral +structure can take many forms: from galaxies displaying two strong `grand +design' arms, to those with many `flocculent' arms. We investigate how these +different arm types are related to a galaxy's star-formation and gas properties +by making use of visual spiral arm number measurements from Galaxy Zoo 2. We +combine UV and mid-IR photometry from GALEX and WISE to measure the rates and +relative fractions of obscured and unobscured star formation in a sample of +low-redshift SDSS spirals. Total star formation rate has little dependence on +spiral arm multiplicity, but two-armed spirals convert their gas to stars more +efficiently. We find significant differences in the fraction of obscured +star-formation: an additional $\sim 10$ per cent of star-formation in two-armed +galaxies is identified via mid-IR dust emission, compared to that in many-armed +galaxies. The latter are also significantly offset below the IRX-$\beta$ +relation for low-redshift star-forming galaxies. We present several +explanations for these differences versus arm number: variations in the spatial +distribution, sizes or clearing timescales of star-forming regions (i.e., +molecular clouds), or contrasting recent star-formation histories. +" +Dense Associative Memory is Robust to Adversarial Inputs," Deep neural networks (DNN) trained in a supervised way suffer from two known +problems. First, the minima of the objective function used in learning +correspond to data points (also known as rubbish examples or fooling images) +that lack semantic similarity with the training data. 
Second, a clean input can
+be changed by a small, and often imperceptible to human vision, perturbation,
+so that the resulting deformed input is misclassified by the network. These
+findings emphasize the differences between the ways DNN and humans classify
+patterns, and raise the question of designing learning algorithms that mimic
+human perception more accurately than the existing methods.
+Our paper examines these questions within the framework of Dense Associative
+Memory (DAM) models. These models are defined by the energy function, with
+higher order (higher than quadratic) interactions between the neurons. We show
+that in the limit when the power of the interaction vertex in the energy
+function is sufficiently large, these models have the following three
+properties. First, the minima of the objective function are free from rubbish
+images, so that each minimum is a semantically meaningful pattern. Second,
+artificial patterns poised precisely at the decision boundary look ambiguous to
+human subjects and share aspects of both classes that are separated by that
+decision boundary. Third, adversarial images constructed by models with small
+power of the interaction vertex, which are equivalent to DNN with rectified
+linear units (ReLU), fail to transfer to and fool the models with higher order
+interactions. This opens up the possibility of using higher order models for
+detecting and stopping malicious adversarial attacks. The presented results
+suggest that DAM with higher order energy functions are closer to human visual
+perception than DNN with ReLUs.
+"
+The Nature of Turbulence in the LITTLE THINGS Dwarf Irregular Galaxies," We present probability density functions and higher order (skewness and
+kurtosis) analyses of the galaxy-wide and spatially-resolved HI column density
+distributions in the LITTLE THINGS sample of dwarf irregular galaxies. This
+analysis follows that of Burkhart et al. (2010) for the Small Magellanic Cloud. 
+
About 60% of our sample have galaxy-wide values of kurtosis that are similar to
+that found for the Small Magellanic Cloud, with a range up to much higher
+values, and kurtosis increases with integrated star formation rate. Kurtosis
+and skewness were calculated for radial annuli and for a grid of 32 pixel X 32
+pixel kernels across each galaxy. For most galaxies, kurtosis correlates with
+skewness. For about half of the galaxies, there is a trend of increasing
+kurtosis with radius. The range of kurtosis and skewness values is modeled by
+small variations in the Mach number close to the sonic limit and by conversion
+of HI to molecules at high column density. The maximum HI column densities
+decrease with increasing radius in a way that suggests molecules are forming in
+the weak field limit, where H_2 formation balances photodissociation in
+optically thin gas at the edges of clouds.
+"
+Degeneration of Trigonal Curves and Solutions of the KP-Hierarchy," It is known that soliton solutions of the KP-hierarchy correspond to
+singular rational curves with only ordinary double points. In this paper we
+study the degeneration of theta function solutions corresponding to certain
+trigonal curves. We show that, when the curves degenerate to singular rational
+curves with only ordinary triple points, the solutions tend to some
+intermediate solutions between solitons and rational solutions. They are
+considered as certain limits of solitons. 
The Sato Grassmannian is extensively
+used here to study the degeneration of solutions, since it directly connects
+solutions of the KP-hierarchy to the defining equations of algebraic curves. We
+define a class of solutions in the Wronskian form which contains soliton
+solutions as a subclass and prove that, using the Sato Grassmannian, the
+degenerate trigonal solutions are connected to those solutions by certain gauge
+transformations.
+"
+Blow-up solutions for a Kirchhoff type elliptic equation with trapping potential," We study a generalized Kirchhoff type equation with trapping potential. The
+existence and blow-up behavior of solutions with normalized L2-norm for this
+problem are discussed.
+"
+Representing Sentences as Low-Rank Subspaces," Sentences are important semantic units of natural language. A generic,
+distributional representation of sentences that can capture the latent
+semantics is beneficial to multiple downstream applications. We observe a
+simple geometry of sentences -- the word representations of a given sentence
+(on average 10.23 words in all SemEval datasets with a standard deviation 4.84)
+roughly lie in a low-rank subspace (roughly, rank 4). Motivated by this
+observation, we represent a sentence by the low-rank subspace spanned by its
+word vectors. Such an unsupervised representation is empirically validated via
+semantic textual similarity tasks on 19 different datasets, where it
+outperforms the sophisticated neural network models, including skip-thought
+vectors, by 15% on average.
+"
+Kinetic Energy Density Functionals by Axiomatic Approach," An axiomatic approach is herein used to determine the physically acceptable
+forms for general $D$-dimensional kinetic energy density functionals (KEDF).
+The resulting expansion captures most of the known forms of one-point KEDFs. 
By
+statistically training the KEDF forms on a model problem of non-interacting
+kinetic energy in 1D (6 terms only), the mean relative accuracy for 1000
+randomly generated potentials is found to be better than the standard KEDF by
+several orders of magnitude. The accuracy improves with the number of occupied
+states and was found to be better than $10^{-4}$ for a system with four
+occupied states. Furthermore, we show that free fitting of the coefficients
+associated with known KEDFs approaches the exact analytic values. The
+presented approach can open a new route to search for physically acceptable
+kinetic energy density functionals and provide an essential step towards more
+accurate large-scale orbital free density functional theory calculations.
+"
+Locating Power Flow Solution Space Boundaries: A Numerical Polynomial Homotopy Approach," The solution space of any set of power flow equations may contain different
+numbers of real-valued solutions. The boundaries that separate these regions
+are referred to as power flow solution space boundaries. Knowledge of these
+boundaries is important as they provide a measure for voltage stability.
+Traditionally, continuation based methods have been employed to compute these
+boundaries on the basis of initial guesses for the solution. However, with
+rapid growth of renewable energy sources these boundaries will be increasingly
+affected by variable parameters such as penetration levels, locations of the
+renewable sources, and voltage set-points, making it difficult to generate an
+initial guess that can guarantee all feasible solutions for the power flow
+problem. In this paper we solve this problem by applying a numerical polynomial
+homotopy based continuation method. The proposed method is guaranteed to find
+all solution boundaries within a given parameter space up to a chosen level of
+discretization, independent of any initial guess. 
Power system operators can
+use this computational tool conveniently to plan the penetration levels of
+renewable sources at different buses. We illustrate the proposed method through
+simulations on 3-bus and 10-bus power system examples with renewable
+generation.
+"
+First principles calculations of the interface properties of amorphous-Al2O3/MoS2 under non-strain and biaxial strain conditions," Al2O3 is a potential dielectric material for metal-oxide-semiconductor (MOS)
+devices. Al2O3 films deposited on semiconductors are usually amorphous due to
+lattice mismatch. Compared to two-dimensional graphene, MoS2 is a typical
+semiconductor and therefore has more extensive applications. The
+amorphous-Al2O3/MoS2 (a-Al2O3/MoS2) interface has attracted attention
+because of its unique properties. In this paper, the interface behaviors of
+a-Al2O3/MoS2 under non-strain and biaxial strain are investigated by first
+principles calculations based on density functional theory (DFT). First of all,
+the generation process of the a-Al2O3 sample is described, which is calculated
+by molecular dynamics and geometric optimization. Then, we introduce the band
+alignment method, and calculate the band offset of the a-Al2O3/MoS2 interface.
+It is found that the valence band offset (VBO) and conduction band offset (CBO)
+change with the number of MoS2 layers. The dependence of leakage current on the
+band offset is also illustrated. At last, the band structure of monolayer MoS2
+under biaxial strain is discussed. The biaxial strain is set in the range from
+-6% to 6% with an interval of 2%. The impact of the biaxial strain on the band
+alignment is investigated.
+"
+On cohomological Hall algebras of quivers : Yangians," We consider the cohomological Hall algebra Y of a Lagrangian substack of the
+moduli stack of representations of the preprojective algebra of an arbitrary
+quiver Q, and its actions on the cohomology of quiver varieties. 
We conjecture
+that Y is equal, after a suitable extension of scalars, to the Yangian
+introduced by Maulik and Okounkov, and we construct an embedding of Y in the
+Yangian, intertwining the respective actions of both algebras on the cohomology
+of quiver varieties.
+"
+Are Continuum Predictions of Clustering Chaotic?," Gas-solid multiphase flows are prone to develop an instability known as
+clustering. Two-fluid models, which treat the particulate phase as a continuum,
+are known to reproduce the qualitative features of this instability, producing
+highly-dynamic, spatiotemporal patterns. However, it is unknown whether such
+simulations are truly aperiodic or exhibit a type of complex periodic behavior.
+By showing that the system possesses a sensitive dependence on initial
+conditions and a positive largest Lyapunov exponent, $\lambda_1 \approx
+1/\tau$, we provide a tentative answer: continuum predictions of clustering are
+chaotic. We further demonstrate that the chaotic behavior is dimensionally
+dependent, a conclusion which unifies previous results and strongly suggests
+that the chaotic behavior is not a result of the fundamental kinematic
+instability, but of the secondary (inherently multidimensional) instability.
+"
+Easing Tensions with Quartessence," Tensions between cosmic microwave background (CMB) observations and the
+growth of the large-scale structure (LSS) inferred from late-time probes pose a
+serious challenge to the concordance $\Lambda$CDM cosmological model.
+State-of-the-art CMB data from the Planck satellite predicts a higher rate of
+structure growth than that preferred by low-redshift observables. Such tension
+has hitherto eluded conclusive explanations in terms of straightforward
+modifications to $\Lambda$CDM, e.g. the inclusion of massive neutrinos or a
+dynamical dark energy component. Here, we investigate `quartessence' models,
+where a single dark component mimics both dark matter and dark energy. 
We show +that such models greatly alleviate the tension between high and low redshift +observations, thanks to the non-vanishing sound speed of quartessence that +inhibits structure growth at late times on scales smaller than its +corresponding Jeans' length. In particular, the $3.4\sigma$ tension between CMB +and LSS observables is thoroughly reabsorbed. For this reason, we argue that +quartessence deserves further investigation and may lead to a deeper +understanding of the physics of the dark Universe. +" +Stable high-pressure phases in the H-S system determined by chemically reacting hydrogen and sulfur," Synchrotron X-ray diffraction and Raman spectroscopy have been used to study +chemical reactions of molecular hydrogen (H2) with sulfur (S) at high +pressures. We find theoretically predicted Cccm and Im-3m H3S to be the +reaction products at 50 and 140 GPa, respectively. Im-3m H3S is a stable +crystalline phase above 140 GPa and it transforms to R3m H3S on pressure +release below 140 GPa. The latter phase is (meta)stable down to at least 70 GPa +where it transforms to Cccm H3S upon annealing (T<1300 K) to overcome the +kinetic hindrance. Cccm H3S has an extended structure with symmetric hydrogen +bonds at 50 GPa and upon decompression it experiences a transformation to a +molecular mixed H2S-H2 structure below 40 GPa without any apparent change in +the crystal symmetry. +" +Learning Neurosymbolic Generative Models via Program Synthesis," Significant strides have been made toward designing better generative models +in recent years. Despite this progress, however, state-of-the-art approaches +are still largely unable to capture complex global structure in data. For +example, images of buildings typically contain spatial patterns such as windows +repeating at regular intervals; state-of-the-art generative methods can't +easily reproduce these structures. 
We propose to address this problem by +incorporating programs representing global structure into the generative +model---e.g., a 2D for-loop may represent a configuration of windows. +Furthermore, we propose a framework for learning these models by leveraging +program synthesis to generate training data. On both synthetic and real-world +data, we demonstrate that our approach is substantially better than the +state-of-the-art at both generating and completing images that contain global +structure. +" +Collapsibility of marginal models for categorical data," We consider marginal log-linear models for parameterizing distributions on +multidimensional contingency tables. These models generalize ordinary +log-linear and multivariate logistic models, besides several others. First, we +obtain some characteristic properties of marginal log-linear parameters. Then +we define collapsibility and strict collapsibility of these parameters in a +general sense. Several necessary and sufficient conditions for collapsibility +and strict collapsibility are derived using the technique of Möbius +inversion. These include results for an arbitrary set of marginal log-linear +parameters having some common effects. The connections of collapsibility and +strict collapsibility to various forms of independence of the variables are +discussed. Finally, we establish a result on the relationship between +parameters with the same effect but different margins, and use it to +demonstrate smoothness of marginal log-linear models under collapsibility +conditions thereby obtaining a curved exponential family. +" +Tunable viscosity modification with diluted particles: When particles decrease the viscosity of complex fluids," While spherical particles are the most studied viscosity modifiers, they are +well known only to increase viscosities, in particular at low concentrations. +Extended studies and theories on non-spherical particles find a more +complicated behavior, but still a steady increase. 
Involving platelets in
+combination with complex fluids displays an even more complex scenario that we
+analyze experimentally and theoretically as a function of platelet diameter, to
+find the underlying concepts. Using a broad toolbox of different techniques we
+were able to decrease the viscosity of crude oils although solid particles were
+added. This apparent contradiction could lead to a wider range of applications.
+"
+Research Portfolio Analysis and Topic Prominence," Stakeholders in the science system need to decide where to place their bets.
+Example questions include: Which areas of research should get more funding? Who
+should we hire? Which projects should we abandon and which new projects should
+we start? Making informed choices requires knowledge about these research
+options. Unfortunately, to date research portfolio options have not been
+defined in a consistent, transparent and relevant manner. Furthermore, we don't
+know how to define demand for these options. In this article, we address the
+issues of consistency, transparency, relevance and demand by using a model of
+science consisting of 91,726 topics (or research options) that contain over 58
+million documents. We present a new indicator of topic prominence - a measure
+of visibility, momentum and, ultimately, demand. We assign over $203 billion of
+project-level funding data from STAR METRICS to individual topics in science,
+and show that the indicator of topic prominence explains over one-third of the
+variance in current (or future) funding by topic. We also show that highly
+prominent topics receive far more funding per researcher than topics that are
+not prominent. Implications of these results for research planning and
+portfolio analysis by institutions and researchers are emphasized. 
+"
+Un Crit{è}re Simple," In this short note, we mimic the proof of the simplicity of the theory ACFA
+of generic difference fields in order to provide a criterion, valid for certain
+theories of pure fields and fields equipped with operators, which shows that a
+complete theory is simple whenever its definable and algebraic closures are
+controlled by an underlying stable theory.
+"
+Orbits of irreducible binary forms over GF$(p)$," In this note I give a formula for calculating the number of orbits of
+irreducible binary forms of degree $n$ over GF$(p)$ under the action of
+GL$(2,p)$. This formula has applications to the classification of class two
+groups of exponent $p$ with derived groups of order $p^2$.
+"
+A Survey of Entorhinal Grid Cell Properties," About a decade ago grid cells were discovered in the medial entorhinal cortex
+of the rat. Their peculiar firing patterns, which correlate with periodic locations
+in the environment, led to the early hypothesis that grid cells may provide some
+form of metric for space. Subsequent research has since uncovered a wealth of
+new insights into the characteristics of grid cells and their neural
+neighborhood, the parahippocampal-hippocampal region, calling for a revision
+and refinement of earlier grid cell models. This survey paper aims to provide a
+comprehensive summary of grid cell research published in the past decade. It
+focuses on the functional characteristics of grid cells such as the influence
+of external cues or the alignment to environmental geometry, but also provides
+a basic overview of the underlying neural substrate.
+"
+An Automata-based Abstract Semantics for String Manipulation Languages," In recent years, dynamic languages, such as JavaScript or Python, have seen
+a significant increase in usage in a wide range of fields and applications.
+Their tricky and misunderstood behaviors pose a hard challenge for static
+analysis of these programming languages. 
A key aspect of any dynamic language
+program is the multiple usage of strings, since they can be implicitly
+converted to a value of another type, transformed by string-to-code primitives or
+used to access an object-property. Unfortunately, string analyses for dynamic
+languages still lack precision and do not take into account some important
+string features. Moreover, string obfuscation is very popular in the context of
+dynamic language malicious code, for example, to hide code information inside
+strings and then to dynamically transform strings into executable code. In this
+scenario, more precise string analyses become a necessity. This paper proposes
+a new semantics for string analysis, taking a first step toward handling dynamic
+language string features.
+"
+Context encoders as a simple but powerful extension of word2vec," With a simple architecture and the ability to learn meaningful word
+embeddings efficiently from texts containing billions of words, word2vec
+remains one of the most popular neural language models used today. However, as
+only a single embedding is learned for every word in the vocabulary, the model
+fails to optimally represent words with multiple meanings. Additionally, it is
+not possible to create embeddings for new (out-of-vocabulary) words on the
+spot. Based on an intuitive interpretation of the continuous bag-of-words
+(CBOW) word2vec model's negative sampling training objective in terms of
+predicting context based similarities, we motivate an extension of the model we
+call context encoders (ConEc). By multiplying the matrix of trained word2vec
+embeddings with a word's average context vector, out-of-vocabulary (OOV)
+embeddings and representations for a word with multiple meanings can be created
+based on the word's local contexts. The benefits of this approach are
+illustrated by using these word embeddings as features in the CoNLL 2003 named
+entity recognition (NER) task. 
+" +Sandwiches Missing Two Ingredients of Order Four," For a set ${\cal F}$ of graphs, an instance of the ${\cal F}$-{\sc free +Sandwich Problem} is a pair $(G_1,G_2)$ consisting of two graphs $G_1$ and +$G_2$ with the same vertex set such that $G_1$ is a subgraph of $G_2$, and the +task is to determine an ${\cal F}$-free graph $G$ containing $G_1$ and +contained in $G_2$, or to decide that such a graph does not exist. Initially +motivated by the graph sandwich problem for trivially perfect graphs, which are +the $\{ P_4,C_4\}$-free graphs, we study the complexity of the ${\cal F}$-{\sc +free Sandwich Problem} for sets ${\cal F}$ containing two non-isomorphic graphs +of order four. We show that if ${\cal F}$ is one of the sets $\left\{ {\rm +diamond},K_4\right\}$, $\left\{ {\rm diamond},C_4\right\}$, $\left\{ {\rm +diamond},{\rm paw}\right\}$, $\left\{ K_4,\overline{K_4}\right\}$, $\left\{ +P_4,C_4\right\}$, $\left\{ P_4,\overline{\rm claw}\right\}$, $\left\{ +P_4,\overline{\rm paw}\right\}$, $\left\{ P_4,\overline{\rm diamond}\right\}$, +$\left\{ {\rm paw},C_4\right\}$, $\left\{ {\rm paw},{\rm claw}\right\}$, +$\left\{ {\rm paw},\overline{\rm claw}\right\}$, $\left\{ {\rm +paw},\overline{\rm paw}\right\}$, $\left\{ C_4,\overline{C_4}\right\}$, +$\left\{ {\rm claw},\overline{\rm claw}\right\}$, and $\left\{ {\rm +claw},\overline{C_4}\right\}$, then the ${\cal F}$-{\sc free Sandwich Problem} +can be solved in polynomial time, and, if ${\cal F}$ is one of the sets +$\left\{ C_4,K_4\right\}$, $\left\{ {\rm paw},K_4\right\}$, $\left\{ {\rm +paw},\overline{K_4}\right\}$, $\left\{ {\rm paw},\overline{C_4}\right\}$, +$\left\{ {\rm diamond},\overline{C_4}\right\}$, $\left\{ {\rm +paw},\overline{\rm diamond}\right\}$, and $\left\{ {\rm diamond},\overline{\rm +diamond}\right\}$, then the decision version of the ${\cal F}$-{\sc free +Sandwich Problem} is NP-complete. 
+"
+Butterfly velocity and bulk causal structure," The butterfly velocity was recently proposed as a characteristic velocity of
+chaos propagation in a local system. Compared to the Lieb-Robinson velocity
+that bounds the propagation speed of all perturbations, the butterfly velocity,
+studied in thermal ensembles, is an ""effective"" Lieb-Robinson velocity for a
+subspace of the Hilbert space defined by the microcanonical ensemble. In this
+paper, we generalize the concept of butterfly velocity beyond the thermal case
+to a large class of other subspaces. Based on holographic duality, we consider
+the code subspace of low energy excitations on a classical background geometry.
+Using local reconstruction of bulk operators, we prove a general relation
+between the boundary butterfly velocities (of different operators) and the bulk
+causal structure. Our result has implications in both directions of the
+bulk-boundary correspondence. Starting from a boundary theory with a given
+Lieb-Robinson velocity, our result determines an upper bound of the bulk light
+cone starting from a given point. Starting from a bulk space-time geometry, the
+butterfly velocity can be explicitly calculated for all operators that are the
+local reconstructions of bulk local operators. If the bulk geometry satisfies
+the Einstein equation and the null energy condition, for rotation symmetric
+geometries we prove that infrared operators always have a slower butterfly
+velocity than the ultraviolet one. For asymptotically AdS geometries, this also
+implies that the butterfly velocities of all operators are upper bounded by the
+speed of light. We further prove that the butterfly velocity is equal to the
+speed of light if the causal wedge of the boundary region coincides with its
+entanglement wedge. Finally, we discuss the implication of our result for
+geometries that are not asymptotically AdS, and in particular, obtain
+constraints that must be satisfied by a dual theory of flat space gravity. 
+" +Correlated atomic wires on substrates. II. Application to Hubbard wires," In the first part of our theoretical study of correlated atomic wires on +substrates, we introduced lattice models for a one-dimensional quantum wire on +a three-dimensional substrate and their approximation by quasi-one-dimensional +effective ladder models [arXiv:1704.07350]. In this second part, we apply this +approach to the case of a correlated wire with a Hubbard-type electron-electron +repulsion deposited on an insulating substrate. The ground-state and spectral +properties are investigated numerically using the density-matrix +renormalization group method and quantum Monte Carlo simulations. As a function +of the model parameters, we observe various phases with quasi-one-dimensional +low-energy excitations localized in the wire, namely paramagnetic Mott +insulators, Luttinger liquids, and spin-$1/2$ Heisenberg chains. The validity +of the effective ladder models is assessed by studying the convergence with the +number of legs and comparing to the full three-dimensional model. We find that +narrow ladder models accurately reproduce the quasi-one-dimensional excitations +of the full three-dimensional model but predict only qualitatively whether +excitations are localized around the wire or delocalized in the +three-dimensional substrate. +" +Distributed Bayesian Piecewise Sparse Linear Models," The importance of interpretability of machine learning models has been +increasing due to emerging enterprise predictive analytics, threat of data +privacy, accountability of artificial intelligence in society, and so on. +Piecewise linear models have been actively studied to achieve both accuracy and +interpretability. They often produce competitive accuracy against +state-of-the-art non-linear methods. In addition, their representations (i.e., +rule-based segmentation plus sparse linear formula) are often preferred by +domain experts. 
A disadvantage of such models, however, is high computational
+cost for simultaneous determinations of the number of ""pieces"" and cardinality
+of each linear predictor, which has restricted their applicability to
+middle-scale data sets. This paper proposes a distributed factorized asymptotic
+Bayesian (FAB) inference for learning piecewise sparse linear models on
+distributed memory architectures. The distributed FAB inference solves the
+simultaneous model selection issue without communicating $O(N)$ data, where $N$ is
+the number of training samples, and achieves linear scale-out against the number
+of CPU cores. Experimental results demonstrate that the distributed FAB
+inference achieves high prediction accuracy and performance scalability with
+both synthetic and benchmark data.
+"
+Mercury's magnetic field in the MESSENGER era," MESSENGER magnetometer data show that Mercury's magnetic field is not only
+exceptionally weak but also has a unique geometry. The internal field resembles
+an axial dipole that is offset to the North by 20% of the planetary radius.
+This implies that the axial quadrupole is particularly strong while the dipole
+tilt is likely below 0.8 degree. The close proximity to the sun in combination
+with the weak internal field results in a very small and highly dynamic Hermean
+magnetosphere. We review the current understanding of Mercury's internal and
+external magnetic field and discuss possible explanations. Classical convection
+driven core dynamos have a hard time reproducing the observations. Strong
+quadrupole contributions can be promoted by different measures, but they always
+go along with a large dipole tilt and generally rather small scale fields. A
+stably stratified outer core region seems required to explain not only the
+particular geometry but also the weakness of the Hermean magnetic field. New
+interior models suggest that Mercury's core likely hosts an iron snow zone
+underneath the core-mantle boundary. 
The positive radial sulfur gradient likely +to develop in such a zone would indeed promote stable stratification. However, +even dynamo models that include the stable layer show Mercury-like magnetic +fields only for a fraction of the total simulation time. Large scale variations +in the core-mantle boundary heat flux promise to yield more persistent results +but are not compatible with the current understanding of Mercury's lower +mantle. +" +A two-stage approach for estimating the parameters of an age-group epidemic model from incidence data," Age-dependent dynamics is an important characteristic of many infectious +diseases. Age-group epidemic models describe the infection dynamics in +different age-groups by allowing to set distinct parameter values for each. +However, such models are highly nonlinear and may have a large number of +unknown parameters. Thus, parameter estimation of age-group models, while +becoming a fundamental issue for both the scientific study and policy making in +infectious diseases, is not a trivial task in practice. In this paper, we +examine the estimation of the so called next-generation matrix using incidence +data of a single entire outbreak, and extend the approach to deal with +recurring outbreaks. Unlike previous studies, we do not assume any constraints +regarding the structure of the matrix. A novel two-stage approach is developed, +which allows for efficient parameter estimation from both statistical and +computational perspectives. Simulation studies corroborate the ability to +estimate accurately the parameters of the model for several realistic +scenarios. The model and estimation method are applied to real data of +influenza-like-illness in Israel. The parameter estimates of the key relevant +epidemiological parameters and the recovered structure of the estimated +next-generation matrix are in line with results obtained in previous studies. 
+" +Sequential Double Robustness in Right-Censored Longitudinal Models," Consider estimating the G-formula for the counterfactual mean outcome under a +given treatment regime in a longitudinal study. Bang and Robins provided an +estimator for this quantity that relies on a sequential regression formulation +of this parameter. This approach is doubly robust in that it is consistent if +either the outcome regressions or the treatment mechanisms are consistently +estimated. We define a stronger notion of double robustness, termed sequential +double robustness, for estimators of the longitudinal G-formula. The definition +emerges naturally from a more general definition of sequential double +robustness for the outcome regression estimators. An outcome regression +estimator is sequentially doubly robust (SDR) if, at each subsequent time +point, either the outcome regression or the treatment mechanism is consistently +estimated. This form of robustness is exactly what one would anticipate is +attainable by studying the remainder term of a first-order expansion of the +G-formula parameter. We show that a particular implementation of an existing +procedure is SDR. We also introduce a novel SDR estimator, whose development +involves a novel translation of ideas used in targeted minimum loss-based +estimation to the infinite-dimensional setting. +" +On the treewidth of triangulated 3-manifolds," In graph theory, as well as in 3-manifold topology, there exist several +width-type parameters to describe how ""simple"" or ""thin"" a given graph or +3-manifold is. 
These parameters, such as pathwidth or treewidth for graphs, or +the concept of thin position for 3-manifolds, play an important role when +studying algorithmic problems; in particular, there is a variety of problems in +computational 3-manifold topology - some of them known to be computationally +hard in general - that become solvable in polynomial time as soon as the dual +graph of the input triangulation has bounded treewidth. +In view of these algorithmic results, it is natural to ask whether every +3-manifold admits a triangulation of bounded treewidth. We show that this is +not the case, i.e., that there exists an infinite family of closed 3-manifolds +not admitting triangulations of bounded pathwidth or treewidth (the latter +implies the former, but we present two separate proofs). +We derive these results from work of Agol, of Scharlemann and Thompson, and +of Scharlemann, Schultens and Saito by exhibiting explicit connections between +the topology of a 3-manifold M on the one hand and width-type parameters of the +dual graphs of triangulations of M on the other hand, answering a question that +had been raised repeatedly by researchers in computational 3-manifold topology. +In particular, we show that if a closed, orientable, irreducible, non-Haken +3-manifold M has a triangulation of treewidth (resp. pathwidth) k then the +Heegaard genus of M is at most 24(k+1) (resp. 4(3k+1)). +" +The Geometry of Large Tundra Lakes Observed in Historical Maps and Satellite Images," Tundra lakes are key components of the Arctic climate system because they +represent a source of methane to the atmosphere. In this paper, we aim to +analyze the geometry of the patterns formed by large ($>0.8$ km$^2$) tundra +lakes in the Russian High Arctic. 
We have studied images of tundra lakes in +historical maps from the State Hydrological Institute, Russia (date 1977; scale +$0.21166$ km/pixel) and in Landsat satellite images derived from the Google +Earth Engine (G.E.E.; date 2016; scale $0.1503$ km/pixel). The G.E.E. is a +cloud-based platform for planetary-scale geospatial analysis on over four +decades of Landsat data. We developed an image-processing algorithm to segment +these maps and images, measure the area and perimeter of each lake, and compute +the fractal dimension of the lakes in the images we have studied. Our results +indicate that as lake size increases, their fractal dimension bifurcates. For +lakes observed in historical maps, this bifurcation occurs among lakes larger +than $100$ km$^2$ (fractal dimension $1.43$ to $1.87$). For lakes observed in +satellite images this bifurcation occurs among lakes larger than $\sim$100 +km$^2$ (fractal dimension $1.31$ to $1.95$). Tundra lakes with a fractal +dimension close to $2$ have a tendency to be self-similar with respect to their +area--perimeter relationships. Area--perimeter measurements indicate that lakes +with a length scale greater than $70$ km$^2$ are power-law distributed. +Preliminary analysis of changes in lake size over time in paired lakes (lakes +that were visually matched in both the historical map and the satellite +imagery) indicate that some lakes in our study region have increased in size +over time, whereas others have decreased in size over time. Lake size change +during this 39-year time interval can be up to half the size of the lake as +recorded in the historical map. +" +Deep-Learning Convolutional Neural Networks for scattered shrub detection with Google Earth Imagery," There is a growing demand for accurate high-resolution land cover maps in +many fields, e.g., in land-use planning and biodiversity conservation. 
+Such maps have been developed using Object-Based Image Analysis
+(OBIA) methods, which usually reach good accuracies, but require high human
+supervision, and the best configuration for one image can hardly be extrapolated
+to a different image. Recently, deep learning Convolutional Neural Networks
+(CNNs) have shown outstanding results in object recognition in the field of
+computer vision. However, they have not been fully explored yet in land cover
+mapping for detecting species of high biodiversity conservation interest. This
+paper analyzes the potential of CNN-based methods for plant species detection
+using free high-resolution Google Earth TM images and provides an objective
+comparison with the state-of-the-art OBIA methods. We consider as a case study
+the detection of Ziziphus lotus shrubs, which are protected as a priority
+habitat under the European Union Habitats Directive. According to our results,
+compared to OBIA-based methods, the proposed CNN-based detection model, in
+combination with data-augmentation, transfer learning and pre-processing,
+achieves higher performance with less human intervention, and the knowledge it
+acquires in the first image can be transferred to other images, which makes the
+detection process very fast. The provided methodology can be systematically
+reproduced for other species detection.
+"
+Tight Semi-Nonnegative Matrix Factorization," The nonnegative matrix factorization is a widely used, flexible matrix
+decomposition, finding applications in biology, image and signal processing and
+information retrieval, among other areas. Here we present a related matrix
+factorization. A multi-objective optimization problem finds conical
+combinations of templates that approximate a given data matrix. The templates
+are chosen so that as far as possible only the initial data set can be
+represented this way. However, the templates are not required to be nonnegative
+nor convex combinations of the original data. 
+"
+Low Precision RNNs: Quantizing RNNs Without Losing Accuracy," Similar to convolutional neural networks, recurrent neural networks (RNNs)
+typically suffer from over-parameterization. Quantizing bit-widths of weights
+and activations results in runtime efficiency on hardware, yet it often comes
+at the cost of reduced accuracy. This paper proposes a quantization approach
+that increases model size with bit-width reduction. This approach will allow
+networks to perform at their baseline accuracy while still maintaining the
+benefits of reduced precision and overall model size reduction.
+"
+Unsupervised Representation Adversarial Learning Network: from Reconstruction to Generation," A good representation for arbitrarily complicated data should have the
+capability of semantic generation, clustering and reconstruction. Previous
+research has already achieved impressive performance on each of these
+individually. This paper
+aims at learning a disentangled representation effective for all of them in an
+unsupervised way. To achieve all the three tasks together, we learn the forward
+and inverse mapping between data and representation on the basis of a symmetric
+adversarial process. In theory, we minimize the upper bound of the two
+conditional entropy losses between the latent variables and the observations
+together to achieve the cycle consistency. The newly proposed RepGAN is tested
+on MNIST, fashionMNIST, CelebA, and SVHN datasets to perform unsupervised or
+semi-supervised classification, generation and reconstruction tasks. The result
+demonstrates that RepGAN is able to learn a useful and competitive
+representation. To the author's knowledge, our work is the first one to achieve
+both a high unsupervised classification accuracy and low reconstruction error
+on MNIST.
+"
+Quantitative Models of Imperfect Deception in Network Security using Signaling Games with Evidence," Deception plays a critical role in many interactions in communication and
+network security. 
Game-theoretic models called ""cheap talk signaling games""
+capture the dynamic and information asymmetric nature of deceptive
+interactions. But signaling games inherently model undetectable deception. In
+this paper, we investigate a model of signaling games in which the receiver can
+detect deception with some probability. This model nests traditional signaling
+games and complete information Stackelberg games as special cases. We present
+the pure strategy perfect Bayesian Nash equilibria of the game. Then we
+illustrate these analytical results with an application to active network
+defense. The presence of evidence forces majority-truthful behavior and
+eliminates some pure strategy equilibria. It always benefits the deceived
+player, but surprisingly sometimes also benefits the deceiving player.
+"
+Drawbacks and alternatives to the numerical calculation of the base inertial parameters expressions for low mobility mechanisms," Base inertial parameters constitute a minimal inertial parametrization of
+mechanical systems that is of interest, for example, in parameter estimation
+and model reduction. Numerical and symbolic methods are available to determine
+their expressions. In this paper the problems associated with the numerical
+determination of the base inertial parameters expressions in the context of low
+mobility mechanisms are analyzed and discussed through an example. To
+circumvent these problems two alternatives are proposed: a variable precision
+arithmetic implementation of the customary numerical algorithm and the
+application of a general symbolic method. Finally, the advantages of both
+approaches compared to the numerical one are discussed in the context of the
+proposed low mobility example. 
+" +Self-Paced Multitask Learning with Shared Knowledge," This paper introduces self-paced task selection to multitask learning, where +instances from more closely related tasks are selected in a progression of +easier-to-harder tasks, to emulate an effective human education strategy, but +applied to multitask machine learning. We develop the mathematical foundation +for the approach based on iterative selection of the most appropriate task, +learning the task parameters, and updating the shared knowledge, optimizing a +new bi-convex loss function. This proposed method applies quite generally, +including to multitask feature learning, multitask learning with alternating +structure optimization, etc. Results show that in each of the above +formulations self-paced (easier-to-harder) task selection outperforms the +baseline version of these methods in all the experiments. +" +Morphometric analysis of polygonal cracking patterns in desiccated starch slurries," We investigate the geometry of two-dimensional polygonal cracking that forms +on the air-exposed surface of dried starch slurries. Two different kinds of +starches, made from potato and corn, exhibited distinguished crack evolution, +and there were contrasting effects of slurry thickness on the probability +distribution of the polygonal cell area. The experimental findings are believed +to result from the difference in the shape and size of starch grains, which +strongly influence the capillary transport of water and tensile stress field +that drives the polygonal cracking. +" +Parameterized Shifted Combinatorial Optimization," Shifted combinatorial optimization is a new nonlinear optimization framework +which is a broad extension of standard combinatorial optimization, involving +the choice of several feasible solutions at a time. This framework captures +well studied and diverse problems ranging from so-called vulnerability problems +to sharing and partitioning problems. 
In particular, every standard
+combinatorial optimization problem has its shifted counterpart, which is
+typically much harder. Already with an explicitly given input set the shifted
+problem may be NP-hard. In this article we initiate a study of the
+parameterized complexity of this framework. First we show that shifting over an
+explicitly given set with its cardinality as the parameter may be in XP, FPT or
+P, depending on the objective function. Second, we study the shifted problem
+over sets definable in MSO logic (which includes, e.g., the well known MSO
+partitioning problems). Our main results here are that shifted combinatorial
+optimization over MSO definable sets is in XP with respect to the MSO formula
+and the treewidth (or more generally clique-width) of the input graph, and is
+W[1]-hard even under further severe restrictions.
+"
+Mermin-Wagner at the Crossover Temperature," Mermin-Wagner excludes spontaneous (staggered) magnetization in isotropic
+ferromagnetic (antiferromagnetic) Heisenberg models at finite temperature in
+spatial dimensions $d \le 2$. While the proof relies on the Bogoliubov
+inequality, here we illuminate the theorem from an effective field theory point
+of view. We estimate the crossover temperature $T_c$ and show that, in weak
+external fields $H$, it tends to zero: $T_c \propto \sqrt{H}$ ($d=1$) and $T_c
+\propto 1/|\ln H|$ ($d=2$). Including the case $d=3$, we derive upper bounds
+for the (staggered) magnetization by combining microscopic and effective
+perspectives -- unfortunately, these bounds are not restrictive.
+"
+The injectivity radius of Lie manifolds," We prove in a direct, geometric way that for any compatible Riemannian metric
+on a Lie manifold the injectivity radius is positive.
+"
+Inverse Risk-Sensitive Reinforcement Learning," We address the problem of inverse reinforcement learning in Markov decision
+processes where the agent is risk-sensitive. 
In particular, we model +risk-sensitivity in a reinforcement learning framework by making use of models +of human decision-making having their origins in behavioral psychology, +behavioral economics, and neuroscience. We propose a gradient-based inverse +reinforcement learning algorithm that minimizes a loss function defined on the +observed behavior. We demonstrate the performance of the proposed technique on +two examples, the first of which is the canonical Grid World example and the +second of which is a Markov decision process modeling passengers' decisions +regarding ride-sharing. In the latter, we use pricing and travel time data from +a ride-sharing company to construct the transition probabilities and rewards of +the Markov decision process. +" +Correlations and confinement of excitations in an asymmetric Hubbard ladder," Correlation functions and low-energy excitations are investigated in the +asymmetric two-leg ladder consisting of a Hubbard chain and a noninteracting +tight-binding (Fermi) chain using the density matrix renormalization group +method. The behavior of charge, spin and pairing correlations is discussed for +the four phases found at half filling, namely, Luttinger liquid, Kondo-Mott +insulator, spin-gapped Mott insulator and correlated band insulator. +Quasi-long-range antiferromagnetic spin correlations are found in the Hubbard +leg in the Luttinger liquid phase only. Pair-density-wave correlations are +studied to understand the structure of bound pairs found in the Fermi leg of +the spin-gapped Mott phase at half filling and at light doping but we find no +enhanced pairing correlations. Low-energy excitations cause variations of spin +and charge densities on both legs that demonstrate the confinement of the +lowest charge excitations on the Fermi leg while the lowest spin excitations +are localized on the Hubbard leg in the three insulating phases. 
The velocities
+of charge, spin, and single-particle excitations are investigated to clarify
+the confinement of elementary excitations in the Luttinger liquid phase. The
+observed spatial separation of elementary spin and charge excitations could
+facilitate the coexistence of different (quasi-)long-range orders in
+higher-dimensional extensions of the asymmetric Hubbard ladder.
+"
+Notes about collision monochromatization in $e^+e^-$ colliders," The manuscript describes several monochromatization schemes starting from
+A.~Renieri's proposal \cite{ref:Renieri} for head-on collisions based on the
+correlation between particles' transverse position and energy deviation. We
+briefly explain the initial proposal and expand it for crossing angle
+collisions. Then we discuss a new monochromatization scheme for crossing angle
+collisions based on the correlation between particles' longitudinal position
+and energy deviation.
+"
+Combinatorial Cost Sharing," We introduce a combinatorial variant of the cost sharing problem: several
+services can be provided to each player and each player values every
+combination of services differently. A publicly known cost function specifies
+the cost of providing every possible combination of services. A combinatorial
+cost sharing mechanism is a protocol that decides which services each player
+gets and at what price. We look for dominant strategy mechanisms that are
+(economically) efficient and cover the cost, ideally without overcharging
+(i.e., budget balanced). Note that unlike the standard cost sharing setting,
+combinatorial cost sharing is a multi-parameter domain. This makes designing
+dominant strategy mechanisms with good guarantees a challenging task.
+We present the Potential Mechanism -- a combination of the VCG mechanism and
+a well-known tool from the theory of cooperative games: Hart and Mas-Colell's
+potential function. The potential mechanism is a dominant strategy mechanism
+that always covers the incurred cost.
When the cost function is subadditive the +same mechanism is also approximately efficient. Our main technical contribution +shows that when the cost function is submodular the potential mechanism is +approximately budget balanced in three settings: supermodular valuations, +symmetric cost function and general symmetric valuations, and two players with +general valuations. +" +A new family of one-coincidence sets of sequences with dispersed elements for frequency hopping CDMA systems," We present a new family of one-coincidence sequence sets suitable for +frequency hopping code division multiple access (FH-CDMA) systems with +dispersed (low density) sequence elements. These sets are derived from +one-coincidence prime sequence sets, such that for each one-coincidence prime +sequence set there is a new one-coincidence set comprised of sequences with +dispersed sequence elements, required in some circumstances, for FH-CDMA +systems. Getting rid of crowdedness of sequence elements is achieved by +doubling the size of the sequence element alphabet. In addition, this doubling +process eases control over the distance between adjacent sequence elements. +Properties of the new sets are discussed. +" +Crowdsourcing with Tullock contests: A new perspective," Incentive mechanisms for crowdsourcing have been extensively studied under +the framework of all-pay auctions. Along a distinct line, this paper proposes +to use Tullock contests as an alternative tool to design incentive mechanisms +for crowdsourcing. We are inspired by the conduciveness of Tullock contests to +attracting user entry (yet not necessarily a higher revenue) in other domains. +In this paper, we explore a new dimension in optimal Tullock contest design, by +superseding the contest prize---which is fixed in conventional Tullock +contests---with a prize function that is dependent on the (unknown) winner's +contribution, in order to maximize the crowdsourcer's utility. 
We show that
+this approach leads to attractive practical advantages: (a) it is well-suited
+for rapid prototyping in fully distributed web agents and smartphone apps; (b)
+it overcomes the disincentive to participate caused by players' antagonism to
+an increasing number of rivals. Furthermore, we optimize conventional,
+fixed-prize Tullock contests to construct the strongest benchmark to compare
+against our mechanism. Through extensive evaluations, we show that our
+mechanism significantly outperforms the optimal benchmark, by over threefold
+on the crowdsourcer's utility and profit and up to ninefold on the players'
+social welfare.
+"
+Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent," Most distributed machine learning systems nowadays, including TensorFlow and
+CNTK, are built in a centralized fashion. One bottleneck of centralized
+algorithms lies in the high communication cost on the central node. Motivated
+by this, we ask, can decentralized algorithms be faster than their centralized
+counterparts?
+Although decentralized PSGD (D-PSGD) algorithms have been studied by the
+control community, existing analysis and theory do not show any advantage over
+centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario
+where only the decentralized network is available. In this paper, we study a
+D-PSGD algorithm and provide the first theoretical analysis that indicates a
+regime in which decentralized algorithms might outperform centralized
+algorithms for distributed stochastic gradient descent. This is because D-PSGD
+has comparable total computational complexities to C-PSGD but requires much
+less communication cost on the busiest node. We further conduct an empirical
+study to validate our theoretical analysis across multiple frameworks (CNTK and
+Torch), different network configurations, and computation platforms up to 112
+GPUs.
On network configurations with low bandwidth or high latency, D-PSGD can +be up to one order of magnitude faster than its well-optimized centralized +counterparts. +" +Learning User Intent from Action Sequences on Interactive Systems," Interactive systems have taken over the web and mobile space with increasing +participation from users. Applications across every marketing domain can now be +accessed through mobile or web where users can directly perform certain actions +and reach a desired outcome. Actions of user on a system, though, can be +representative of a certain intent. Ability to learn this intent through user's +actions can help draw certain insight into the behavior of users on a system. +In this paper, we present models to optimize interactive systems by learning +and analyzing user intent through their actions on the system. We present a +four phased model that uses time-series of interaction actions sequentially +using a Long Short-Term Memory (LSTM) based sequence learning system that helps +build a model for intent recognition. Our system then provides an objective +specific maximization followed by analysis and contrasting methods in order to +identify spaces of improvement in the interaction system. We discuss deployment +scenarios for such a system and present results from evaluation on an online +marketplace using user clickstream data. +" +Laman Graphs are Generically Bearing Rigid in Arbitrary Dimensions," This paper addresses the problem of constructing bearing rigid networks in +arbitrary dimensions. We first show that the bearing rigidity of a network is a +generic property that is critically determined by the underlying graph of the +network. A new notion termed generic bearing rigidity is defined for graphs. If +the underlying graph of a network is generically bearing rigid, then the +network is bearing rigid for almost all configurations; otherwise, the network +is not bearing rigid for any configuration. 
As a result, the key to constructing
+bearing rigid networks is to construct generically bearing rigid graphs. The
+main contribution of this paper is to prove that Laman graphs, which can be
+generated by the Henneberg construction, are generically bearing rigid in
+arbitrary dimensions. As a consequence, if the underlying graph of a network is
+Laman, the network is bearing rigid for almost all configurations in arbitrary
+dimensions.
+"
+"Finding Dominating Induced Matchings in $(S_{2,2,3})$-Free Graphs in Polynomial Time"," Let $G=(V,E)$ be a finite undirected graph. An edge set $E' \subseteq E$ is a
+{\em dominating induced matching} ({\em d.i.m.}) in $G$ if every edge in $E$ is
+intersected by exactly one edge of $E'$. The \emph{Dominating Induced Matching}
+(\emph{DIM}) problem asks for the existence of a d.i.m.\ in $G$; this problem
+is also known as the \emph{Efficient Edge Domination} problem; it is the
+Efficient Domination problem for line graphs.
+The DIM problem is motivated by parallel resource allocation problems,
+encoding theory and network routing. It is \NP-complete even for very
+restricted graph classes such as planar bipartite graphs with maximum degree 3
+and is solvable in linear time for $P_7$-free graphs, and in polynomial time
+for $S_{1,2,4}$-free graphs as well as for $S_{2,2,2}$-free graphs. In this
+paper, combining two distinct approaches, we solve it in polynomial time for
+$S_{2,2,3}$-free graphs.
+"
+Symmetry reduction and soliton-like solutions for the generalized Korteweg-de Vries equation," We analyze the gKdV equation, a generalized version of Korteweg-de Vries with
+an arbitrary function $f(u)$. In general, for a function $f(u)$ the Lie algebra
+of symmetries of gKdV is the $2$-dimensional Lie algebra of translations of the
+plane $xt$. This implies the existence of plane wave solutions. Indeed, for
+some specific values of $f(u)$ the equation gKdV admits a Lie algebra of
+symmetries of dimension greater than $2$.
We compute the similarity reductions
+corresponding to these exceptional symmetries. We prove that the gKdV equation
+has soliton-like solutions under some general assumptions, and we find a closed
+formula for the plane wave solutions, which are of hyperbolic secant type.
+"
+Existence theory for magma equations in dimension two and higher," We examine a degenerate, dispersive, nonlinear wave equation related to the
+evolution of partially molten rock in dimensions two and higher. This
+simplified model, for a scalar field capturing the melt fraction by volume, has
+been studied by direct numerical simulation where it has been observed to
+develop stable solitary waves. In this work, we prove local in time
+well-posedness results for the time dependent equation, on both the whole space
+and the torus, for dimensions two and higher. We also prove the existence of
+the solitary wave solutions in dimensions two and higher.
+"
+A short guide through integration theorems of generalized distributions," The generalization of Frobenius' theorem to foliations with singularities is
+usually attributed to Stefan and Sussmann, for their simultaneous discovery
+around 1973. However, their result is often referred to without caring much
+about the precise statement, as some sort of magic spell. This may be explained
+by the fact that the literature is not consensual on a unique formulation of
+the theorem, and because the history of the research leading to this result has
+been flawed by many claims that turned out to be refuted some years later.
+This, together with the difficulty of doing proof-reading on this topic,
+brought much confusion about the precise statement of Stefan-Sussmann's
+theorem. This paper is dedicated to bringing some light to this subject, by
+investigating the different statements and arguments that were put forward in
+geometric control theory between 1962 and 1994 regarding the problem of
+integrability of generalized distributions.
We will present the genealogy of the main ideas and
+show that many mathematicians who were involved in this field made some
+mistakes that were successfully refuted. Moreover, we want to address the
+prominent influence of Hermann on this topic, as well as the fact that some
+statements of Stefan and Sussmann turned out to be wrong. In this paper, we
+intend to provide the reader with a deeper understanding of the problem of
+integrability of generalized distributions, and to reduce the confusion
+surrounding these difficult questions.
+"
+Polarization induced interference within electromagnetically induced transparency for atoms of double-V linkage," People have been paying attention to the role of atoms' complex internal
+level structures in the research of electromagnetically induced transparency
+(EIT) for a long time, where the various degenerate Zeeman levels usually
+generate complex linkage patterns for the atomic transitions. It turns out
+that, with special choices of the atomic states and the atomic transitions'
+linkage structure, clear signatures of quantum interference induced by the
+probe and coupling light's polarizations can emerge from a typical EIT
+phenomenon. We propose to study a four-state system with a double-V linkage
+pattern for the transitions and analyze the polarization induced interference
+under the EIT condition. We show that such interference arises naturally under
+mild conditions on the optical field and atom manipulation. Its anticipated
+properties and its potential application to all-optical switching in the
+polarization degree of freedom are also discussed. Moreover, we construct a
+variation form of the double-M linkage pattern where the polarization induced
+interference enables polarization-dependent cross-modulation between incident
+lights that can be effective even at the few-photon level.
The theme is to gain
+more insight into the essential question: how can we build a non-trivial
+optical medium where incident lights will induce polarization-dependent
+non-linear optical interactions, covering a wide range of the incidence
+intensity from the many-photon level to the few-photon level.
+"
+A note on X-rays of permutations and a problem of Brualdi and Fritscher," The subject of this note is a challenging conjecture about X-rays of
+permutations which is a special case of a conjecture regarding Skolem
+sequences. In relation to this, Brualdi and Fritscher [Linear Algebra and its
+Applications, 2014] posed the following problem: Determine a bijection between
+extremal Skolem sets and binary Hankel X-rays of permutation matrices. We give
+such a bijection, along with some related observations.
+"
+Differential Evolution and Bayesian Optimisation for Hyper-Parameter Selection in Mixed-Signal Neuromorphic Circuits Applied to UAV Obstacle Avoidance," The Lobula Giant Movement Detector (LGMD) is an identified neuron of the
+locust that detects looming objects and triggers its escape responses.
+Understanding the neural principles and networks that lead to these fast and
+robust responses can lead to the design of efficient obstacle avoidance
+strategies in robotic applications. Here we present a neuromorphic
+spiking neural network model of the LGMD driven by the output of a neuromorphic
+Dynamic Vision Sensor (DVS), which has been optimised to produce robust and
+reliable responses in the face of the constraints and variability of its mixed
+signal analogue-digital circuits. As this LGMD model has many parameters, we
+use the Differential Evolution (DE) algorithm to optimise its parameter space.
+We also investigate the use of Self-Adaptive Differential Evolution (SADE)
+which has been shown to ameliorate the difficulties of finding appropriate
+input parameters for DE.
We explore the use of two biological mechanisms:
+synaptic plasticity and membrane adaptivity in the LGMD. We apply DE and SADE
+to find parameters best suited for an obstacle avoidance system on an unmanned
+aerial vehicle (UAV), and show how it outperforms state-of-the-art Bayesian
+optimisation used for comparison.
+"
+Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking," Methods that learn representations of nodes in a graph play a critical role
+in network analysis since they enable many downstream learning tasks. We
+propose Graph2Gauss - an approach that can efficiently learn versatile node
+embeddings on large scale (attributed) graphs that show strong performance on
+tasks such as link prediction and node classification. Unlike most approaches
+that represent nodes as point vectors in a low-dimensional continuous space, we
+embed each node as a Gaussian distribution, allowing us to capture uncertainty
+about the representation. Furthermore, we propose an unsupervised method that
+handles inductive learning scenarios and is applicable to different types of
+graphs: plain/attributed, directed/undirected. By leveraging both the network
+structure and the associated node attributes, we are able to generalize to
+unseen nodes without additional training. To learn the embeddings we adopt a
+personalized ranking formulation w.r.t. the node distances that exploits the
+natural ordering of the nodes imposed by the network structure. Experiments on
+real world networks demonstrate the high performance of our approach,
+outperforming state-of-the-art network embedding methods on several different
+tasks. Additionally, we demonstrate the benefits of modeling uncertainty - by
+analyzing it we can estimate neighborhood diversity and detect the intrinsic
+latent dimensionality of a graph.
+"
+Explaining Anomalies in Groups with Characterizing Subspace Rules," Anomaly detection has numerous applications and has been studied
+extensively.
We +consider a complementary problem that has a much sparser literature: anomaly +description. Interpretation of anomalies is crucial for practitioners for +sense-making, troubleshooting, and planning actions. To this end, we present a +new approach called x-PACS (for eXplaining Patterns of Anomalies with +Characterizing Subspaces), which ""reverse-engineers"" the known anomalies by +identifying (1) the groups (or patterns) that they form, and (2) the +characterizing subspace and feature rules that separate each anomalous pattern +from normal instances. Explaining anomalies in groups not only saves analyst +time and gives insight into various types of anomalies, but also draws +attention to potentially critical, repeating anomalies. +In developing x-PACS, we first construct a desiderata for the anomaly +description problem. From a descriptive data mining perspective, our method +exhibits five desired properties in our desiderata. Namely, it can unearth +anomalous patterns (i) of multiple different types, (ii) hidden in arbitrary +subspaces of a high dimensional space, (iii) interpretable by the analysts, +(iv) different from normal patterns of the data, and finally (v) succinct, +providing the shortest data description. Furthermore, x-PACS is highly +parallelizable and scales linearly in terms of data size. +No existing work on anomaly description satisfies all of these properties +simultaneously. While not our primary goal, the anomalous patterns we find +serve as interpretable ""signatures"" and can be used for detection. We show the +effectiveness of x-PACS in explanation as well as detection on real-world +datasets as compared to state-of-the-art. +" +VAMPnets: Deep learning of molecular kinetics," There is an increasing demand for computing the relevant structures, +equilibria and long-timescale kinetics of biomolecular processes, such as +protein-drug binding, from high-throughput molecular dynamics simulations. 
+
Current methods employ transformation of simulated coordinates into structural
+features, dimension reduction, clustering the dimension-reduced data, and
+estimation of a Markov state model or related model of the interconversion
+rates between molecular structures. This handcrafted approach demands a
+substantial amount of modeling expertise, as poor decisions at any step will
+lead to large modeling errors. Here we employ the variational approach for
+Markov processes (VAMP) to develop a deep learning framework for molecular
+kinetics using neural networks, dubbed VAMPnets. A VAMPnet encodes the entire
+mapping from molecular coordinates to Markov states, thus combining the whole
+data processing pipeline in a single end-to-end framework. Our method performs
+as well as or better than state-of-the-art Markov modeling methods and provides
+easily interpretable few-state kinetic models.
+"
+Toward deciphering developmental patterning with deep neural network," Dynamics of complex biological systems is driven by intricate networks, the
+current knowledge of which is often incomplete. The traditional systems
+biology modeling usually implements an ad hoc fixed set of differential
+equations with predefined function forms. Such an approach often suffers from
+overfitting or underfitting and thus inadequate predictive power, especially
+when dealing with systems of high complexity. This problem could be overcome by
+a deep neural network (DNN). Choosing pattern formation of the gap genes in
+Drosophila early embryogenesis as an example, we established a differential
+equation model whose synthesis term is expressed as a DNN. The model yields
+perfect fitting and impressively accurate predictions on mutant patterns. We
+further mapped the trained DNN into a simplified conventional regulation
+network, which is consistent with the existing body of knowledge.
The DNN model +could lay a foundation of ""in-silico-embryo"", which can regenerate a great +variety of interesting phenomena, and on which one can perform all kinds of +perturbations to discover underlying mechanisms. This approach can be readily +applied to a variety of complex biological systems. +" +Whispered-to-voiced Alaryngeal Speech Conversion with Generative Adversarial Networks," Most methods of voice restoration for patients suffering from aphonia either +produce whispered or monotone speech. Apart from intelligibility, this type of +speech lacks expressiveness and naturalness due to the absence of pitch +(whispered speech) or artificial generation of it (monotone speech). Existing +techniques to restore prosodic information typically combine a vocoder, which +parameterises the speech signal, with machine learning techniques that predict +prosodic information. In contrast, this paper describes an end-to-end neural +approach for estimating a fully-voiced speech waveform from whispered +alaryngeal speech. By adapting our previous work in speech enhancement with +generative adversarial networks, we develop a speaker-dependent model to +perform whispered-to-voiced speech conversion. Preliminary qualitative results +show effectiveness in re-generating voiced speech, with the creation of +realistic pitch contours. +" +A graph-theoretic description of scale-multiplicative semigroups of automorphisms," It is shown that a flat subgroup, $H$, of the totally disconnected, locally +compact group $G$ decomposes into a finite number of subsemigroups on which the +scale function is multiplicative. The image, $P$, of a multiplicative semigroup +in the quotient, $H/H(1)$, of $H$ by its uniscalar subgroup has a unique +minimal generating set which determines a natural Cayley graph structure on +$P$. 
For each compact, open subgroup $U$ of $G$, a graph is defined and it is +shown that if $P$ is multiplicative over $U$ then this graph is a regular, +rooted, strongly simple $P$-graph. This extends to higher rank the result of R. +Möller that $U$ is tidy for $x$ if and only if a certain graph is a regular, +rooted tree. +" +Half-Heusler alloy LiBaBi: A new topological semimetal with five-fold band degeneracy," Based on first-principles study, we report the finding of a new topological +semimetal LiBaBi in half-Heusler phase. The remarkable feature of this +nonmagnetic, inversion-symmetry-breaking material is that it consists of only +simple $s$- and $p$-block elements. Interestingly, the material is ordinary +insulator in the absence of spin-orbit coupling (SOC) and becomes nodal-surface +topological semimetal showing drumhead states when SOC is included. This is in +stark contrast to other nodal-line and nodal-surface semimetals, where the +extended nodal structure is destroyed once SOC is included. Importantly, the +linear band crossings host three-, four-, five- and six-fold degeneracies near +the Fermi level, making this compound very attractive for the study of +`unconventional' fermions. The band crossing points form a three-dimensional +nodal structure around the zone center at the Fermi level. We identify the +surface states responsible for the appearance of the drumhead states. The alloy +also shows a phase transition from topological semimetal to a trivial insulator +on application of pressure. In addition to revealing an intriguing effect of +SOC on the nodal structure, our findings introduce a new half-Heusler alloy in +the family of topological semimetals, thus creating more avenues for +experimental exploration. +" +A task-driven implementation of a simple numerical solver for hyperbolic conservation laws," This article describes the implementation of an all-in-one numerical +procedure within the runtime StarPU. 
In order to limit the complexity of the
+method, for the sake of clarity of the presentation of the non-classical
+task-driven programming environment, we have limited the numerics to first
+order in space and time. Results show that the task distribution is efficient
+if the tasks are numerous and individually large enough so that the task heap
+can be saturated by tasks whose computational time covers the task management
+overhead. Next, we also see that even though they are mostly faster on graphics
+cards, not all the tasks are suitable for GPUs, which brings forward the
+importance of the task scheduler. Finally, we look at a more realistic system
+of conservation laws with an expensive source term, which allows us to conclude
+and to open onto future work involving higher local arithmetic intensity, by
+increasing the order of the numerical method or by enriching the model
+(increased number of parameters and therefore equations).
+"
+Composition of PPT Maps," M. Christandl conjectured that the composition of any trace preserving PPT
+map with itself is entanglement breaking. We prove that Christandl's conjecture
+holds asymptotically by showing that the distance between the iterates of any
+unital or trace preserving PPT map and the set of entanglement breaking maps
+tends to zero. Finally, for every graph we define a one-parameter family of
+maps on matrices and determine the least value of the parameter such that the
+map is, variously, positive, completely positive, PPT and entanglement breaking
+in terms of properties of the graph. Our estimates are sharp enough to conclude
+that Christandl's conjecture holds for these families.
+"
+Multi-View Image Generation from a Single-View," This paper addresses a challenging problem -- how to generate multi-view
+cloth images from only a single view input.
To generate realistic-looking +images with different views from the input, we propose a new image generation +model termed VariGANs that combines the strengths of the variational inference +and the Generative Adversarial Networks (GANs). Our proposed VariGANs model +generates the target image in a coarse-to-fine manner instead of a single pass +which suffers from severe artifacts. It first performs variational inference to +model global appearance of the object (e.g., shape and color) and produce a +coarse image with a different view. Conditioned on the generated low resolution +images, it then proceeds to perform adversarial learning to fill details and +generate images of consistent details with the input. Extensive experiments +conducted on two clothing datasets, MVC and DeepFashion, have demonstrated that +images of a novel view generated by our model are more plausible than those +generated by existing approaches, in terms of more consistent global appearance +as well as richer and sharper details. +" +Equidistribution of Neumann data mass on simplices and a simple inverse problem," In this paper we study the behaviour of the Neumann data of Dirichlet +eigenfunctions on simplices. We prove that the $L^2$ norm of the +(semi-classical) Neumann data on each face is equal to $2/n$ times the +$(n-1)$-dimensional volume of the face divided by the volume of the simplex. +This is a generalization of \cite{Chr-tri} to higher dimensions. Again it is +{\it not} an asymptotic, but an exact formula. The proof is by simple +integrations by parts and linear algebra. +We also consider the following inverse problem: do the {\it norms} of the +Neumann data on a simplex determine a constant coefficient elliptic operator? +The answer is yes in dimension 2 and no in higher dimensions. +" +A machine learning approach for efficient uncertainty quantification using multiscale methods," Several multiscale methods account for sub-grid scale features using coarse +scale basis functions. 
For example, in the Multiscale Finite Volume method the +coarse scale basis functions are obtained by solving a set of local problems +over dual-grid cells. We introduce a data-driven approach for the estimation of +these coarse scale basis functions. Specifically, we employ a neural network +predictor fitted using a set of solution samples from which it learns to +generate subsequent basis functions at a lower computational cost than solving +the local problems. The computational advantage of this approach is realized +for uncertainty quantification tasks where a large number of realizations has +to be evaluated. We attribute the ability to learn these basis functions to the +modularity of the local problems and the redundancy of the permeability patches +between samples. The proposed method is evaluated on elliptic problems yielding +very promising results. +" +Quadratic Upper Bound for Recursive Teaching Dimension of Finite VC Classes," In this work we study the quantitative relation between the recursive +teaching dimension (RTD) and the VC dimension (VCD) of concept classes of +finite sizes. The RTD of a concept class $\mathcal C \subseteq \{0, 1\}^n$, +introduced by Zilles et al. (2011), is a combinatorial complexity measure +characterized by the worst-case number of examples necessary to identify a +concept in $\mathcal C$ according to the recursive teaching model. +For any finite concept class $\mathcal C \subseteq \{0,1\}^n$ with +$\mathrm{VCD}(\mathcal C)=d$, Simon & Zilles (2015) posed an open problem +$\mathrm{RTD}(\mathcal C) = O(d)$, i.e., is RTD linearly upper bounded by VCD? +Previously, the best known result is an exponential upper bound +$\mathrm{RTD}(\mathcal C) = O(d \cdot 2^d)$, due to Chen et al. (2016). In this +paper, we show a quadratic upper bound: $\mathrm{RTD}(\mathcal C) = O(d^2)$, +much closer to an answer to the open problem. We also discuss the challenges in +fully solving the problem. 
+" +Joining Jolie to Docker - Orchestration of Microservices on a Containers-as-a-Service Layer," Cloud computing is steadily growing and, as IaaS vendors have started to +offer pay-as-you-go billing policies, it is fundamental to achieve as much +elasticity as possible, avoiding over-provisioning that would imply higher +costs. In this paper, we briefly analyse the orchestration characteristics of +PaaSSOA, a proposed architecture already implemented for Jolie microservices, +and Kubernetes, one of the various orchestration plugins for Docker; then, we +outline similarities and differences of the two approaches, with respect to +their own domain of application. Furthermore, we investigate some ideas to +achieve a federation of the two technologies, proposing an architectural +composition of Jolie microservices on Docker Container-as-a-Service layer. +" +On Calculation of Bounds for Greedy Algorithms when Applied to Sensor Selection Problems," We consider the problem of studying the performance of greedy algorithm on +sensor selection problem for stable linear systems with Kalman Filter. +Specifically, the objective is to find the system parameters that affects the +performance of greedy algorithms and conditions where greedy algorithm always +produces optimal solutions. In this paper, we developed an upper bound for +performance ratio of greedy algorithm, which is based on the work of Dr.Zhang +\cite{Sundaram} and offers valuable insight into the system parameters that +affects the performance of greedy algorithm. We also proposes a set of +conditions where greedy algorithm will always produce the optimal solution. We +then show in simulations how the system parameters mentioned by the performance +ratio bound derived in this work affects the performance of greedy algorithm. +" +Generators of reductions of ideals in a local Noetherian ring with finite residue field," Let $(R,\mathfrak{m})$ be a local Noetherian ring with residue field $k$. 
+While much is known about the generating sets of reductions of ideals of $R$ if +$k$ is infinite, the case in which $k$ is finite is less well understood. We +investigate the existence (or lack thereof) of proper reductions of an ideal of +$R$ and the number of generators needed for a reduction in the case $k$ is a +finite field. When $R$ is one-dimensional, we give a formula for the smallest +integer $n$ for which every ideal has an $n$-generated reduction. It follows +that in a one-dimensional local Noetherian ring every ideal has a principal +reduction if and only if the number of maximal ideals in the normalization of +the reduced quotient of $R$ is at most $|k|$. In higher dimensions, we show +that for any positive integer $n$, there exists an ideal of $R$ that does not have +an $n$-generated reduction and that if $n \geq \dim R$ this ideal can be chosen +to be $\mathfrak{m}$-primary. In the case where $R$ is a two-dimensional +regular local ring, we construct an example of an integrally closed +$\mathfrak{m}$-primary ideal that does not have a $2$-generated reduction and +thus answer in the negative a question raised by Heinzer and Shannon. +" +Particle-like Structure of Lie algebras," If a Lie algebra structure g on a vector space is the sum of a family of +mutually compatible Lie algebra structures g_i's, we say that g is simply +assembled from the g_i's. Repeating this procedure with a number of Lie +algebras, themselves simply assembled from the g_i's, one obtains a Lie algebra +assembled in two steps from the g_i's, and so on. We describe the process of +modular disassembling of a Lie algebra into a unimodular and a non-unimodular +part. We then study two inverse questions: which Lie algebras can be assembled +from a given family of Lie algebras, and from which Lie algebras can a given +Lie algebra be assembled? 
We develop some basic assembling and disassembling +techniques that constitute the elements of a new approach to the general theory +of Lie algebras. The main result of our theory is that any finite-dimensional +Lie algebra over an algebraically closed field of characteristic zero or over R +can be assembled in a finite number of steps from two elementary constituents, +which we call dyons and triadons. Up to an abelian summand, a dyon is a Lie +algebra structure isomorphic to the non-abelian 2-dimensional Lie algebra, +while a triadon is isomorphic to the 3-dimensional Heisenberg Lie algebra. As +an example, we describe constructions of classical Lie algebras from triadons. +" +"Reversed Dickson polynomials of the $(k+1)$-th kind over finite fields, II"," Let $p$ be an odd prime. In this paper, we study the permutation behaviour of +the reversed Dickson polynomials of the $(k+1)$-th kind $D_{n,k}(1,x)$ when +$n=p^{l_1}+3$, $n=p^{l_1}+p^{l_2}+p^{l_3}$, and +$n=p^{l_1}+p^{l_2}+p^{l_3}+p^{l_4}$, where $l_1, l_2$, $l_3$, and $l_4$ are +non-negative integers. A generalization to $n=p^{l_1}+p^{l_2}+\cdots +p^{l_i}$ +is also shown. We find some conditions under which $D_{n,k}(1,x)$ is not a +permutation polynomial over finite fields for certain values of $n$ and $k$. We +also present a generalization of a recent result regarding $D_{p^l-1,1}(1,x)$ +and present some algebraic and arithmetic properties of $D_{n,k}(1,x)$. +" +Improving Session Recommendation with Recurrent Neural Networks by Exploiting Dwell Time," Recently, Recurrent Neural Networks (RNNs) have been applied to the task of +session-based recommendation. These approaches use RNNs to predict the next +item in a user session based on the previously visited items. While some +approaches consider additional item properties, we argue that item dwell time +can be used as an implicit measure of user interest to improve session-based +item recommendations. 
We propose an extension to existing RNN approaches that +captures user dwell time in addition to the visited items and show that +recommendation performance can be improved. Additionally, we investigate the +usefulness of a single validation split for model selection in the case of +minor improvements and find that in our case the best model is not selected and +a fold-like study with different validation sets is necessary to ensure the +selection of the best model. +" +Universal Spatiotemporal Sampling Sets for Discrete Spatially Invariant Evolution Systems," Let $(I,+)$ be a finite abelian group and $\mathbf{A}$ be a circular +convolution operator on $\ell^2(I)$. The problem under consideration is how to +construct minimal $\Omega \subset I$ and $l_i$ such that $Y=\{\mathbf{e}_i, +\mathbf{A}\mathbf{e}_i, \cdots, \mathbf{A}^{l_i}\mathbf{e}_i: i\in \Omega\}$ is +a frame for $\ell^2(I)$, where $\{\mathbf{e}_i: i\in I\}$ is the canonical +basis of $\ell^2(I)$. This problem is motivated by the spatiotemporal sampling +problem in discrete spatially invariant evolution systems. We will show that +the cardinality of $\Omega $ should be at least equal to the largest geometric +multiplicity of eigenvalues of $\mathbf{A}$, and we consider the universal +spatiotemporal sampling sets $(\Omega, l_i)$ for convolution operators +$\mathbf{A}$ with eigenvalues subject to the same largest geometric +multiplicity. We will give an algebraic characterization for such sampling sets +and show how this problem is linked with sparse signal processing theory and +polynomial interpolation theory. +" +Self-Organizing Maps Classification with Application to Laptop's Adapters Magnetic Field," This paper presents an application of the Self-Organizing-Map classification +method, which is used for classification of the extremely low frequency +magnetic field emission in the near neighborhood of the laptop adapters. 
The +experiment is performed on different laptop adapters of the same +characteristics. After that, the Self-Organizing-Map classification on the +obtained emission data is performed. The classification results establish the +typical emission levels of the laptop adapters, which are far above the safety +standards' limit. At the end, a discussion is carried out about the importance +of using the classification as a possible solution for safely using the laptop +adapters in order to reduce the negative effects of the magnetic field emission +on the laptop users. +" +Attention Models in Graphs: A Survey," Graph-structured data arise naturally in many different application domains. +By representing data as graphs, we can capture entities (i.e., nodes) as well +as their relationships (i.e., edges) with each other. Many useful insights can +be derived from graph-structured data as demonstrated by an ever-growing body +of work focused on graph mining. However, in the real world, graphs can be both +large - with many complex patterns - and noisy, which can pose a problem for +effective graph mining. An effective way to deal with this issue is to +incorporate ""attention"" into graph mining solutions. An attention mechanism +allows a method to focus on task-relevant parts of the graph, helping it to +make better decisions. In this work, we conduct a comprehensive and focused +survey of the literature on the emerging field of graph attention models. We +introduce three intuitive taxonomies to group existing work. These are based on +problem setting (type of input and output), the type of attention mechanism +used, and the task (e.g., graph classification, link prediction, etc.). We +motivate our taxonomies through detailed examples and use each to survey +competing approaches from a unique standpoint. Finally, we highlight several +challenges in the area and discuss promising directions for future work. 
+" +Multiplexing 200 modes on a single digital hologram," The on-demand tailoring of light's spatial shape is of great relevance in a +wide variety of research areas. Computer-controlled devices, such as Spatial +Light Modulators (SLMs) or Digital Micromirror Devices (DMDs), offer a very +accurate, flexible and fast holographic means to this end. Remarkably, digital +holography affords the simultaneous generation of multiple beams +(multiplexing), a tool with numerous applications in many fields. Here, we +provide a self-contained tutorial on light beam multiplexing. Through the use +of several examples, the readers will be guided step by step in the process of +light beam shaping and multiplexing. Additionally, on the multiplexing +capabilities of SLMs to provide a quantitative analysis on the maximum number +of beams that can be multiplexed on a single SLM, showing approximately 200 +modes on a single hologram. +" +Ranking influential spreaders is an ill-defined problem," Finding influential spreaders of information and disease in networks is an +important theoretical problem, and one of considerable recent interest. It has +been almost exclusively formulated as a node-ranking problem -- methods for +identifying influential spreaders rank nodes according to how influential they +are. In this work, we show that the ranking approach does not necessarily work: +the set of most influential nodes depends on the number of nodes in the set. +Therefore, the set of $n$ most important nodes to vaccinate does not need to +have any node in common with the set of $n+1$ most important nodes. We propose +a method for quantifying the extent and impact of this phenomenon, and show +that it is common in both empirical and model networks. +" +Fixing a Broken ELBO," Recent work in unsupervised representation learning has focused on learning +deep directed latent-variable models. 
Fitting these models by maximizing the +marginal likelihood or evidence is typically intractable, thus a common +approximation is to maximize the evidence lower bound (ELBO) instead. However, +maximum likelihood training (whether exact or approximate) does not necessarily +result in a good latent representation, as we demonstrate both theoretically +and empirically. In particular, we derive variational lower and upper bounds on +the mutual information between the input and the latent variable, and use these +bounds to derive a rate-distortion curve that characterizes the tradeoff +between compression and reconstruction accuracy. Using this framework, we +demonstrate that there is a family of models with identical ELBO, but different +quantitative and qualitative characteristics. Our framework also suggests a +simple new method to ensure that latent variable models with powerful +stochastic decoders do not ignore their latent code. +" +Wavelet Shrinkage and Thresholding based Robust Classification for Brain Computer Interface," A macaque monkey is trained to perform two different kinds of tasks, memory +aided and visually aided. In each task, the monkey saccades to eight possible +target locations. A classifier is proposed for direction decoding and task +decoding based on local field potentials (LFP) collected from the prefrontal +cortex. The LFP time-series data is modeled in a nonparametric regression +framework, as a function corrupted by Gaussian noise. It is shown that if the +function belongs to Besov bodies, then the proposed wavelet shrinkage and +thresholding based classifier is robust and consistent. The classifier is then +applied to the LFP data to achieve high decoding performance. The proposed +classifier is also quite general and can be applied for the classification of +other types of time-series data as well, not necessarily brain data. 
+" +SCALAR - Simultaneous Calibration of 2D Laser and Robot's Kinematic Parameters Using Three Planar Constraints," Industrial robots are increasingly used in various applications where the +robot accuracy becomes very important, hence calibrations of the robot's +kinematic parameters and the measurement system's extrinsic parameters are +required. However, the existing calibration approaches are either too +cumbersome or require another expensive external measurement system such as +laser tracker or measurement spinarm. In this paper, we propose SCALAR, a +calibration method to simultaneously improve the kinematic parameters of a +6-DoF robot and the extrinsic parameters of a 2D Laser Range Finder (LRF) which +is attached to the robot. Three flat planes are placed around the robot, and +for each plane the robot moves to several poses such that the LRF's ray +intersect the respective plane. Geometric planar constraints are then used to +optimize the calibration parameters using Levenberg- Marquardt nonlinear +optimization algorithm. We demonstrate through simulations that SCALAR can +reduce the average position and orientation errors of the robot system from +14.6mm and 4.05 degrees to 0.09mm and 0.02 degrees. +" +"Twitter adoption, students perceptions, Big Five personality traits and learning outcome: Lessons learned from 3 case studies"," This study presents the results of the introduction of Twitter in the +educational process. It examines the relationship of the tool s use with the +participants learning outcome through a series of well-organized activities. +Three studies were conducted in the context of 2 academic courses. The +participation in the Twitter activity was voluntarily for the students. In all +3 studies the students who participated in the process had a higher laboratory +grade than the students who did not participated. Students Conscientiousness +and Openness to experience were related to their activity in one study. 
+However, no relationship between the students' personality traits and their +grade was unveiled. Moreover, the students' interventions in the activities are +examined as well as the variation in their attitudes towards social media use +in learning. The implications of the conducted studies are discussed +extensively and a comparison with other related studies is presented. +" +Entropic theory of Gravitation," We construct a manifestly Machian theory of gravitation on the foundation +that information in the universe cannot be destroyed (Landauer's principle). If +no bit of information in the Universe is lost, then the sum of the entropies of +the geometric and the matter fields should be conserved. We propose a local +invariant expression for the entropy of the geometric field and formulate a +variational principle on the entropic functional which produces entropic field +equations. This information-theoretic approach implies that the geometric field +does not exist in a Universe empty of matter, the material entropy is geometry +dependent, matter can exchange information (entropy) with the geometric field +and a quantum condensate can channel energy into the geometric field at a +particular coherent state. The entropic field equations feature a non-intuitive +direct coupling between the material fields and the geometric field, which acts +as an entropy reservoir. Cosmological consequences such as the emergence of the +cosmological constant as well as experimental consequences involving +gravity-quantum condensate interaction are discussed. The energetic aspect of +the theory restores the repertoire of the classical General Relativity up to a +different coupling constant between the fields. +" +Understanding International Migration using Tensor Factorization," Understanding human migration is of great interest to demographers and social +scientists. User-generated digital data has made it easier to study such +patterns at a global scale. 
Geo-coded Twitter data, in particular, has been +shown to be a promising source to analyse large-scale human migration. But +given the scale of these datasets, a lot of manual effort has to be put into +processing and getting actionable insights from this data. +In this paper, we explore the feasibility of using a new tool, tensor +decomposition, to understand trends in global human migration. We model human +migration as a three-mode tensor, consisting of (origin country, destination +country, time of migration) and apply CP decomposition to get meaningful low +dimensional factors. Our experiments on a large Twitter dataset spanning 5 +years and over 100M tweets show that we can extract meaningful migration +patterns. +" +Fiber-dependent deautonomization of integrable 2D mappings and discrete Painlevé equations," It is well known that two-dimensional mappings preserving a rational elliptic +fibration, like the Quispel-Roberts-Thompson mappings, can be deautonomized to +discrete Painlevé equations. However, the dependence of this procedure on the +choice of a particular elliptic fiber has not been sufficiently investigated. +In this paper we establish a way of performing the deautonomization for a pair +of an autonomous mapping and a fiber. Starting from a +single autonomous mapping but varying the type of a chosen fiber, we obtain +different types of discrete Painlevé equations using this deautonomization +procedure. We also introduce a technique for reconstructing a mapping from the +knowledge of its induced action on the Picard group and some additional +geometric data. This technique allows us to obtain factorized expressions of +discrete Painlevé equations, including the elliptic case. Further, by +imposing certain restrictions on such non-autonomous mappings we obtain new and +simple elliptic difference Painlevé equations, including examples whose +symmetry groups do not appear explicitly in Sakai's classification. 
+" +ZnO and ZnO$_{1-x}$ based thin film memristors: The effects of oxygen deficiency and thickness in resistive switching behavior," In this study, direct-current reactive sputtered ZnO and ZnO1-x based thin +film (30 nm and 300 nm in thickness) memristor devices were produced and the +effects of oxygen vacancies and thickness on the memristive characteristics +were investigated. The oxygen deficiency of the ZnO1-x structure was confirmed +by SIMS analyses. The memristive characteristics of both the ZnO and ZnO1-x +devices were determined by time dependent current-voltage (I-V-t) measurements. +The distinctive pinched hysteresis I-V loops of memristors were observed in all +the fabricated devices. The typical homogeneous interface and filamentary types +of memristive behaviors were compared. In addition, conduction mechanisms, +on/off ratios and the compliance current were analyzed. The 30 nm ZnO based +devices with native oxygen vacancies showed the best on/off ratio. All of the +devices exhibited dominant Schottky emissions and weaker Poole-Frenkel +conduction mechanisms. Results suggested that the oxygen deficiency was +responsible for the Schottky emission mechanism. Moreover, the compliance +currents of the devices were related to the decreasing power consumption as the +oxygen vacancies increased. +" +Infrastructure Quality Assessment in Africa using Satellite Imagery and Deep Learning," The UN Sustainable Development Goals allude to the importance of +infrastructure quality in three of its seventeen goals. However, monitoring +infrastructure quality in developing regions remains prohibitively expensive +and impedes efforts to measure progress toward these goals. To this end, we +investigate the use of widely available remote sensing data for the prediction +of infrastructure quality in Africa. We train a convolutional neural network to +predict ground truth labels from the Afrobarometer Round 6 survey using Landsat +8 and Sentinel 1 satellite imagery. 
+Our best models predict infrastructure quality with AUROC scores of 0.881 on +Electricity, 0.862 on Sewerage, 0.739 on Piped Water, and 0.786 on Roads using +Landsat 8. These performances are significantly better than models that +leverage OpenStreetMap or nighttime light intensity on the same tasks. We also +demonstrate that our trained model can accurately make predictions in an unseen +country after fine-tuning on a small sample of images. Furthermore, the model +can be deployed in regions with limited samples to predict infrastructure +outcomes with higher performance than nearest neighbor spatial interpolation. +" +Fréchet Means and Procrustes Analysis in Wasserstein Space," We consider two statistical problems at the intersection of functional and +non-Euclidean data analysis: the determination of a Fréchet mean in the +Wasserstein space of multivariate distributions; and the optimal registration +of deformed random measures and point processes. We elucidate how the two +problems are linked, each being in a sense dual to the other. We first study +the finite sample version of the problem in the continuum. Exploiting the +tangent bundle structure of Wasserstein space, we deduce the Fréchet mean via +gradient descent. We show that this is equivalent to a Procrustes analysis for +the registration maps, thus only requiring successive solutions to pairwise +optimal coupling problems. We then study the population version of the problem, +focussing on inference and stability: in practice, the data are i.i.d. +realisations from a law on Wasserstein space, and indeed their observation is +discrete, where one observes a proxy finite sample or point process. We +construct regularised nonparametric estimators, and prove their consistency for +the population mean, and uniform consistency for the population Procrustes +registration maps. 
+" +Sliding-Window Superposition Coding:Two-User Interference Channels," A low-complexity coding scheme is developed to achieve the rate region of +maximum likelihood decoding for interference channels. As in the classical +rate-splitting multiple access scheme by Grant, Rimoldi, Urbanke, and Whiting, +the proposed coding scheme uses superposition of multiple codewords with +successive cancellation decoding, which can be implemented using standard +point-to-point encoders and decoders. Unlike rate-splitting multiple access, +which is not rate-optimal for multiple receivers, the proposed coding scheme +transmits codewords over multiple blocks in a staggered manner and recovers +them successively over sliding decoding windows, achieving the single-stream +optimal rate region as well as the more general Han--Kobayashi inner bound for +the two-user interference channel. The feasibility of this scheme in practice +is verified by implementing it using commercial channel codes over the two-user +Gaussian interference channel. +" +Introspective Generative Modeling: Decide Discriminatively," We study unsupervised learning by developing introspective generative +modeling (IGM) that attains a generator using progressively learned deep +convolutional neural networks. The generator is itself a discriminator, capable +of introspection: being able to self-evaluate the difference between its +generated samples and the given training data. When followed by repeated +discriminative learning, desirable properties of modern discriminative +classifiers are directly inherited by the generator. IGM learns a cascade of +CNN classifiers using a synthesis-by-classification algorithm. In the +experiments, we observe encouraging results on a number of applications +including texture modeling, artistic style transferring, face modeling, and +semi-supervised learning. 
+" +Counting submodules of a module over a noetherian commutative ring," We count the number of submodules of an arbitrary module over a countable +noetherian commutative ring. We give, along the way, a structural description +of meager modules, which are defined as those that do not have the square of a +simple module as subquotient, and deduce in particular a characterization of +uniserial modules over commutative noetherian rings. +" +Modelling the Influence of Cultural Information on Vision-Based Human Home Activity Recognition," Daily life activities, such as eating and sleeping, are deeply influenced by +a person's culture, hence generating differences in the way a same activity is +performed by individuals belonging to different cultures. We argue that taking +cultural information into account can improve the performance of systems for +the automated recognition of human activities. We propose four different +solutions to the problem and present a system which uses a Naive Bayes model to +associate cultural information with semantic information extracted from still +images. Preliminary experiments with a dataset of images of individuals lying +on the floor, sleeping on a futon and sleeping on a bed suggest that: i) +solutions explicitly taking cultural information into account are more accurate +than culture-unaware solutions; and ii) the proposed system is a promising +starting point for the development of culture-aware Human Activity Recognition +methods. +" +Image classification using local tensor singular value decompositions," From linear classifiers to neural networks, image classification has been a +widely explored topic in mathematics, and many algorithms have proven to be +effective classifiers. However, the most accurate classifiers typically have +significantly high storage costs, or require complicated procedures that may be +computationally expensive. 
We present a novel (nonlinear) classification +approach using truncation of local tensor singular value decompositions (tSVD) +that robustly offers accurate results, while maintaining manageable storage +costs. Our approach takes advantage of the optimality of the representation +under the tensor algebra described to determine to which class an image +belongs. We extend our approach to a method that can determine specific +pairwise match scores, which could be useful in, for example, object +recognition problems where pose/position are different. We demonstrate the +promise of our new techniques on the MNIST data set. +" +Short term unpredictability of high Reynolds number turbulence --- rough dependence on initial data," Short term unpredictability is discovered numerically for high Reynolds +number fluid flows under periodic boundary conditions. Furthermore, the +abundance of the short term unpredictability is also discovered. These +discoveries support our theory that fully developed turbulence is constantly +driven by such short term unpredictability. +" +A Three-Dimensional Mathematical Model of Collagen Contraction," In this paper, we introduce a three-dimensional mathematical model of +collagen contraction with microbuckling based on the two-dimensional model +previously developed by the authors. The model both qualitatively and +quantitatively replicates experimental data including lattice contraction over +a time course of 40 hours for lattices with various cell densities, cell +density profiles within contracted lattices, radial cut angles in lattices, and +cell force propagation within a lattice. The importance of the model lattice +formation and the crucial nature of its connectivity are discussed including +differences with models which do not include microbuckling. The model suggests +that most cells within contracting lattices are engaged in directed motion. 
+" +"Freeze Casting: A Review of Processing, Microstructure and Properties via the Open Data Repository, FreezeCasting.net"," Freeze-casting produces materials with complex, three-dimensional pore +structures which may be tuned during the solidification process. The range of +potential applications of freeze-cast materials is vast, and includes: +structural materials, biomaterials, filtration membranes, pharmaceuticals, and +foodstuffs. Fabrication of materials with application-specific microstructures +is possible via freeze casting, however, the templating process is highly +complex and the underlying principles are only partially understood. Here, we +report the creation of a freeze-casting experimental data repository, which +contains data extracted from ~800 different freeze-casting papers (as of August +2017). These data pertain to variables that link processing conditions to +microstructural characteristics, and finally, mechanical properties. The aim of +this work is to facilitate broad dissemination of relevant data to +freeze-casting researchers, promote better informed experimental design, and +encourage modeling efforts that relate processing conditions to microstructure +formation and material properties. An initial, systematic analysis of these +data is provided and key processing-structure-property relationships posited in +the freeze-casting literature are discussed and tested against the database. +Tools for data visualization and exploration available through the web +interface are also provided. +" +On existence and approximation of solution of nonlinear Hilfer fractional differential equation," This paper gives the existence and uniqueness results for solution of +fractional differential equations with Hilfer derivative. Using some new +techniques and generalizing the restrictive conditions imposed on considered +function, the iterative scheme for uniformly approximating the solution is +established. 
+" +The congruence subgroup problem for a family of branch groups," We construct a family of groups which generalize the Hanoi towers group and +study the congruence subgroup problem for the groups in this family. We show +that unlike the Hanoi towers group, the groups in this generalization are just +infinite and have trivial rigid kernel. We also put strict bounds on the branch +kernel. Additionally, we show that these groups have subgroups of finite index +with non-trivial rigid kernel. The only previously known group where this +kernel is non-trivial is the Hanoi towers group and so this adds infinitely +many new examples. Finally, we show that the topological closures of these +groups have Hausdorff dimension arbitrarily close to 1. +" +Character formulae in category $\mathcal O$ for exceptional Lie superalgebras $D(2|1;ζ)$," We establish character formulae for representations of the one-parameter +family of simple Lie superalgebras $D(2|1;\zeta)$. We provide a complete +description of the Verma flag multiplicities of the tilting modules and the +projective modules in the BGG category $\mathcal O$ of $D(2|1;\zeta)$-modules +of integral weights, for any complex parameter $\zeta$. The composition factors +of all Verma modules in $\mathcal O$ are then obtained. +" +The M33 Synoptic Stellar Survey. II. Mira Variables," We present the discovery of 1847 Mira candidates in the Local Group galaxy +M33 using a novel semi-parametric periodogram technique coupled with a Random +Forest classifier. The algorithms were applied to ~2.4x10^5 I-band light curves +previously obtained by the M33 Synoptic Stellar Survey. We derive preliminary +Period-Luminosity relations at optical, near- & mid-infrared wavelengths and +compare them to the corresponding relations in the Large Magellanic Cloud. +" +Photometry of the long period dwarf nova GY Hya," Although comparatively bright, the cataclysmic variable GY Hya has not +attracted much attention in the past. 
As part of a project to better
+characterize such systems photometrically, we observed light curves in white
+light, each spanning several hours, at Bronberg Observatory, South Africa, in
+2004 and 2005, and at the Observatório do Pico dos Dias, Brazil, in 2014 and
+2016. These data permit us to study orbital modulations and their variations from
+season to season. The orbital period, already known from spectroscopic
+observations of Peters & Thorstensen (2005), is confirmed through strong
+ellipsoidal variations of the mass donor star in the system and the presence of
+eclipses of both components. A refined period of 0.34723972~(6) days and
+revised ephemeris are derived. Seasonal changes in the average orbital light
+curve can qualitatively be explained by variations of the contribution of a hot
+spot to the system light together with changes of the disk radius. The
+amplitude of the ellipsoidal variations and the eclipse contact phases permit us
+to put some constraints on the mass ratio, orbital inclination and the relative
+brightness of the primary and secondary components. There are some indications
+that the disk radius during quiescence, expressed in units of the component
+separation, is smaller than in other dwarf novae.
+"
+Network Structure Explains the Impact of Attitudes on Voting Decisions," Attitudes can have a profound impact on socially relevant behaviours, such as
+voting. However, this effect is not uniform across situations or individuals,
+and it is at present difficult to predict whether attitudes will predict
+behaviour in any given circumstance. Using a network model, we demonstrate that
+(a) more strongly connected attitude networks have a stronger impact on
+behaviour, and (b) within any given attitude network, the most central attitude
+elements have the strongest impact. We test these hypotheses using data on
+voting and attitudes toward presidential candidates in the US presidential
+elections from 1980 to 2012. 
These analyses confirm that the predictive value
+of attitude networks depends almost entirely on their level of connectivity,
+with more central attitude elements having a stronger impact. The impact of
+attitudes on voting behaviour can thus be reliably determined before elections
+take place by using network analyses.
+"
+Causal inference for social network data," We extend recent work by van der Laan (2014) on causal inference for causally
+connected units to more general social network settings. Our asymptotic results
+allow for dependence of each observation on a growing number of other units as
+sample size increases. We are not aware of any previous methods for inference
+about network members in observational settings that allow the number of ties
+per node to increase as the network grows. While previous methods have
+generally implicitly focused on one of two possible sources of dependence among
+social network observations, we allow for both dependence due to contagion, or
+transmission of information across network ties, and for dependence due to
+latent similarities among nodes sharing ties. We describe estimation and
+inference for causal effects that are specifically of interest in social
+network settings.
+"
+Differential equations and the algebra of confluent spherical functions on semisimple Lie groups," We consider the notion of a confluent spherical function on a connected
+semisimple Lie group, $G,$ with finite center and of real rank $1,$ and discuss
+the properties and relationship of its algebra with the well-known Schwartz
+algebra of spherical functions on $G.$
+"
+AMORPH: A statistical program for characterizing amorphous materials by X-ray diffraction," AMORPH utilizes a new Bayesian statistical approach to interpreting X-ray
+diffraction results of samples with both crystalline and amorphous components. 
+AMORPH fits X-ray diffraction patterns with a mixture of narrow and wide +components, simultaneously inferring all of the model parameters and +quantifying their uncertainties. The program simulates background patterns +previously applied manually, providing reproducible results, and significantly +reducing inter- and intra-user biases. This approach allows for the +quantification of amorphous and crystalline materials and for the +characterization of the amorphous component, including properties such as the +centre of mass, width, skewness, and nongaussianity of the amorphous component. +Results demonstrate the applicability of this program for calculating amorphous +contents of volcanic materials and independently modeling their properties in +compositionally variable materials. +" +Unsupervised Adaptation with Domain Separation Networks for Robust Speech Recognition," Unsupervised domain adaptation of speech signal aims at adapting a +well-trained source-domain acoustic model to the unlabeled data from target +domain. This can be achieved by adversarial training of deep neural network +(DNN) acoustic models to learn an intermediate deep representation that is both +senone-discriminative and domain-invariant. Specifically, the DNN is trained to +jointly optimize the primary task of senone classification and the secondary +task of domain classification with adversarial objective functions. In this +work, instead of only focusing on learning a domain-invariant feature (i.e. the +shared component between domains), we also characterize the difference between +the source and target domain distributions by explicitly modeling the private +component of each domain through a private component extractor DNN. The private +component is trained to be orthogonal with the shared component and thus +implicitly increases the degree of domain-invariance of the shared component. 
A +reconstructor DNN is used to reconstruct the original speech feature from the +private and shared components as a regularization. This domain separation +framework is applied to the unsupervised environment adaptation task and +achieved 11.08% relative WER reduction from the gradient reversal layer +training, a representative adversarial training method, for automatic speech +recognition on CHiME-3 dataset. +" +Robust Power System Dynamic State Estimator with Non-Gaussian Measurement Noise: Part I--Theory," This paper develops the theoretical framework and the equations of a new +robust Generalized Maximum-likelihood-type Unscented Kalman Filter (GM-UKF) +that is able to suppress observation and innovation outliers while filtering +out non-Gaussian measurement noise. Because the errors of the real and reactive +power measurements calculated using Phasor Measurement Units (PMUs) follow +long-tailed probability distributions, the conventional UKF provides strongly +biased state estimates since it relies on the weighted least squares estimator. +By contrast, the state estimates and residuals of our GM-UKF are proved to be +roughly Gaussian, allowing the sigma points to reliably approximate the mean +and the covariance matrices of the predicted and corrected state vectors. To +develop our GM-UKF, we first derive a batch-mode regression form by processing +the predictions and observations simultaneously, where the statistical +linearization approach is used. We show that the set of equations so derived +are equivalent to those of the unscented transformation. Then, a robust +GM-estimator that minimizes a convex Huber cost function while using weights +calculated via Projection Statistics (PS's) is proposed. The PS's are applied +to a two-dimensional matrix that consists of serially correlated predicted +state and innovation vectors to detect observation and innovation outliers. 
+These outliers are suppressed by the GM-estimator using the iteratively +reweighted least squares algorithm. Finally, the asymptotic error covariance +matrix of the GM-UKF state estimates is derived from the total influence +function. In the companion paper, extensive simulation results will be shown to +verify the effectiveness and robustness of the proposed method. +" +Equivariant Algebraic Index Theorem," We prove a {\Gamma}-equivariant version of the algebraic index theorem, where +{\Gamma} is a discrete group of automorphisms of a formal deformation of a +symplectic manifold. The particular cases of this result are the algebraic +version of the transversal index theorem related to the theorem of A. Connes +and H. Moscovici for hypoelliptic operators and the index theorem for the +extension of the algebra of pseudodifferential operators by a group of +diffeomorphisms of the underlying manifold due to A. Savin, B. Sternin, E. +Schrohe and D. Perrot. +" +Partially-averaged Navier-Stokes (PANS) Method for Turbulence Simulations: Near-wall Modeling and Smooth-surface Separation Computations," The goal of this dissertation is to investigate the PANS model capabilities +in providing significant improvement over RANS predictions at slightly higher +computational expense and producing LES quality results at significantly lower +computational cost. The objectives of this study are: (i) investigate the model +fidelity at a fixed level of scale resolution (Generation1-PANS/G1-PANS) for +smooth surface separation, (ii) Derive the PANS closure model in regions of +resolution variation (Generation2-PANS/G2-PANS), and (iii) Validate G2-PANS +model for attached and separated flows. The separated flows considered in this +study have been designated as critical benchmark flows by NASA CFD study group. +The key contributions of this dissertation are summarized as follows. 
The +turbulence closure model of varying resolution, G2-PANS, is developed by +deriving mathematically-consistent commutation residues and using energy +conservation principles. The log-layer recovery and accurate computation of +Reynolds stress anisotropy is accomplished by transitioning from steady RANS to +scaled resolved simulations using the G2-PANS model. Finally, several +smooth-separation flows on the NASA turbulence website have been computed with +high degree of accuracy at a significantly reduced computational effort over +LES using the G1-PANS and G2-PANS models. +" +Stochastic Approximation of Smooth and Strongly Convex Functions: Beyond the $O(1/T)$ Convergence Rate," Stochastic approximation (SA) is a classical approach for stochastic convex +optimization. Previous studies have demonstrated that the convergence rate of +SA can be improved by introducing either smoothness or strong convexity +condition. In this paper, we make use of smoothness and strong convexity +simultaneously to boost the convergence rate. Let $\lambda$ be the modulus of +strong convexity, $\kappa$ be the condition number, $F_*$ be the minimal risk, +and $\alpha>1$ be some small constant. First, we demonstrate that, in +expectation, an $O(1/[\lambda T^\alpha] + \kappa F_*/T)$ risk bound is +attainable when $T = \Omega(\kappa^\alpha)$. Thus, when $F_*$ is small, the +convergence rate could be faster than $O(1/[\lambda T])$ and approaches +$O(1/[\lambda T^\alpha])$ in the ideal case. Second, to further benefit from +small risk, we show that, in expectation, an $O(1/2^{T/\kappa}+F_*)$ risk bound +is achievable. Thus, the excess risk reduces exponentially until reaching +$O(F_*)$, and if $F_*=0$, we obtain a global linear convergence. Finally, we +emphasize that our proof is constructive and each risk bound is equipped with +an efficient stochastic algorithm attaining that bound. 
+" +A new approach to hierarchical data analysis: Targeted maximum likelihood estimation for the causal effect of a cluster-level exposure," We often seek to estimate the impact of an exposure naturally occurring or +randomly assigned at the cluster-level. For example, the literature on +neighborhood determinants of health continues to grow. Likewise, community +randomized trials are applied to learn about real-world implementation, +sustainability, and population effects of interventions with proven +individual-level efficacy. In these settings, individual-level outcomes are +correlated due to shared cluster-level factors, including the exposure, as well +as social or biological interactions between individuals. To flexibly and +efficiently estimate the effect of a cluster-level exposure, we present two +targeted maximum likelihood estimators (TMLEs). The first TMLE is developed +under a non-parametric causal model, which allows for arbitrary interactions +between individuals within a cluster. These interactions include direct +transmission of the outcome (i.e. contagion) and influence of one individual's +covariates on another's outcome (i.e. covariate interference). The second TMLE +is developed under a causal sub-model assuming the cluster-level and +individual-specific covariates are sufficient to control for confounding. +Simulations compare the alternative estimators and illustrate the potential +gains from pairing individual-level risk factors and outcomes during +estimation, while avoiding unwarranted assumptions. Our results suggest that +estimation under the sub-model can result in bias and misleading inference in +an observational setting. Incorporating working assumptions during estimation +is more robust than assuming they hold in the underlying causal model. We +illustrate our approach with an application to HIV prevention and treatment. 
+"
+Excitation spectrum and Density Matrix Renormalization Group iterations," We show that, in certain circumstances, exact excitation energies appear as
+locally site-independent (or flat) modes if one records the excitation spectrum
+of the effective Hamiltonian while sweeping through the lattice in the
+variational Matrix Product State formulation of the Density Matrix
+Renormalization Group (DMRG), a remarkable property since the effective
+Hamiltonian is only constructed to target the ground state. Conversely, modes
+that are very flat over several consecutive iterations are systematically found
+to correspond to faithful excitations. We suggest using this property to
+extract accurate information about excited states using the standard ground
+state algorithm. The results are spectacular for critical systems, for which
+the low-energy conformal tower of states can be obtained very accurately at
+essentially no additional cost, as demonstrated by confirming the predictions
+of boundary conformal field theory for two simple minimal models - the
+transverse-field Ising model and the critical three-state Potts model. This
+approach is also very efficient in detecting the quasi-degenerate low-energy
+excitations in topological phases, and in identifying localized excitations in
+systems with impurities. Finally, using the variance of the Hamiltonian as a
+criterion, we assess the accuracy of the resulting Matrix Product State
+representations of the excited states.
+"
+Minimax estimation in linear models with unknown finite alphabet design," We provide minimax theory for joint estimation of $F$ and $\omega$ in linear
+models $Y = F \omega + Z$ where the parameter matrix $\omega$ and the design
+matrix $F$ are unknown but the latter takes values in a known finite set. We
+show that this allows us to separate $F$ and $\omega$ uniquely under weak
+identifiability conditions, a task which is not possible in general. 
These
+assumptions are justified in a variety of applications, ranging from signal
+processing to cancer genetics. We then obtain in the noiseless case, that is,
+$Z = 0$, stable recovery of $F$ and $\omega$ in a neighborhood of $Y$. Based on
+this, we show for Gaussian error matrix $Z$ that the LSE attains minimax rates
+for both the prediction error of $F \omega$ and the estimation error of $F$ and
+$\omega$, separately. Due to the finite alphabet, estimation of $F$ amounts to
+a classification problem, where we show that the classification error
+$P(\hat{F} \neq F)$ decreases exponentially in the dimension of one component
+of $Y$.
+"
+Rooted trees with the same plucking polynomial," In this paper we give a necessary and sufficient condition for two rooted
+trees to have the same plucking polynomial. Furthermore, we give a criterion for a
+sequence of non-negative integers to be realized by a rooted tree.
+"
+CityPersons: A Diverse Dataset for Pedestrian Detection," Convnets have enabled significant progress in pedestrian detection recently,
+but there are still open questions regarding suitable architectures and
+training data. We revisit CNN design and point out key adaptations, enabling
+plain FasterRCNN to obtain state-of-the-art results on the Caltech dataset.
+To achieve further improvement from more and better data, we introduce
+CityPersons, a new set of person annotations on top of the Cityscapes dataset.
+The diversity of CityPersons allows us for the first time to train one single
+CNN model that generalizes well over multiple benchmarks. Moreover, with
+additional training with CityPersons, we obtain top results using FasterRCNN on
+Caltech, improving especially for more difficult cases (heavy occlusion and
+small scale) and providing higher localization quality. 
+" +The Better Half of Selling Separately," Separate selling of two independent goods is shown to yield at least 62% of +the optimal revenue, and at least 73% when the goods satisfy the Myerson +regularity condition. This improves the 50% result of Hart and Nisan (2017, +originally circulated in 2012). +" +Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields," Generative adversarial networks (GANs) evolved into one of the most +successful unsupervised techniques for generating realistic images. Even though +it has recently been shown that GAN training converges, GAN models often end up +in local Nash equilibria that are associated with mode collapse or otherwise +fail to model the target distribution. We introduce Coulomb GANs, which pose +the GAN learning problem as a potential field of charged particles, where +generated samples are attracted to training set samples but repel each other. +The discriminator learns a potential field while the generator decreases the +energy by moving its samples along the vector (force) field determined by the +gradient of the potential field. Through decreasing the energy, the GAN model +learns to generate samples according to the whole target distribution and does +not only cover some of its modes. We prove that Coulomb GANs possess only one +Nash equilibrium which is optimal in the sense that the model distribution +equals the target distribution. We show the efficacy of Coulomb GANs on a +variety of image datasets. On LSUN and celebA, Coulomb GANs set a new state of +the art and produce a previously unseen variety of different samples. +" +Involution on pseudoisotopy spaces and the space of the nonnegatively curved metrics," We prove that certain involutions defined by Vogell and Burghelea-Fiedorowicz +on the rational algebraic K-theory of spaces coincide. 
This gives a way to +compute the positive and negative eigenspaces of the involution on rational +homotopy groups of pseudoisotopy spaces from the involution on rational +$S^{1}$--homology group of the free loop space of a simply-connected manifold. +As an application, we give explicit dimensions of the open manifolds $V$ that +appear in Belegradek-Farrell-Kapovitch's work for which the spaces of complete +nonnegatively curved metrics on $V$ have nontrivial rational homotopy groups. +" +A class of semisimple Hopf algebras acting on quantum polynomial algebras," We construct a class of non-commutative, non-cocommutative, semisimple Hopf +algebras of dimension $2n^2$ and present conditions to define an inner faithful +action of these Hopf algebras on quantum polynomial algebras, providing, in +this way, more examples of semisimple Hopf actions which do not factor through +group actions. Also, under certain condition, we classify the inner faithful +Hopf actions of the Kac-Paljutkin Hopf algebra of dimension $8$, $H_8$, on the +quantum plane. +" +Fusion systems of blocks with nontrivial strongly closed subgroups," In this paper, we find some exotic fusion systems which have non-trivial +strongly closed subgroups, and we prove these fusion systems are also not +realizable by p-blocks of finite groups. +" +Nonequilibrium steady states and transient dynamics of conventional superconductors under phonon driving," We perform a systematic analysis of the influence of phonon driving on the +superconducting Holstein model coupled to heat baths by studying both the +transient dynamics and the nonequilibrium steady state (NESS) in the weak and +strong electron-phonon coupling regimes. Our study is based on the +nonequilibrium dynamical mean-field theory, and for the NESS we present a +Floquet formulation adapted to electron-phonon systems. 
The analysis of the +phonon propagator suggests that the effective attractive interaction can be +strongly enhanced in a parametric resonant regime because of the Floquet side +bands of phonons. While this may be expected to enhance the superconductivity +(SC), our fully self-consistent calculations, which include the effects of +heating and nonthermal distributions, show that the parametric phonon driving +generically results in a suppression or complete melting of the SC order. In +the strong coupling regime, the NESS always shows a suppression of the SC gap, +the SC order parameter and the superfluid density as a result of the driving, +and this tendency is most prominent at the parametric resonance. Using the +real-time nonequilibrium DMFT formalism, we also study the dynamics towards the +NESS, which shows that the heating effect dominates the transient dynamics, and +SC is weakened by the external modulations, in particular at the parametric +resonance. In the weak coupling regime, we find that the SC fluctuations above +the transition temperature are generally weakened under the driving. The +strongest suppression occurs again around the parametric resonances because of +the efficient energy absorption. +" +Optimal Portfolio in Intraday Electricity Markets Modelled by Lévy-Ornstein-Uhlenbeck Processes," We study an optimal portfolio problem designed for an agent operating in +intraday electricity markets. The investor is allowed to trade in a single +risky asset modelling the continuously traded power and aims to maximize the +expected terminal utility of his wealth. We assume a mean-reverting additive +process to drive the power prices. In the case of logarithmic utility, we +reduce the fully non-linear Hamilton-Jacobi-Bellman equation to a linear +parabolic integro-differential equation, for which we explicitly exhibit a +classical solution in two cases of modelling interest. 
The optimal strategy is +given implicitly as the solution of an integral equation, which is possible to +solve numerically as well as to describe analytically. An analysis of two +different approximations for the optimal policy is provided. Finally, we +perform a numerical test by adapting the parameters of a popular electricity +spot price model. +" +Topological phase transitions in small mesoscopic chiral p-wave superconductors," Spin-triplet chiral p-wave superconductivity is typically described by a +two-component order parameter, and as such is prone to unique emergent effects +when compared to the standard single-component superconductors. Here we present +the equilibrium phase diagram for small mesoscopic chiral p-wave +superconducting disks in the presence of magnetic field, obtained by solving +the microscopic Bogoliubov-de Gennes equations self-consistently. In the +ultra-small limit, the cylindrically-symmetric giant-vortex states are the +ground state of the system. However, with increasing sample size, the +cylindrical symmetry is broken as the two components of the order parameter +segregate into domains, and the number of fragmented domain walls between them +characterizes the resulting states. Such domain walls are topological defects +unique for the p-wave order, and constitute a dominant phase in the mesoscopic +regime. Moreover, we find two possible types of domain walls, identified by +their chirality-dependent interaction with the edge states. +" +Squeezing on momentum states for atom interferometry," We propose and analyse a method that allows for the production of squeezed +states of the atomic center-of-mass motion that can be injected into an atom +interferometer. Our scheme employs dispersive probing in a ring resonator on a +narrow transition of strontium atoms in order to provide a collective +measurement of the relative population of two momentum states. 
We show that
+this method is applicable to a Bragg diffraction-based atom interferometer with
+large diffraction orders. The applicability of this technique can be extended
+also to small diffraction orders and large atom numbers by inducing atomic
+transparency at the frequency of the probe field, reaching an interferometer
+phase resolution scaling $\Delta\phi\sim N^{-3/4}$, where $N$ is the atom
+number. We show that for realistic parameters it is possible to obtain a 20 dB
+gain in interferometer phase estimation compared to the Standard Quantum Limit.
+"
+Synthesis of Near-regular Natural Textures," Texture synthesis is widely used in the field of computer graphics, vision,
+and image processing. In the present paper, a texture synthesis algorithm is
+proposed for near-regular natural textures with the help of a representative
+periodic pattern extracted from the input textures using a distance matching
+function. Local texture statistics are then analyzed against global texture
+statistics for non-overlapping windows of the same size as the periodic pattern,
+and a representative periodic pattern is extracted from the image and used for
+texture synthesis, while preserving the global regularity and visual
+appearance. Validation of the algorithm based on experiments with synthetic
+textures whose periodic pattern sizes are known and which contain camouflages /
+defects proves the strength of the algorithm for texture synthesis and its
+application in detection of camouflages / defects in textures.
+"
+The Uncertainty Bellman Equation and Exploration," We consider the exploration/exploitation problem in reinforcement learning.
+For exploitation, it is well known that the Bellman equation connects the value
+at any time-step to the expected value at subsequent time-steps. 
In this paper +we consider a similar \textit{uncertainty} Bellman equation (UBE), which +connects the uncertainty at any time-step to the expected uncertainties at +subsequent time-steps, thereby extending the potential exploratory benefit of a +policy beyond individual time-steps. We prove that the unique fixed point of +the UBE yields an upper bound on the variance of the posterior distribution of +the Q-values induced by any policy. This bound can be much tighter than +traditional count-based bonuses that compound standard deviation rather than +variance. Importantly, and unlike several existing approaches to optimism, this +method scales naturally to large systems with complex generalization. +Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN +performance on 51 out of 57 games in the Atari suite. +" +Syntax-Preserving Belief Change Operators for Logic Programs," Recent methods have adapted the well-established AGM and belief base +frameworks for belief change to cover belief revision in logic programs. In +this study here, we present two new sets of belief change operators for logic +programs. They focus on preserving the explicit relationships expressed in the +rules of a program, a feature that is missing in purely semantic approaches +that consider programs only in their entirety. In particular, operators of the +latter class fail to satisfy preservation and support, two important properties +for belief change in logic programs required to ensure intuitive results. +We address this shortcoming of existing approaches by introducing partial +meet and ensconcement constructions for logic program belief change, which +allow us to define syntax-preserving operators that satisfy preservation and +support. 
Our work is novel in that our constructions not only preserve more +information from a logic program during a change operation than existing ones, +but they also facilitate natural definitions of contraction operators, the +first in the field to the best of our knowledge. +In order to evaluate the rationality of our operators, we translate the +revision and contraction postulates from the AGM and belief base frameworks to +the logic programming setting. We show that our operators fully comply with the +belief base framework and formally state the interdefinability between our +operators. We further propose an algorithm that is based on modularising a +logic program to reduce partial meet and ensconcement revisions or contractions +to performing the operation only on the relevant modules of that program. +Finally, we compare our approach to two state-of-the-art logic program revision +methods and demonstrate that our operators address the shortcomings of one and +generalise the other method. +" +Mapping degrees between spherical $3$-manifolds," Let $D(M,N)$ be the set of integers that can be realized as the degree of a +map between two closed connected orientable manifolds $M$ and $N$ of the same +dimension. For closed $3$-manifolds with $S^3$-geometry $M$ and $N$, every such +degree $deg f\equiv \overline{deg}\psi$ $(|\pi_1(N)|)$ where $0\le +\overline{deg}\psi <|\pi_1(N)|$ and $\overline{deg}\psi$ only depends on the +induced homomorphism $\psi=f_{\pi}$ on the fundamental group. In this paper, we +calculate explicitly the set $\{\overline{deg}\psi\}$ when $\psi$ is surjective +and then we show how to determine $\overline{deg}(\psi)$ for arbitrary +homomorphisms. This leads to the determination of the set $D(M,N)$. +" +Estimating graph parameters with random walks," An algorithm observes the trajectories of random walks over an unknown graph +$G$, starting from the same vertex $x$, as well as the degrees along the +trajectories. 
For all finite connected graphs, one can estimate the number of +edges $m$ up to a bounded factor in +$O\left(t_{\mathrm{rel}}^{3/4}\sqrt{m/d}\right)$ steps, where +$t_{\mathrm{rel}}$ is the relaxation time of the lazy random walk on $G$ and +$d$ is the minimum degree in $G$. Alternatively, $m$ can be estimated in +$O\left(t_{\mathrm{unif}} +t_{\mathrm{rel}}^{5/6}\sqrt{n}\right)$, where $n$ is +the number of vertices and $t_{\mathrm{unif}}$ is the uniform mixing time on +$G$. The number of vertices $n$ can then be estimated up to a bounded factor in +an additional $O\left(t_{\mathrm{unif}}\frac{m}{n}\right)$ steps. Our +algorithms are based on counting the number of intersections of random walk +paths $X,Y$, i.e. the number of pairs $(t,s)$ such that $X_t=Y_s$. This +improves on previous estimates which only consider collisions (i.e., times $t$ +with $X_t=Y_t$). We also show that the complexity of our algorithms is optimal, +even when restricting to graphs with a prescribed relaxation time. Finally, we +show that, given either $m$ or the mixing time of $G$, we can compute the +""other parameter"" with a self-stopping algorithm. +" +RoI-based Robotic Grasp Detection in Object Overlapping Scenes Using Convolutional Neural Network," Grasp detection is an essential skill for widespread use of robots. Recent +works demonstrate the advanced performance of Convolutional Neural Network +(CNN) on robotic grasp detection. However, a significant shortcoming of +existing grasp detection algorithms is that they all ignore the affiliation +between grasps and targets. In this paper, we propose a robotic grasp detection +algorithm based on Region of Interest (RoI) to simultaneously detect targets +and their grasps in object overlapping scenes. Our proposed algorithm uses +Regions of Interest (RoIs) to detect grasps while doing classification and +location regression of targets. 
To train the network, we contribute a multi-object
+grasp dataset much bigger than the Cornell Grasp Dataset, which is based on the
+Visual Manipulation Relationship Dataset. Experimental results demonstrate that
+our algorithm achieves a 24.9% miss rate at 1FPPI and 68.2% mAP with grasp on
+our dataset. Robotic experiments demonstrate that our proposed algorithm can
+help robots grasp a specified target in multi-object scenes with an 84% success
+rate.
+"
+Light Source Point Cluster Selection Based Atmosphere Light Estimation," Atmosphere light value is a highly critical parameter in defogging algorithms
+that are based on an atmosphere scattering model. Any error in atmosphere light
+value will produce a direct impact on the accuracy of scattering computation
+and thus bring chromatic distortion to restored images. To address this
+problem, this paper proposes a method that relies on clustering statistics to
+estimate atmosphere light value. It starts by selecting in the original image
+some potential atmosphere light source points, which are grouped into point
+clusters by means of a clustering technique. From these clusters, a number of
+clusters containing candidate atmosphere light source points are selected; the
+points are then analyzed statistically, and the cluster containing the most
+candidate points is used for estimating atmosphere light value. The mean
+brightness vector of the candidate atmosphere light points in the chosen point
+cluster is taken as the estimate of atmosphere light value, while their
+geometric center in the image is accepted as the location of atmosphere light.
+Experimental results suggest that this clustering statistics method produces
+more accurate atmosphere brightness vectors and light source locations. This
+accuracy translates to, from a subjective perspective, a more natural defogging
+effect on the one hand and to the improvement in various objective image
+quality indicators on the other hand.
+"
+Complete Submodularity Characterization in the Comparative Independent Cascade Model," We study the propagation of comparative ideas or items in social networks. A
+full characterization for submodularity in the comparative independent cascade
+(Com-IC) model of two-idea cascade is given, for competing ideas and
+complementary ideas respectively, with or without reconsideration. We further
+introduce the One-Shot model, where agents show less patience toward ideas, and
+show that in the One-Shot model, only the strongest idea spreads with
+submodularity.
+"
+Physics-guided probabilistic modeling of extreme precipitation under climate change," Earth System Models (ESMs) are the state of the art for projecting the
+effects of climate change. However, longstanding uncertainties in their ability
+to simulate regional and local precipitation extremes and related processes
+inhibit decision making. Stakeholders would be best supported by probabilistic
+projections of changes in extreme precipitation at relevant space-time scales.
+Here we propose an empirical Bayesian model that extends an existing skill- and
+consensus-based weighting framework and test the hypothesis that nontrivial,
+physics-guided measures of ESM skill can help produce a reliable probabilistic
+characterization of climate extremes. Specifically, the model leverages
+knowledge of physical relationships between temperature, atmospheric moisture
+capacity, and extreme precipitation intensity to iteratively weight and combine
+ESMs and estimate probability distributions of return levels. Out-of-sample
+validation shows evidence that the Bayesian model is a sound method for
+deriving reliable probabilistic projections. Beyond precipitation extremes, the
+framework may be a basis for a generic, physics-guided approach to modeling
+probability distributions of climate variables in general, extremes or
+otherwise.
+"
+LISA Detection of Binary Black Holes in the Milky Way Galaxy," Using the black hole merger rate inferred from LIGO, we calculate the
+abundance of tightly bound binary black holes in the Milky Way galaxy. Binaries
+with a small semimajor axis ($\lesssim 10 R_\odot$) originate at larger
+separations through conventional formation mechanisms and evolve as a result of
+gravitational wave emission. We find that LISA could detect them in the Milky
+Way. We also identify possible X-ray signatures of such binaries.
+"
+Compacton solutions and (non)integrability for nonlinear evolutionary PDEs associated with a chain of prestressed granules," We present the results of a study of a nonlinear evolutionary PDE (more
+precisely, a one-parameter family of PDEs) associated with a chain of
+pre-stressed granules. The PDE in question supports solitary waves of
+compression and rarefaction (bright and dark compactons) and can be written in
+Hamiltonian form. We investigate {\em inter alia} integrability properties of
+this PDE and its generalized symmetries and conservation laws.
+For the compacton solutions we perform a stability test followed by a
+numerical study. In particular, we simulate the temporal evolution of a single
+compacton, and the interactions of compacton pairs. The results of numerical
+simulations performed for our model are compared with the numerical evolution
+of corresponding Cauchy data for the discrete model of a chain of pre-stressed
+elastic granules.
+"
+FormuLog: Datalog for static analysis involving logical formulae," Datalog has become a popular language for writing static analyses. Because
+Datalog is very limited, some implementations of Datalog for static analysis
+have extended it with new language features. However, even with these features
+it is hard or impossible to express a large class of analyses because they use
+logical formulae to represent program state.
FormuLog fills this gap by
+extending Datalog to represent, manipulate, and reason about logical formulae.
+We have used FormuLog to implement declarative versions of symbolic execution
+and abstract model checking, analyses previously out of the scope of
+Datalog-based languages. While this paper focuses on the design of FormuLog and
+one of the analyses we have implemented in it, it also touches on a prototype
+implementation of the language and identifies performance optimizations that we
+believe will be necessary to scale FormuLog to real-world static analysis
+problems.
+"
+A Double Parametric Bootstrap Test for Topic Models," Non-negative matrix factorization (NMF) is a technique for finding latent
+representations of data. The method has been applied to corpora to construct
+topic models. However, NMF has likelihood assumptions which are often violated
+by real document corpora. We present a double parametric bootstrap test for
+evaluating the fit of an NMF-based topic model based on the duality of the KL
+divergence and Poisson maximum likelihood estimation. The test correctly
+identifies whether a topic model based on an NMF approach yields reliable
+results in simulated and real data.
+"
+Heat flows inferred from a Parker's-like formula for stable or quasi-stable continents," Surface heat flow is a key parameter for the geothermal structure, rheology,
+and hence the dynamics of continents. However, the coverage of heat flow
+measurements is still poor in many continental areas. By transforming the
+stable nonlinear heat conduction equation into a Poisson one, we develop a
+method to infer surface heat flow for a stable or quasi-stable continent from a
+Parker's-like formula. This formula provides the relationship between the
+Fourier transform of surface heat flow and the sum of the Fourier transform of
+the powers of geometry for the heat production (HP) interface in the
+continental lithosphere.
Once the interface geometry is known, the one- to
+three-dimensional distribution of the surface heat flow can be calculated
+accurately by this formula. As a case study, we estimate the three-dimensional
+surface heat flows for the Ordos geological block and its adjacent areas in
+China on a $1^\circ \times 1^\circ$ grid based on a simple layered constant HP
+model. Compared to the measurements, most relative errors of the inferred heat
+flows are less than 20\%, showing that this method is a favorable way to
+estimate surface heat flow for stable or quasi-stable continental regions where
+measurements are rare or absent.
+"
+Real-time 3D Reconstruction on Construction Site using Visual SLAM and UAV," 3D reconstruction can be used as a platform to monitor the performance of
+activities on a construction site, such as construction progress monitoring,
+structure inspection and post-disaster rescue. Compared to other sensors, RGB
+images have the advantages of low cost, rich texture, and easy implementation,
+and have been used as the primary method for 3D reconstruction in the
+construction industry. However, image-based 3D reconstruction always requires
+extended time to acquire and/or to process the image data, which limits its
+application to time-critical projects. Recent progress in Visual Simultaneous
+Localization and Mapping (SLAM) makes it possible to reconstruct a 3D map of a
+construction site in real time. Integrated with an Unmanned Aerial Vehicle
+(UAV), obstacle areas that are inaccessible to ground equipment can also be
+sensed. Despite these advantages of visual SLAM and UAV, until now, this
+technique has not been fully investigated on construction sites. Therefore, the
+objective of this research is to present a pilot study of using visual SLAM and
+UAV for real-time construction site reconstruction.
The system architecture and the
+experimental setup are introduced, and the preliminary results and the
+potential applications using Visual SLAM and UAV on construction sites are
+discussed.
+"
+The Space of Transferable Adversarial Examples," Adversarial examples are maliciously perturbed inputs designed to mislead
+machine learning (ML) models at test-time. They often transfer: the same
+adversarial example fools more than one model.
+In this work, we propose novel methods for estimating the previously unknown
+dimensionality of the space of adversarial inputs. We find that adversarial
+examples span a contiguous subspace of large (~25) dimensionality. Adversarial
+subspaces with higher dimensionality are more likely to intersect. We find that
+for two different models, a significant fraction of their subspaces is shared,
+thus enabling transferability.
+In the first quantitative analysis of the similarity of different models'
+decision boundaries, we show that these boundaries are actually close in
+arbitrary directions, whether adversarial or benign. We conclude by formally
+studying the limits of transferability. We derive (1) sufficient conditions on
+the data distribution that imply transferability for simple model classes and
+(2) examples of scenarios in which transfer does not occur. These findings
+indicate that it may be possible to design defenses against transfer-based
+attacks, even for models that are vulnerable to direct attacks.
+"
+Sensitivity of the entanglement spectrum to boundary conditions as a characterization of the phase transition from delocalization to localization," The sensitivity of the entanglement Hamiltonian spectrum to boundary
+conditions is considered as a phase detection parameter for the
+delocalized-localized phase transition.
By employing one-dimensional models that undergo a
+delocalized-localized phase transition, we study the shift in the entanglement
+energies and the shift in the entanglement entropy when we change boundary
+conditions from periodic to anti-periodic. Specifically, we show that both
+these quantities exhibit a change of several orders of magnitude at the
+transition point in the models considered. Therefore, this shift can be used to
+indicate the phase transition points in the models. We also show that both
+these quantities can be used to determine \emph{mobility edges} separating
+localized and delocalized states.
+"
+"Radial metal abundance profiles in the intra-cluster medium of cool-core galaxy clusters, groups, and ellipticals"," The hot intra-cluster medium (ICM) permeating galaxy clusters and groups is
+not pristine, as it is continuously enriched by metals synthesised in Type Ia
+(SNIa) and core-collapse (SNcc) supernovae since the major epoch of star
+formation (z ~ 2-3). The cluster/group enrichment history and the mechanisms
+responsible for releasing and mixing the metals can be probed via the radial
+distribution of SNIa and SNcc products within the ICM. In this paper, we use
+deep XMM-Newton/EPIC observations from a sample of 44 nearby cool-core galaxy
+clusters, groups, and ellipticals (CHEERS) to constrain the average radial O,
+Mg, Si, S, Ar, Ca, Fe, and Ni abundance profiles. The radial distributions of
+all these elements, averaged over a large sample for the first time, represent
+the best constrained profiles available currently. We find an overall decrease
+of the Fe abundance with radius out to ~$0.9 r_{500}$ and ~$0.6 r_{500}$ for
+clusters and groups, respectively, in good agreement with predictions from the
+most recent hydrodynamical simulations.
The average radial profiles of all the
+other elements (X) are also centrally peaked and, when rescaled to their
+average central X/Fe ratios, follow the Fe profile well out to at least
+~0.5$r_{500}$. Using two sets of SNIa and SNcc yield models that reproduce the
+X/Fe abundance pattern in the core well, we find that, as predicted by recent
+simulations, the relative contribution of SNIa (SNcc) to the total ICM
+enrichment is consistent with being uniform at all radii, both for clusters and
+groups. In addition to implying that the central metal peak is balanced between
+SNIa and SNcc, our results suggest that the enriching SNIa and SNcc products
+must share the same origin, and that the delay between the bulk of the SNIa and
+SNcc explosions must be shorter than the timescale necessary to diffuse out the
+metals.
+"
+Edge states in dynamical superlattices," We address edge states and rich localization regimes available in
+one-dimensional (1D) dynamically modulated superlattices, both theoretically
+and numerically. In contrast to conventional lattices with straight waveguides,
+the quasi-energy band of an infinite modulated superlattice is periodic not
+only in the transverse Bloch momentum, but it also changes periodically with
+increasing coupling strength between waveguides. Due to the collapse of
+quasi-energy bands, dynamical superlattices admit the known dynamical
+localization effect. If, however, such a lattice is truncated, periodic
+longitudinal modulation leads to the appearance of specific edge states that
+exist within certain periodically spaced intervals of coupling constants. We
+discuss unusual transport properties of such truncated superlattices and
+illustrate different excitation regimes and the enhanced robustness of edge
+states in them, which is associated with the topology of the quasi-energy band.
+"
+"On the Monitoring of Decentralized Specifications: Semantics, Properties, Analysis, and Simulation"," We define two complementary approaches to monitor decentralized systems. The
+first relies on those with a centralized specification, i.e., when the
+specification is written for the behavior of the entire system. To do so, our
+approach introduces a data structure that i) keeps track of the execution of an
+automaton, ii) has predictable parameters and size, and iii) guarantees strong
+eventual consistency. The second approach defines decentralized specifications
+wherein multiple specifications are provided for separate parts of the system.
+We study two properties of decentralized specifications pertaining to
+monitorability and compatibility between specification and architecture. We
+also present a general algorithm for monitoring decentralized specifications.
+We map three existing algorithms to our approaches and provide a framework for
+analyzing their behavior. Furthermore, we introduce THEMIS, a framework for
+designing such decentralized algorithms and simulating their behavior. We show
+the usage of THEMIS to compare multiple algorithms and verify the trends
+predicted by the analysis by studying two scenarios: a synthetic benchmark and
+a real example.
+"
+Morphology and the Color-Mass Diagram as Clues to Galaxy Evolution at z~1," We study the significance of mergers in the quenching of star formation in
+galaxies at z~1 by examining their color-mass distributions for different
+morphology types. We perform two-dimensional light profile fits to GOODS iz
+images of ~5000 galaxies and X-ray selected active galactic nucleus (AGN) hosts
+in the CANDELS/GOODS-north and south fields in the redshift range 0.71$,
+and by D.K.Sanadze, Sh.V.Kheladze in Orlicz class.
Note that the presence of two
+or more ""free"" components in the index $n$ (as follows from the results by
+Ch. Fefferman (1971)) does not guarantee the convergence almost everywhere of
+$S_n(x;f)$ for $N\geq 3$ even in the class of continuous functions.
+"
+"Emergent phases in iron pnictides: Double-Q antiferromagnetism, charge order and enhanced nematic correlations"," Electron correlations produce a rich phase diagram in the iron pnictides.
+Earlier theoretical studies on the correlation effect demonstrated how quantum
+fluctuations weaken and concurrently suppress a $C_2$-symmetric single-Q
+antiferromagnetic order and a nematic order. Here we examine the emergent
+phases near the quantum phase transition. For a $C_4$-symmetric collinear
+double-Q antiferromagnetic order, we show that it is accompanied by both a
+charge order and an enhanced nematic susceptibility. Our results provide
+understanding for several intriguing recent experiments in hole-doped iron
+arsenides, and bring out common physics that underlies the different magnetic
+phases of various iron-based superconductors.
+"
+Synergistic effects in threshold models on networks," Network structure can have significant effects on the propagation of
+diseases, memes, and information on social networks. Such effects depend on the
+specific type of dynamical process that affects the nodes and edges of a
+network, and it is important to develop tractable models of spreading processes
+on networks to explore how network structure affects dynamics. In this paper,
+we incorporate the idea of \emph{synergy} into a two-state (""active"" or
+""passive"") threshold model of social influence on networks. Our model's update
+rule is deterministic, and the influence of each meme-carrying (i.e., active)
+neighbor can --- depending on a parameter --- either be enhanced or inhibited
+by an amount that depends on the number of active neighbors of a node.
Such a
+synergistic system models social behavior in which the willingness to adopt
+either accelerates or saturates depending on the number of neighbors who have
+adopted that behavior. We illustrate that the synergy parameter in our model
+has a crucial effect on system dynamics, as it determines whether degree-$k$
+nodes are possible or impossible to activate. We simulate synergistic meme
+spreading on both random-graph models and networks constructed from empirical
+data. Using a local-tree approximation, we examine the spreading of synergistic
+memes and find good agreement on all but one of the networks on which we
+simulate spreading. We find for any network and for a broad family of
+synergistic models that one can predict which synergy-parameter values allow
+degree-$k$ nodes to be activated.
+"
+Bayesian stochastic blockmodeling," This chapter provides a self-contained introduction to the use of Bayesian
+inference to extract large-scale modular structures from network data, based on
+the stochastic blockmodel (SBM), as well as its degree-corrected and
+overlapping generalizations. We focus on nonparametric formulations that allow
+their inference in a manner that prevents overfitting, and enables model
+selection. We discuss aspects of the choice of priors, in particular how to
+avoid underfitting via increased Bayesian hierarchies, and we contrast the task
+of sampling network partitions from the posterior distribution with finding the
+single point estimate that maximizes it, while describing efficient algorithms
+to perform either one. We also show how inferring the SBM can be used to
+predict missing and spurious links, and shed light on the fundamental
+limitations of the detectability of modular structures in networks.
+"
+Hardy Spaces ($0<p<\infty$)," For a domain $\Omega_+$ above a curve $\Gamma$, we consider the Hardy space
+$H^p(\Omega_+)$ of holomorphic functions $F$ on $\Omega_+$ satisfying
+$\sup_{\tau>0}(\int_{\Gamma} |F(\zeta+\mathrm{i}\tau)|^p
+|\,\mathrm{d}\zeta|)^{\frac1p}< \infty$.
We denote the conformal mapping from
+$\mathbb{C}_+$ onto $\Omega_+$ as $\Phi$, and prove that $H^p(\Omega_+)$ is
+isomorphic to $H^p(\mathbb{C}_+)$, the classical Hardy space on the upper half
+plane~$\mathbb{C}_+$, under the mapping $T\colon F\to F(\Phi)\cdot
+(\Phi')^{\frac1p}$. Moreover, $T$ and $T^{-1}$ are both bounded. We also prove
+that if $F(w)\in H^p(\Omega_+)$, then $F(w)$ has a non-tangential boundary
+limit $F(\zeta)$ a.e. on $\Gamma$, and, if $1\leqslant p< \infty$, $F(w)$ is
+the Cauchy integral on $\Gamma$ of $F(\zeta)$.
+"
+A Crevice on the Crane Beach: Finite-Degree Predicates," First-order logic (FO) over words is shown to be equiexpressive with FO
+equipped with a restricted set of numerical predicates, namely the order, a
+binary predicate MSB$_0$, and the finite-degree predicates: FO[Arb] = FO[<,
+MSB$_0$, Fin].
+The Crane Beach Property (CBP), introduced more than a decade ago, is true of
+a logic if all the expressible languages admitting a neutral letter are
+regular.
+Although it is known that FO[Arb] does not have the CBP, it is shown here
+that the (strong form of the) CBP holds for both FO[<, Fin] and FO[<, MSB$_0$].
+Thus FO[<, Fin] exhibits a form of locality and the CBP, and can still express
+a wide variety of languages, while being one simple predicate away from the
+expressive power of FO[Arb]. The counting ability of FO[<, Fin] is studied as
+an application.
+"
+Dirichlet's and Thomson's principles for non-selfadjoint elliptic operators with application to non-reversible metastable diffusion processes," We present two variational formulae for the capacity in the context of
+non-selfadjoint elliptic operators. The minimizers of these variational
+problems are expressed as solutions of boundary-value elliptic equations. We
+use these principles to provide a sharp estimate for the transition times
+between two different wells for non-reversible diffusion processes.
This
+estimate permits us to describe the metastable behavior of the system.
+"
+Real-space investigation of short-range magnetic correlations in fluoride pyrochlores NaCaCo$_2$F$_7$ and NaSrCo$_2$F$_7$ with magnetic pair distribution function analysis," We present time-of-flight neutron total scattering and polarized neutron
+scattering measurements of the magnetically frustrated compounds
+NaCaCo$_2$F$_7$ and NaSrCo$_2$F$_7$, which belong to a class of recently
+discovered pyrochlore compounds based on transition metals and fluorine. The
+magnetic pair distribution function (mPDF) technique is used to analyze and
+model the total scattering data in real space. We find that a
+previously-proposed model of short-range XY-like correlations with a length
+scale of 10-15 \AA, combined with nearest-neighbor collinear antiferromagnetic
+correlations, accurately describes the mPDF data at low temperature, confirming
+the magnetic ground state in these materials. This model is further verified by
+the polarized neutron scattering data. From an analysis of the temperature
+dependence of the mPDF and polarized neutron scattering data, we find that
+short-range correlations persist on the nearest-neighbor length scale up to 200
+K, approximately two orders of magnitude higher than the spin freezing
+temperatures of these compounds. These results highlight the opportunity
+presented by these new pyrochlore compounds to study the effects of geometric
+frustration at relatively high temperatures, while also advancing the mPDF
+technique and providing a novel opportunity to investigate a genuinely
+short-range-ordered magnetic ground state directly in real space.
+"
+Embedding is not Cipher: Understanding the risk of embedding leakages," Machine Learning (ML) has already been integrated into all kinds of systems,
+helping developers to solve problems with even higher accuracy than human
+beings.
However, when integrating ML models into a system, developers may
+accidentally take too little care with the outputs of ML models, mainly because
+of their unfamiliarity with ML and AI, resulting in severe consequences like
+hurting data owners' privacy. In this work, we focus on understanding the risks
+of abusing embeddings of ML models, an important and popular way of using ML.
+To show the consequences, we reveal several kinds of channels through which
+embeddings are accidentally leaked. As our study shows, a face verification
+system deployed by a government organization that leaks only the distance to
+authentic users allows an attacker to exactly recover the embedding of the
+verifier's pre-installed photo. Further, as we discovered, with the leaked
+embedding, attackers can easily recover the input photo with negligible quality
+losses, indicating devastating consequences for users' privacy. This is
+achieved with our devised GAN-like model, which showed a 93.65% success rate on
+a popular face embedding model under the black-box assumption.
+"
+Anatomy of an online misinformation network," Massive amounts of fake news and conspiratorial content have spread over
+social media before and after the 2016 US Presidential Elections despite
+intense fact-checking efforts. How do the spread of misinformation and
+fact-checking compete? What are the structural and dynamic characteristics of
+the core of the misinformation diffusion network, and who are its main
+purveyors? How to reduce the overall amount of misinformation? To explore these
+questions we built Hoaxy, an open platform that enables large-scale, systematic
+studies of how misinformation and fact-checking spread and compete on Twitter.
+Hoaxy filters public tweets that include links to unverified claims or
+fact-checking articles. We perform k-core decomposition on a diffusion network
+obtained from two million retweets produced by several hundred thousand
+accounts over the six months before the election.
As we move from the periphery
+to the core of the network, fact-checking nearly disappears, while social bots
+proliferate. The number of users in the main core reaches equilibrium around
+the time of the election, with limited churn and increasingly dense
+connections. We conclude by quantifying how effectively the network can be
+disrupted by penalizing the most central nodes. These findings provide a first
+look at the anatomy of a massive online misinformation diffusion network.
+"
+Injection Bucket Jitter Compensation Using Phase Lock System At Fermilab Booster," The extraction bucket position in the Fermilab Booster is controlled with a
+cogging process that involves the comparison of the Booster RF count and the
+Recycler Ring revolution marker. A one RF bucket jitter in the extraction
+bucket position results from the variability of the process that phase matches
+the Booster to the Recycler. However, the new slow phase lock process used to
+lock the frequency and phase of the Booster RF to the Recycler RF has been made
+digital and programmable and has been modified to correct the extraction notch
+position. The beam loss at the Recycler injection has been reduced by 20%. Beam
+studies and the phase lock system will be discussed in this paper.
+"
+Collaborative Deep Learning in Fixed Topology Networks," There is significant recent interest in parallelizing deep learning
+algorithms in order to handle the enormous growth in data and model sizes.
+While most advances focus on model parallelization and engaging multiple
+computing agents via a central parameter server, the aspect of data
+parallelization along with decentralized computation has not been explored
+sufficiently. In this context, this paper presents a new consensus-based
+distributed SGD (CDSGD) (and its momentum variant, CDMSGD) algorithm for
+collaborative deep learning over fixed topology networks that enables data
+parallelization as well as decentralized computation.
Such a framework can be extremely useful for learning agents with +access to only local/private data in a communication constrained environment. +We analyze the convergence properties of the proposed algorithm with strongly +convex and nonconvex objective functions with fixed and diminishing step sizes +using concepts of Lyapunov function construction. We demonstrate the efficacy +of our algorithms in comparison with the baseline centralized SGD and the +recently proposed federated averaging algorithm (that also enables data +parallelism) based on benchmark datasets such as MNIST, CIFAR-10 and CIFAR-100. +" +Learning Hidden Markov Models from Pairwise Co-occurrences with Application to Topic Modeling," We present a new algorithm for identifying the transition and emission +probabilities of a hidden Markov model (HMM) from the emitted data. +Expectation-maximization becomes computationally prohibitive for long +observation records, which are often required for identification. The new +algorithm is particularly suitable for cases where the available sample size is +large enough to accurately estimate second-order output probabilities, but not +higher-order ones. We show that if one is only able to obtain a reliable +estimate of the pairwise co-occurrence probabilities of the emissions, it is +still possible to uniquely identify the HMM if the emission probability is +\emph{sufficiently scattered}. We apply our method to hidden topic Markov +modeling, and demonstrate that we can learn topics with higher quality if +documents are modeled as observations of HMMs sharing the same emission (topic) +probability, compared to the simple but widely used bag-of-words model. +" +An explicit formula for Szego kernels on the Heisenberg group," In this paper, we give an explicit formula for the Szego kernel for $(0, q)$ +forms on the Heisenberg group $H_{n+1}$. 
+"
+Necessary and sufficient conditions for consistent root reconstruction in Markov models on trees," We establish necessary and sufficient conditions for consistent root
+reconstruction in continuous-time Markov models with countable state space on
+bounded-height trees. Here a root state estimator is said to be consistent if
+the probability that it returns the true root state converges to 1 as the
+number of leaves tends to infinity. We also derive quantitative bounds on the
+error of reconstruction. Our results answer a question of Gascuel and Steel and
+have implications for ancestral sequence reconstruction in a classical
+evolutionary model of nucleotide insertion and deletion.
+"
+Perfectly Controllable Multi-Agent Networks," This note investigates how to design topology structures to ensure the
+controllability of multi-agent networks (MASs) under any selection of leaders.
+We put forward a concept of perfect controllability, which means that a
+multi-agent system is controllable no matter how the leaders are chosen.
+In this situation, both the number and the locations of leader agents are
+arbitrary. A necessary and sufficient condition is derived for the perfect
+controllability. Moreover, a step-by-step design procedure is proposed by which
+topologies are constructed and are proved to be perfectly controllable. The
+principle of the proposed design method is interpreted by schematic diagrams
+along with the corresponding topology structures from simple to complex. We
+show that the results are valid for any number and any location of leaders.
+Both the construction process and the corresponding topology structures are
+clearly outlined.
+"
+Origin of the fundamental plane of elliptical galaxies in the Coma Cluster without fine-tuning," Thirty years after the discovery of the fundamental plane, explanations of
+the tilt of the fundamental plane with respect to the virial plane still suffer
+from the need for fine-tuning.
In this paper, we try to explore the origin of +this tilt from the perspective of modified Newtonian dynamics (MOND) by +applying the 16 Coma galaxies available in Thomas et al.[1]. Based on the mass +models that can reproduce de Vaucouleurs' law closely, we find that the tilt of +the traditional fundamental plane is naturally explained by the simple form of +the MONDian interpolating function, if we assume a well motivated choice of +anisotropic velocity distribution, and adopt the Kroupa or Salpeter stellar +mass-to-light ratio. Our analysis does not necessarily rule out a varying +stellar mass-to-light ratio. +" +Dopants Promoting Ferroelectricity in Hafnia: Insights From A Comprehensive Chemical Space Exploration," Although dopants have been extensively employed to promote ferroelectricity +in hafnia films, their role in stabilizing the responsible ferroelectric +non-equilibrium Pca21 phase is not well understood. In this work, using first +principles computations, we investigate the influence of nearly 40 dopants on +the phase stability in bulk hafnia to identify dopants that can favor formation +of the polar Pca21 phase. Although no dopant was found to stabilize this polar +phase as the ground state, suggesting that dopants alone cannot induce +ferroelectricity in hafnia, Ca, Sr, Ba, La, Y and Gd were found to +significantly lower the energy of the polar phase with respect to the +equilibrium monoclinic phase. These results are consistent with the empirical +measurements of large remnant polarization in hafnia films doped with these +elements. Additionally, clear chemical trends of dopants with larger ionic +radii and lower electronegativity favoring the polar Pca21 phase in hafnia were +identified. For this polar phase, an additional bond between the dopant cation +and the 2nd nearest oxygen neighbor was identified as the root-cause of these +trends. 
Further, trivalent dopants (Y, La, and Gd) were revealed to stabilize
+the polar Pca21 phase at lower strains when compared to divalent dopants (Sr
+and Ba). Based on these insights, we predict the lanthanide series metals,
+the lower half of alkaline earth metals (Ca, Sr and Ba) and Y to be the most
+suitable dopants to promote ferroelectricity in hafnia.
+"
+Wave propagation with irregular dissipation and applications to acoustic problems and shallow waters," In this paper we consider an acoustic problem of wave propagation through a
+discontinuous medium. The problem is reduced to the dissipative wave equation
+with distributional dissipation. We show that this problem has a so-called very
+weak solution, we analyse its properties and illustrate the theoretical results
+through some numerical simulations by approximating the solutions to the full
+dissipative model for a particular synthetic piecewise continuous medium. In
+particular, we discover numerically a very interesting phenomenon of the
+appearance of a new wave at the singular point. For the acoustic problem this
+can be interpreted as an echo effect at the discontinuity interface of the
+medium.
+"
+Superconductivity vs quantum criticality: effects of thermal fluctuations," We study the interplay between superconductivity and non-Fermi liquid
+behavior of a Fermi surface coupled to a massless $SU(N)$ matrix boson near the
+quantum critical point. The presence of thermal infrared singularities in both
+the fermionic self-energy and the gap equation invalidates the Eliashberg
+approximation, and makes the quantum-critical pairing problem qualitatively
+different from that at zero temperature. Taking the large $N$ limit, we solve
+the gap equation beyond the Eliashberg approximation, and obtain the
+superconducting temperature $T_c$ as a function of $N$. Our results show an
+anomalous scaling between the zero-temperature gap and $T_c$.
For $N$ greater
+than a critical value, we find that $T_c$ vanishes with a
+Berezinskii-Kosterlitz-Thouless scaling behavior, and the system retains
+non-Fermi liquid behavior down to zero temperature. This confirms and extends
+previous renormalization-group analyses done at $T=0$, and provides a
+controlled example of a naked quantum critical point. We discuss the crucial
+role of thermal fluctuations in relating our results with earlier work where
+superconductivity always develops due to the special role of the first
+Matsubara frequency.
+"
+Ensemble Framework for Real-time Decision Making," This paper introduces a new framework for real-time decision making in video
+games. An Ensemble agent is a compound agent composed of multiple agents, each
+with its own tasks or goals to achieve. Usually when dealing with real-time
+decision making, reactive agents are used; that is, agents that return a
+decision based on the current state. While reactive agents are very fast, most
+games require more than just a rule-based agent to achieve good results.
+Deliberative agents---agents that use a forward model to search future
+states---are very useful in games with no hard time limit, such as Go or
+Backgammon, but generally take too long for real-time games. The Ensemble
+framework addresses this issue by allowing the agent to be both deliberative
+and reactive at the same time. This is achieved by breaking up the game-play
+into logical roles and having highly focused components for each role, with
+each component disregarding anything outwith its own role. Reactive agents can
+be used where a reactive agent is suited to the role, and where a deliberative
+approach is required, branching is kept to a minimum by the removal of all
+extraneous factors, enabling an informed decision to be made within a much
+smaller time-frame. An Arbiter is used to combine the component results,
+allowing high performing agents to be created from simple, efficient
+components.
+"
+Reliability and Market Price of Energy in the Presence of Intermittent and Non-Dispatchable Renewable Energies," The intermittent nature of the renewable energies increases the operation
+costs of conventional generators. As the share of energy supplied by renewable
+sources increases, these costs also increase. In this paper, we quantify these
+costs by developing a market clearing price of energy in the presence of
+renewable energy and congestion constraints. We consider an electricity market
+where generators propose their asking price per unit of energy to an
+independent system operator (ISO). The ISO solves an optimization problem to
+dispatch energy from each generator to minimize the total cost of energy
+purchased on behalf of the consumers.
+To ensure that the generators are able to meet the load within a desired
+confidence level, we incorporate the notion of load variance using the
+Conditional Value-at-Risk (CVAR) measure in an electricity market and we derive
+the amount of committed power and market clearing price of energy as a function
+of CVAR. It is shown that a higher penetration of renewable energies may
+increase the committed power, market clearing price of energy and consumer cost
+of energy due to renewable generation uncertainties. We also obtain an
+upper-bound on the amount that congestion constraints can affect the committed
+power. We present descriptive simulations to illustrate the impact of renewable
+energy penetration and reliability levels on committed power by the
+non-renewable generators, difference between the dispatched and committed
+power, market price of energy and profit of renewable and non-renewable
+generators.
+"
+Does Neural Machine Translation Benefit from Larger Context?," We propose a neural machine translation architecture that models the
+surrounding text in addition to the source sentence.
These models lead to +better performance, both in terms of general translation quality and pronoun +prediction, when trained on small corpora, although this improvement largely +disappears when trained with a larger corpus. We also discover that +attention-based neural machine translation is well suited for pronoun +prediction and compares favorably with other approaches that were specifically +designed for this task. +" +Heat asymptotics for nonminimal Laplace type operators and application to noncommutative tori," Let $P$ be a Laplace type operator acting on a smooth hermitean vector bundle +$V$ of fiber $\mathbb{C}^N$ over a compact Riemannian manifold given locally by +$P= - [g^{\mu\nu} u(x)\partial_\mu\partial_\nu + v^\nu(x)\partial_\nu + w(x)]$ +where $u,\,v^\nu,\,w$ are $M_N(\mathbb{C})$-valued functions with $u(x)$ +positive and invertible. For any $a \in \Gamma(\text{End}(V))$, we consider the +asymptotics $\text{Tr} (a e^{-tP}) \underset{t \downarrow 0^+}{\sim} +\,\sum_{r=0}^\infty a_r(a, P)\,t^{(r-d)/2}$ where the coefficients $a_r(a, P)$ +can be written locally as $a_r(a, P)(x) = \text{tr}[a(x) \mathcal{R}_r(x)]$. +The computation of $\mathcal{R}_2$ is performed opening the opportunity to +calculate the modular scalar curvature for noncommutative tori. +" +DLVM: A modern compiler infrastructure for deep learning systems," Deep learning software demands reliability and performance. However, many of +the existing deep learning frameworks are software libraries that act as an +unsafe DSL in Python and a computation graph interpreter. We present DLVM, a +design and implementation of a compiler infrastructure with a linear algebra +intermediate representation, algorithmic differentiation by adjoint code +generation, domain-specific optimizations and a code generator targeting GPU +via LLVM. 
Designed as a modern compiler infrastructure inspired by LLVM, DLVM
+is more modular and more generic than existing deep learning compiler
+frameworks, and supports tensor DSLs with high expressivity. With our
+prototypical staged DSL embedded in Swift, we argue that the DLVM system
+enables a form of modular, safe and performant frameworks for deep learning.
+"
+k-Space Deep Learning for Parallel MRI: Application to Time-Resolved MR Angiography," Time-resolved angiography with interleaved stochastic trajectories (TWIST)
+has been widely used for dynamic contrast enhanced MRI (DCE-MRI). To achieve
+highly accelerated acquisitions, TWIST combines the periphery of the k-space
+data from several adjacent frames to reconstruct one temporal frame. However,
+this view-sharing scheme limits the true temporal resolution of TWIST.
+Moreover, the k-space sampling patterns have been specially designed for a
+specific generalized autocalibrating partial parallel acquisition (GRAPPA)
+factor so that it is not possible to reduce the number of view-sharing once the
+k-data is acquired. To address these issues, this paper proposes a novel
+k-space deep learning approach for parallel MRI. In particular, we have
+designed our neural network so that accurate k-space interpolations are
+performed simultaneously for multiple coils by exploiting the redundancies
+along the coils and images. Reconstruction results using in vivo TWIST data set
+confirm that the proposed method can immediately generate high-quality
+reconstruction results with various choices of view-sharing, allowing us to
+exploit the trade-off between spatial and temporal resolution in time-resolved
+MR angiography.
+"
+The Effect of Collective Attention on Controversial Debates on Social Media," We study the evolution of long-lived controversial debates as manifested on
+Twitter from 2011 to 2016.
Specifically, we explore how the structure of +interactions and content of discussion varies with the level of collective +attention, as evidenced by the number of users discussing a topic. Spikes in +the volume of users typically correspond to external events that increase the +public attention on the topic -- as, for instance, discussions about `gun +control' often erupt after a mass shooting. +This work is the first to study the dynamic evolution of polarized online +debates at such scale. By employing a wide array of network and content +analysis measures, we find consistent evidence that increased collective +attention is associated with increased network polarization and network +concentration within each side of the debate; and overall more uniform lexicon +usage across all users. +" +Between-Ride Routing for Private Transportation Services," Spurred by the growth of transportation network companies and increasing data +capabilities, vehicle routing and ride-matching algorithms can improve the +efficiency of private transportation services. However, existing routing +solutions do not address where drivers should travel after dropping off a +passenger and before receiving the next passenger ride request, i.e., during +the between-ride period. We address this problem by developing an efficient +algorithm to find the optimal policy for drivers between rides in order to +maximize driver profits. We model the road network as a graph, and we show that +the between-ride routing problem is equivalent to a stochastic shortest path +problem, an infinite dynamic program with no discounting. We prove under +reasonable assumptions that an optimal routing policy exists that avoids +cycles; policies of this type can be efficiently found. We present an iterative +approach to find an optimal routing policy. Our approach can account for +various factors, including the frequency of passenger ride requests at +different locations, traffic conditions, and surge pricing. 
We demonstrate the +effectiveness of the approach by implementing it on road network data from +Boston and New York City. +" +Effective Spoken Language Labeling with Deep Recurrent Neural Networks," Understanding spoken language is a highly complex problem, which can be +decomposed into several simpler tasks. In this paper, we focus on Spoken +Language Understanding (SLU), the module of spoken dialog systems responsible +for extracting a semantic interpretation from the user utterance. The task is +treated as a labeling problem. In the past, SLU has been performed with a wide +variety of probabilistic models. The rise of neural networks, in the last +couple of years, has opened new interesting research directions in this domain. +Recurrent Neural Networks (RNNs) in particular are able not only to represent +several pieces of information as embeddings but also, thanks to their recurrent +architecture, to encode as embeddings relatively long contexts. Such long +contexts are in general out of reach for models previously used for SLU. In +this paper we propose novel RNNs architectures for SLU which outperform +previous ones. Starting from a published idea as base block, we design new deep +RNNs achieving state-of-the-art results on two widely used corpora for SLU: +ATIS (Air Traveling Information System), in English, and MEDIA (Hotel +information and reservation in France), in French. +" +Trust-based Multi-Robot Symbolic Motion Planning with a Human-in-the-Loop," Symbolic motion planning for robots is the process of specifying and planning +robot tasks in a discrete space, then carrying them out in a continuous space +in a manner that preserves the discrete-level task specifications. Despite +progress in symbolic motion planning, many challenges remain, including +addressing scalability for multi-robot systems and improving solutions by +incorporating human intelligence. 
In this paper, distributed symbolic motion +planning for multi-robot systems is developed to address scalability. More +specifically, compositional reasoning approaches are developed to decompose the +global planning problem, and atomic propositions for observation, +communication, and control are proposed to address inter-robot collision +avoidance. To improve solution quality and adaptability, a dynamic, +quantitative, and probabilistic human-to-robot trust model is developed to aid +this decomposition. Furthermore, a trust-based real-time switching framework is +proposed to switch between autonomous and manual motion planning for tradeoffs +between task safety and efficiency. Deadlock- and livelock-free algorithms are +designed to guarantee reachability of goals with a human-in-the-loop. A set of +non-trivial multi-robot simulations with direct human input and trust +evaluation are provided demonstrating the successful implementation of the +trust-based multi-robot symbolic motion planning methods. +" +Investigation of Beam Emittance and Beam Transport Line Optics on Polarization," Effects of beam emittance, energy spread, optical parameters and magnet +misalignment on beam polarization through particle transport systems are +investigated. Particular emphasis will be placed on the beam lines being used +at Fermilab for the development of the muon beam for the Muon g-2 experiment, +including comparisons with the natural polarization resulting from pion decay, +and comments on the development of systematic correlations among phase space +variables. +" +Metrics for Bengali Text Entry Research," With the intention of bringing uniformity to Bengali text entry research, +here we present a new approach for calculating the most popular English text +entry evaluation metrics for Bengali. To demonstrate our approach, we conducted +a user study where we evaluated four popular Bengali text entry techniques. 
+" +Annexing magic and tune-out wavelengths to the clock transitions of the alkaline-earth metal ions," We present additional magic wavelengths ($\lambda_{\rm{magic}}$) for the +clock transitions in the alkaline-earth metal ions considering circular +polarized light aside from our previously reported values in [J. Kaur et al., +Phys. Rev. A {\bf 92}, 031402(R) (2015)] for the linearly polarized light. +Contributions from the vector component to the dynamic dipole polarizabilities +($\alpha_d(\omega)$) of the atomic states associated with the clock transitions +play major roles in the evaluation of these $\lambda_{\rm{magic}}$, hence +facilitating in choosing circular polarization of lasers in the experiments. +Moreover, the actual clock transitions in these ions are carried out among the +hyperfine levels. The $\lambda_{\rm{magic}}$ values in these hyperfine +transitions are estimated and found to be different from $\lambda_{\rm{magic}}$ +for the atomic transitions due to different contributions coming from the +vector and tensor part of $\alpha_d(\omega)$. Importantly, we also present +$\lambda_{\rm{magic}}$ values that depend only on the scalar component of +$\alpha_d(\omega)$ for their uses in a specially designed trap geometry for +these ions so that they can be used unambiguously among any hyperfine levels of +the atomic states of the clock transitions. We also present $\alpha_d(\omega)$ +values explicitly at the 1064 nm for the atomic states associated with the +clock transitions which may be useful for creating ""high-field seeking"" traps +for the above ions using the Nd:YAG laser. The tune out wavelengths at which +the states would be free from the Stark shifts are also presented. Accurate +values of the electric dipole matrix elements required for these studies are +given and trends of electron correlation effects in determining them are also +highlighted. 
+" +Graph Convolutional Neural Networks for Web-Scale Recommender Systems," Recent advancements in deep neural networks for graph-structured data have +led to state-of-the-art performance on recommender system benchmarks. However, +making these methods practical and scalable to web-scale recommendation tasks +with billions of items and hundreds of millions of users remains a challenge. +Here we describe a large-scale deep recommendation engine that we developed and +deployed at Pinterest. We develop a data-efficient Graph Convolutional Network +(GCN) algorithm PinSage, which combines efficient random walks and graph +convolutions to generate embeddings of nodes (i.e., items) that incorporate +both graph structure as well as node feature information. Compared to prior GCN +approaches, we develop a novel method based on highly efficient random walks to +structure the convolutions and design a novel training strategy that relies on +harder-and-harder training examples to improve robustness and convergence of +the model. We also develop an efficient MapReduce model inference algorithm to +generate embeddings using a trained model. We deploy PinSage at Pinterest and +train it on 7.5 billion examples on a graph with 3 billion nodes representing +pins and boards, and 18 billion edges. According to offline metrics, user +studies and A/B tests, PinSage generates higher-quality recommendations than +comparable deep learning and graph-based alternatives. To our knowledge, this +is the largest application of deep graph embeddings to date and paves the way +for a new generation of web-scale recommender systems based on graph +convolutional architectures. +" +Suggestive Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation," Image segmentation is a fundamental problem in biomedical image analysis. +Recent advances in deep learning have achieved promising results on many +biomedical image segmentation benchmarks. 
However, due to large variations in +biomedical images (different modalities, image settings, objects, noise, etc), +to utilize deep learning on a new application, it usually needs a new set of +training data. This can incur a great deal of annotation effort and cost, +because only biomedical experts can annotate effectively, and often there are +too many instances in images (e.g., cells) to annotate. In this paper, we aim +to address the following question: With limited effort (e.g., time) for +annotation, what instances should be annotated in order to attain the best +performance? We present a deep active learning framework that combines fully +convolutional network (FCN) and active learning to significantly reduce +annotation effort by making judicious suggestions on the most effective +annotation areas. We utilize uncertainty and similarity information provided by +FCN and formulate a generalized version of the maximum set cover problem to +determine the most representative and uncertain areas for annotation. Extensive +experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node +ultrasound image segmentation dataset show that, using annotation suggestions +by our method, state-of-the-art segmentation performance can be achieved by +using only 50% of training data. +" +Estimation for high-frequency data under parametric market microstructure noise," In this paper, we propose a general class of noise-robust estimators based on +the existing estimators in the non-noisy high-frequency data literature. The +market microstructure noise is a known parametric function of the limit order +book. The noise-robust estimators are constructed as a plug-in version of their +counterparts, where we replace the efficient price, which is non-observable in +our framework, by an estimator based on the raw price and the limit order book +data. 
We show that the technology can be directly applied to estimate
+volatility, high-frequency covariance, functionals of volatility and volatility
+of volatility in a general nonparametric framework where, depending on the
+problem at hand, price possibly includes infinite jump activity and sampling
+times encompass asynchronicity and endogeneity.
+"
+Probably approximate Bayesian computation: nonasymptotic convergence of ABC under misspecification," Approximate Bayesian computation (ABC) is a widely used inference method in
+Bayesian statistics to bypass the point-wise computation of the likelihood. In
+this paper we develop theoretical bounds for the distance between the
+statistics used in ABC. We show that some versions of ABC are inherently robust
+to misspecification. The bounds are given in the form of oracle inequalities
+for a finite sample size. The dependence on the dimension of the parameter
+space and the number of statistics is made explicit. The results are shown to
+be amenable to oracle inequalities in parameter space. We apply our theoretical
+results to given prior distributions and data generating processes, including a
+non-parametric regression model. In a second part of the paper, we propose a
+sequential Monte Carlo (SMC) algorithm to sample from the pseudo-posterior,
+improving upon the state-of-the-art samplers.
+"
+Generation of soliton bubbles in a sine-Gordon system with localised inhomogeneities," Nonlinear wave propagation plays a crucial role in the functioning of many
+physical and biophysical systems. In the propagation regime, disturbances due
+to the presence of local external perturbations, such as localised defects or
+boundary interphase walls, have gained great attention. In this article, the
+complex phenomena that occur when sine-Gordon line solitons collide with
+localised inhomogeneities are investigated.
By a one-dimensional theory, it is
+shown that internal modes of two-dimensional sine-Gordon solitons can be
+activated depending on the topological properties of the inhomogeneities. Shape
+mode instabilities cause the formation of bubble-like and drop-like structures
+for both stationary and travelling line solitons. It is shown that such
+structures are formed and stabilised by arrays of localised inhomogeneities
+distributed in space. Implications of the observed phenomena in physical and
+biological systems are discussed.
+"
+Development and Test of a uTPC Cluster Reconstruction for a Triple GEM Detector in Strong Magnetic Field," Performance of triple GEM prototypes has been evaluated by means of a muon
+beam at the H4 line of the SPS test area at CERN. The data from two planar
+prototypes have been reconstructed and analyzed offline with two clusterization
+methods: the center of gravity of the charge distribution and the micro Time
+Projection Chamber (\muTPC). Concerning the spatial resolution, the charge
+centroid cluster reconstruction performs extremely well with no magnetic field:
+the resolution is well below 100 \mum. Increasing the magnetic field
+intensity, the resolution degrades almost linearly as an effect of the Lorentz
+force that displaces, broadens and asymmetrizes the electron avalanche. Tuning
+the electric fields of the GEM prototype we could achieve the unprecedented
+spatial resolution of 190 \mum at 1 Tesla. In order to boost the spatial
+resolution with strong magnetic field and inclined tracks a \muTPC cluster
+reconstruction has been investigated. Such a readout mode exploits the good
+time resolution of the GEM detector and electronics to reconstruct the
+trajectory of the particle inside the conversion gap. Besides the improvement of
+the spatial resolution, information on the track angle can be also extracted.
+The new clustering algorithm has been tested with diagonal tracks with no +magnetic field showing a resolution between 100 um and 150 um for the incident +angle ranging from 10° to 45° . Studies show similar performance with +1 Tesla magnetic field. This is the first use of a \muTPC readout with a triple +GEM detector in magnetic field. This study has shown that a combined readout is +capable to guarantee stable performance over a broad spectrum of particle +momenta and incident angles, up to a 1 Tesla magnetic field. +" +"An update to the EVEREST K2 pipeline: Short cadence, saturated stars, and Kepler-like photometry down to Kp = 15"," We present an update to the EVEREST K2 pipeline that addresses various +limitations in the previous version and improves the photometric precision of +the de-trended light curves. We develop a fast regularization scheme for third +order pixel level decorrelation (PLD) and adapt the algorithm to include the +PLD vectors of neighboring stars to enhance the predictive power of the model +and minimize overfitting, particularly for faint stars. We also modify PLD to +work for saturated stars and improve its performance on extremely variable +stars. On average, EVEREST 2.0 light curves have 10-20% higher photometric +precision than those in the previous version, yielding the highest precision +light curves at all Kp magnitudes of any publicly available K2 catalog. For +most K2 campaigns, we recover the original Kepler precision to at least Kp = +14, and to at least Kp = 15 for campaigns 1, 5, and 6. We also de-trend all +short cadence targets observed by K2, obtaining even higher photometric +precision for these stars. All light curves for campaigns 0-8 are available +online in the EVEREST catalog, which will be continuously updated with future +campaigns. EVEREST 2.0 is open source and is coded in a general framework that +can be applied to other photometric surveys, including Kepler and the upcoming +TESS mission. 
+" +Induction of tin pest for cleaning tin-drop contaminated optics," Tin pest, the allotropic {\beta} - {\alpha} phase transformation of tin, was +examined for use in cleaning of tin-contaminated optics. Induction of change in +material structure led to disintegration of tin samples into pieces and powder. +The transition times were studied for tin drops of different purity grades, +using inoculation with {\alpha}-Sn seed particles, also after prior mechanical +deformation and surface oxide removal. For tin of very high purity levels fast +nucleation within hours and full transformation within a day could be achieved +during cooling at -24 °C, resulting in strong embrittlement of the +material. Tin dripped onto samples of multilayer-coated optics as used in +extreme ultraviolet lithography machines was made cleanable by phase transition +after inoculation and cooling. The reflectance of multilayer-coated mirrors was +found to decrease by no more than 1% with this cleaning method. +" +Fukaya categories of plumbings and multiplicative preprojective algebras," Given an arbitrary graph $\Gamma$ and non-negative integers $g_v$ for each +vertex $v$ of $\Gamma$, let $X_\Gamma$ be the Weinstein $4$-manifold obtained +by plumbing copies of $T^*\Sigma_v$ according to this graph, where $\Sigma_v$ +is a surface of genus $g_v$. We compute the wrapped Fukaya category of +$X_\Gamma$ (with bulk parameters) using Legendrian surgery extending our +previous work arXiv:1502.07922 where it was assumed that $g_v=0$ for all $v$ +and $\Gamma$ was a tree. The resulting algebra is recognized as the (derived) +multiplicative preprojective algebra (and its higher genus version) defined by +Crawley-Boevey and Shaw arXiv:math/0404186. Along the way, we find a smaller +model for the internal DG-algebra of Ekholm-Ng arXiv:1307.8436 associated to +$1$-handles in the Legendrian surgery presentation of Weinstein $4$-manifolds +which might be of independent interest. 
+"
+Ten Simple Rules When Considering Retirement," This is an article submitted to the Ten Simple Rules series of professional
+development articles published by PLOS Computational Biology.
+"
+Secure Classification With Augmented Features," With the evolution of data collection ways, it is possible to produce
+abundant data described by multiple feature sets. Previous studies show that
+including more features does not necessarily bring a positive effect. How to
+prevent the augmented features from worsening classification performance is
+crucial but rarely studied. In this paper, we study this challenging problem by
+proposing a secure classification approach, whose accuracy is never degraded
+when exploiting augmented features. We propose two ways to achieve the security
+of our method named as SEcure Classification (SEC). Firstly, to leverage
+augmented features, we learn various types of classifiers and adapt them by
+employing a specially designed robust loss. It provides various candidate
+classifiers to meet the following assumption of security operation. Secondly,
+we integrate all candidate classifiers by approximately maximizing the
+performance improvement. Under a mild assumption, the integrated classifier has
+theoretical security guarantee. Several new optimization methods have been
+developed to accommodate the problems with proved convergence. Besides
+evaluating SEC on 16 data sets, we also apply SEC in the application of
+diagnostic classification of schizophrenia since it has vast application
+potential. Experimental results demonstrate the effectiveness of SEC in both
+tackling the security problem and discriminating schizophrenic patients from
+healthy controls.
+"
+Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos," Despite the rapid progress of the techniques for image classification, video
+annotation has remained a challenging task.
Automated video annotation would be +a breakthrough technology, enabling users to search within the videos. +Recently, Google introduced the Cloud Video Intelligence API for video +analysis. As per the website, the system can be used to ""separate signal from +noise, by retrieving relevant information at the video, shot or per frame"" +level. A demonstration website has also been launched, which allows anyone to +select a video for annotation. The API then detects the video labels (objects +within the video) as well as shot labels (descriptions of the video events over +time). In this paper, we examine the usability of Google's Cloud Video +Intelligence API in adversarial environments. In particular, we investigate +whether an adversary can subtly manipulate a video in such a way that the API +will return only the adversary-desired labels. For this, we select an image, +which is different from the video content, and insert it, periodically and at a +very low rate, into the video. We found that if we insert one image every two +seconds, the API is deceived into annotating the video as if it only contained +the inserted image. Note that the modification to the video is hardly +noticeable as, for instance, for a typical frame rate of 25, we insert only one +image per 50 video frames. We also found that, by inserting one image per +second, all the shot labels returned by the API are related to the inserted +image. We perform the experiments on the sample videos provided by the API +demonstration website and show that our attack is successful with different +videos and images. +" +Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation," A recurrent neural network model of phonological pattern learning is +proposed. The model is a relatively simple neural network with one recurrent +layer, and displays biases in learning that mimic observed biases in human +learning. 
Single-feature patterns are learned faster than two-feature patterns, +and vowel or consonant-only patterns are learned faster than patterns involving +vowels and consonants, mimicking the results of laboratory learning +experiments. In non-recurrent models, capturing these biases requires the use +of alpha features or some other representation of repeated features, but with a +recurrent neural network, these elaborations are not necessary. +" +A graph-embedded deep feedforward network for disease outcome classification and feature selection using gene expression data," Gene expression data represents a unique challenge in predictive model +building, because of the small number of samples $(n)$ compared to the huge +number of features $(p)$. This ""$n<0$ a constant, such +that $d(x,y)+d(y,z)-d(x,z) \ge c$, for any pairwise distinct points $x,y,z$ of +$M$. For such metric spaces we prove that they can be isometrically embedded +into any Banach space containing an isomorphic copy of $\ell_\infty$. +" +On stable transitivity of finitely generated groups of volume preserving diffeomorphisms," In this paper, we provide a new criterion for the stable transitivity of +volume preserving finitely generated groups on any compact Riemannian manifold. As +one of our applications, we generalised a result of Dolgopyat and Krikorian in +\cite{DK} and obtained stable transitivity for random rotations on the sphere +in any dimension. As another application, we showed that for $\infty \geq r +\geq 2$, any $C^r$ volume preserving partially hyperbolic diffeomorphism $g$ on +any compact Riemannian manifold $M$ having sufficiently Hölder stable or +unstable distribution, for any sufficiently large integer $K$, for any +$(f_i)_{i=1}^{K}$ in a $C^1$ open $C^r$ dense subset of +$\textnormal{Diff}^r(M,m)^K$, the group generated by $g, f_1,\cdots, f_K$ acts +transitively. 
+" +Cellular Automata on Group Sets," We introduce and study cellular automata whose cell spaces are +left-homogeneous spaces. Examples of left-homogeneous spaces are spheres, +Euclidean spaces, as well as hyperbolic spaces acted on by isometries; uniform +tilings acted on by symmetries; vertex-transitive graphs, in particular, Cayley +graphs, acted on by automorphisms; groups acting on themselves by +multiplication; and integer lattices acted on by translations. For such +automata and spaces, we prove, in particular, generalisations of topological +and uniform variants of the Curtis-Hedlund-Lyndon theorem, of the +Tarski-F{\o}lner theorem, and of the Garden-of-Eden theorem on the full shift +and certain subshifts. Moreover, we introduce signal machines that can handle +accumulations of events and using such machines we present a time-optimal +quasi-solution of the firing mob synchronisation problem on finite and +connected graphs. +" +Scaling from gauge and scalar radiation in Abelian Higgs string networks," We investigate cosmic string networks in the Abelian Higgs model using data +from a campaign of large-scale numerical simulations on lattices of up to +$4096^3$ grid points. We observe scaling or self-similarity of the networks +over a wide range of scales, and estimate the asymptotic values of the mean +string separation in horizon length units $\dot{\xi}$ and of the mean square +string velocity $\bar v^2$ in the continuum and large time limits. The scaling +occurs because the strings lose energy into classical radiation of the scalar +and gauge fields of the Abelian Higgs model. We quantify the energy loss with a +dimensionless radiative efficiency parameter, and show that it does not vary +significantly with lattice spacing or string separation. 
This implies that the +radiative energy loss underlying the scaling behaviour is not a lattice +artefact, and justifies the extrapolation of measured network properties to +large times for computations of cosmological perturbations. We also show that +the core growth method, which increases the defect core width with time to +extend the dynamic range of simulations, does not introduce significant +systematic error. We compare $\dot{\xi}$ and $\bar v^2$ to values measured in +simulations using the Nambu-Goto approximation, finding that the latter +underestimate the mean string separation by about 25%, and overestimate $\bar +v^2$ by about 10%. The scaling of the string separation implies that string +loops decay by the emission of massive radiation within a Hubble time in field +theory simulations, in contrast to the Nambu-Goto scenario which neglects this +energy loss mechanism. String loops surviving for only one Hubble time emit +much less gravitational radiation than in the Nambu-Goto scenario, and are +consequently subject to much weaker gravitational wave constraints on their +tension. +" +An Almost Analytical Approach to Simulating 2D Electronic Spectra," We introduce an almost analytical method to simulate 2D electronic spectra as +a double Fourier transform of the non-linear response function (NRF) +corresponding to a particular optical pulse sequence. We employ a unitary +transformation to represent the total system Hamiltonian in a stationary basis +that allows us to separate contributions from decoherence and phonon-mediated +population relaxation to the NRF. Previously, one of us demonstrated the use of +an analytic, cumulant expansion approach to calculate the decoherence term. +Here, we extend this idea to obtain an accurate expression for the population +relaxation term, a significant improvement over standard quantum master +equation-based approximations. 
We numerically demonstrate the accuracy of our +method by computing the photon echo spectrum of a two-level system coupled to a +thermal bath, and we highlight the mechanistic insights obtained from our +simulation. +" +"Binary classification models with ""Uncertain"" predictions"," Binary classification models which can assign probabilities to categories +such as ""the tissue is 75% likely to be tumorous"" or ""the chemical is 25% +likely to be toxic"" are well understood statistically, but their utility as an +input to decision making is less well explored. We argue that users need to +know which is the most probable outcome, how likely that is to be true and, in +addition, whether the model is capable enough to provide an answer. It is the +last case, where the potential outcomes of the model explicitly include ""don't +know"" that is addressed in this paper. Including this outcome would better +separate those predictions that can lead directly to a decision from those +where more data is needed. Where models produce an ""Uncertain"" answer similar +to a human reply of ""don't know"" or ""50:50"" in the examples we refer to +earlier, this would translate to actions such as ""operate on tumour"" or ""remove +compound from use"" where the models give a ""more true than not"" answer. Where +the models judge the result ""Uncertain"" the practical decision might be ""carry +out more detailed laboratory testing of compound"" or ""commission new tissue +analyses"". The paper presents several examples where we first analyse the +effect of its introduction, then present a methodology for separating +""Uncertain"" from binary predictions and finally, we provide arguments for its +use in practice. 
+" +SafePredict: A Meta-Algorithm for Machine Learning That Uses Refusals to Guarantee Correctness," SafePredict is a novel meta-algorithm that works with any base prediction +algorithm for online data to guarantee an arbitrarily chosen correctness rate, +$1-\epsilon$, by allowing refusals. Allowing refusals means that the +meta-algorithm may refuse to emit a prediction produced by the base algorithm +on occasion so that the error rate on non-refused predictions does not exceed +$\epsilon$. The SafePredict error bound does not rely on any assumptions on the +data distribution or the base predictor. When the base predictor happens not to +exceed the target error rate $\epsilon$, SafePredict refuses only a finite +number of times. When the error rate of the base predictor changes through time, +SafePredict makes use of a weight-shifting heuristic that adapts to these +changes without knowing when the changes occur, yet still maintains the +correctness guarantee. Empirical results show that (i) SafePredict compares +favorably with state-of-the-art confidence-based refusal mechanisms, which fail +to offer robust error guarantees; and (ii) combining SafePredict with such +refusal mechanisms can in many cases further reduce the number of refusals. Our +software (currently in Python) is included in the supplementary material. +" +Automated Crowdturfing Attacks and Defenses in Online Review Systems," Malicious crowdsourcing forums are gaining traction as sources for spreading +misinformation online, but are limited by the costs of hiring and managing +human workers. In this paper, we identify a new class of attacks that leverage +deep learning language models (Recurrent Neural Networks or RNNs) to automate +the generation of fake online reviews for products and services. Not only are +these attacks cheap and therefore more scalable, but they can control the rate of +content output to eliminate the signature burstiness that makes crowdsourced +campaigns easy to detect. 
+Using Yelp reviews as an example platform, we show how a two-phased review +generation and customization attack can produce reviews that are +indistinguishable by state-of-the-art statistical detectors. We conduct a +survey-based user study to show these reviews not only evade human detection, +but also score high on ""usefulness"" metrics by users. Finally, we develop novel +automated defenses against these attacks, by leveraging the lossy +transformation introduced by the RNN training and generation cycle. We consider +countermeasures against our mechanisms, show that they produce unattractive +cost-benefit tradeoffs for attackers, and that they can be further curtailed by +simple constraints imposed by online service providers. +" +Manipulating charge-density-wave in monolayer $1T$-TiSe$_2$ by strain and charge doping: A first-principles investigation," We investigate the effects of the in-plane biaxial strain and charge doping +on the charge density wave (CDW) order of monolayer $1T$-TiSe$_2$ by using +first-principles calculations. Our results show that the tensile strain can +significantly enhance the CDW order, while both compressive strain and charge +doping (electrons and holes) suppress the CDW instability. The tensile strain +may provide an effective method for obtaining a higher CDW transition temperature +on the basis of monolayer $1T$-TiSe$_2$. We also discuss the potential +superconductivity in charge-doped monolayer $1T$-TiSe$_2$. A controllable +electronic phase transition from the CDW state to a metallic or even +superconducting state can be realized in monolayer $1T$-TiSe$_2$, which makes +$1T$-TiSe$_2$ promising for application in controllable switching +electronic devices based on CDW. +" +Automatic Program Synthesis of Long Programs with a Learned Garbage Collector," We consider the problem of automatically generating code given sample +input-output pairs. 
We train a neural network to map from the current state and +the outputs to the program's next statement. The neural network optimizes +multiple tasks concurrently: the next operation out of a set of high level +commands, the operands of the next statement, and which variables can be +dropped from memory. Using our method we are able to create programs that are +more than twice as long as existing state-of-the-art solutions, while improving +the success rate for comparable lengths, and cutting the run-time by two orders +of magnitude. Our code, including an implementation of various literature +baselines, is publicly available at this https URL +" +ExoMol molecular line lists - XXIII: spectra of PO and PS," Comprehensive line lists for phosphorus monoxide ($^{31}$P$^{16}$O) and +phosphorus monosulphide ($^{31}$P$^{32}$S) in their $X$ $^2\Pi$ electronic +ground state are presented. The line lists are based on new ab initio potential +energy (PEC), spin-orbit (SOC) and dipole moment (DMC) curves computed using +the MRCI+Q-r method with aug-cc-pwCV5Z and aug-cc-pV5Z basis sets. The nuclear +motion equations (i.e. the rovibronic Schrödinger equations for each +molecule) are solved using the program Duo. The PECs and SOCs are refined in +least-squares fits to available experimental data. Partition functions, $Q(T)$, +are computed up to $T=$ 5000 K, the range of validity of the line lists. These +line lists are the most comprehensive available for either molecule. The +characteristically sharp peak of the $Q$-branches from the spin-orbit split +components give useful diagnostics for both PO and PS in spectra at infrared +wavelengths. These line lists should prove useful for analysing observations +and setting up models of environments such as brown dwarfs, low-mass stars, +O-rich circumstellar regions and potentially for exoplanetary retrievals. 
Since +PS is yet to be detected in space, the role of the two lowest excited +electronic states ($a$ $^4\Pi$ and $B$ $^2\Pi$) is also considered. An +approximate line list for the PS $X - B$ electronic transition, which predicts +a number of sharp vibrational bands in the near ultraviolet, is also presented. +The line lists are available from the CDS (cdsarc.u-strasbg.fr) and ExoMol +(www.exomol.com) databases. +" +Statistical Anomaly Detection via Composite Hypothesis Testing for Markov Models," Under Markovian assumptions, we leverage a Central Limit Theorem (CLT) for +the empirical measure in the test statistic of the composite hypothesis +Hoeffding test so as to establish weak convergence results for the test +statistic, and, thereby, derive a new estimator for the threshold needed by the +test. We first show the advantages of our estimator over an existing estimator +by conducting extensive numerical experiments. We find that our estimator +controls better for false alarms while maintaining satisfactory detection +probabilities. We then apply the Hoeffding test with our threshold estimator to +detecting anomalies in two distinct application domains: one in communication +networks and the other in transportation networks. The former application seeks +to enhance cyber security and the latter aims at building smarter +transportation systems in cities. +" +A Convex Similarity Index for Sparse Recovery of Missing Image Samples," This paper investigates the problem of recovering missing samples using +methods based on sparse representation adapted especially for image signals. +Instead of the $l_2$-norm or Mean Square Error (MSE), a new perceptual quality +measure is used as the similarity criterion between the original and the +reconstructed images. The proposed criterion, called the Convex SIMilarity (CSIM) +index, is a modified version of the Structural SIMilarity (SSIM) index, which, +unlike its predecessor, is convex and uni-modal. 
We derive mathematical +properties for the proposed index and show how to optimally choose the +parameters of the proposed criterion, investigating the Restricted Isometry +(RIP) and error-sensitivity properties. We also propose an iterative sparse +recovery method based on a constrained $l_1$-norm minimization problem, +incorporating CSIM as the fidelity criterion. The resulting convex optimization +problem is solved via an algorithm based on Alternating Direction Method of +Multipliers (ADMM). Taking advantage of the convexity of the CSIM index, we +also prove the convergence of the algorithm to the globally optimal solution of +the proposed optimization problem, starting from any arbitrary point. +Simulation results confirm the performance of the new similarity index as well +as the proposed algorithm for missing sample recovery of image patch signals. +" +Image Subtraction Reduction of Open Clusters M35 & NGC 2158 In The K2 Campaign-0 Super-Stamp," Observations were made of the open clusters M35 and NGC 2158 during the +initial K2 campaign (C0). Reducing these data to high-precision photometric +time-series is challenging due to the wide point spread function (PSF) and the +blending of stellar light in such dense regions. We developed an +image-subtraction-based K2 reduction pipeline that is applicable to both +crowded and sparse stellar fields. We applied our pipeline to the data-rich C0 +K2 super-stamp, containing the two open clusters, as well as to the neighboring +postage stamps. In this paper, we present our image subtraction reduction +pipeline and demonstrate that this technique achieves ultra-high photometric +precision for sources in the C0 super-stamp. We extract the raw light curves of +3960 stars taken from the UCAC4 and EPIC catalogs and de-trend them for +systematic effects. We compare our photometric results with the prior +reductions published in the literature. 
For detrended, TFA-corrected sources in +the 12--12.25 $\rm K_{p}$ magnitude range, we achieve a best 6.5 hour window +running rms of 35 ppm falling to 100 ppm for fainter stars in the 14--14.25 $ +\rm K_{p}$ magnitude range. For stars with $\rm K_{p}> 14$, our detrended and +6.5 hour binned light curves achieve the highest photometric precision. +Moreover, all our TFA-corrected sources have higher precision on all time +scales investigated. This work represents the first published image subtraction +analysis of a K2 super-stamp. This method will be particularly useful for +analyzing the Galactic bulge observations carried out during K2 campaign 9. The +raw light curves and the final results of our detrending processes are publicly +available at \url{this http URL}. +" +Fast random field generation with $H$-matrices," We use the $H$-matrix technology to compute the approximate square root of a +covariance matrix in linear cost. This allows us to generate normal and +log-normal random fields on general point sets with optimal cost. We derive +rigorous error estimates which show convergence of the method. Our approach +requires only mild assumptions on the covariance function and on the point set. +Therefore, it might be also a nice alternative to the circulant embedding +approach which applies only to regular grids and stationary covariance +functions. +" +Random Feature Stein Discrepancies," Computable Stein discrepancies have been deployed for a variety of +applications, ranging from sampler selection in posterior inference to +approximate Bayesian inference to goodness-of-fit testing. Existing +convergence-determining Stein discrepancies admit strong theoretical guarantees +but suffer from a computational cost that grows quadratically in the sample +size. While linear-time Stein discrepancies have been proposed for +goodness-of-fit testing, they exhibit avoidable degradations in testing +power---even when power is explicitly optimized. 
To address these shortcomings, +we introduce feature Stein discrepancies ($\Phi$SDs), a new family of quality +measures that can be cheaply approximated using importance sampling. We show +how to construct $\Phi$SDs that provably determine the convergence of a sample +to its target and develop high-accuracy approximations---random $\Phi$SDs +(R$\Phi$SDs)---which are computable in near-linear time. In our experiments +with sampler selection for approximate posterior inference and goodness-of-fit +testing, R$\Phi$SDs perform as well or better than quadratic-time KSDs while +being orders of magnitude faster to compute. +" +3x3 Singular Matrices of Linear Forms," We determine the irreducible components of the space of 3x3 matrices of +linear forms with vanishing determinant. We show that there are four +irreducible components and we identify them concretely. In particular, under +elementary row and column operations with constant coefficients, a 3x3 matrix +with vanishing determinant is equivalent to one of the following four: a matrix +with a zero row, a zero column, a zero 2x2 square or an antisymmetric matrix. +" +Continual Learning in Generative Adversarial Nets," Developments in deep generative models have allowed for tractable learning of +high-dimensional data distributions. While the employed learning procedures +typically assume that training data is drawn i.i.d. from the distribution of +interest, it may be desirable to model distinct distributions which are +observed sequentially, such as when different classes are encountered over +time. Although conditional variations of deep generative models permit multiple +distributions to be modeled by a single network in a disentangled fashion, they +are susceptible to catastrophic forgetting when the distributions are +encountered sequentially. 
In this paper, we adapt recent work in reducing +catastrophic forgetting to the task of training generative adversarial networks +on a sequence of distinct distributions, enabling continual generative +modeling. +" +Intrinsic Phase Diagram of Superconductivity in the BiCh2-based System Without In-plane Disorder," We have investigated the crystal structure and physical properties of +LaO1-xFxBiSSe to reveal the intrinsic superconductivity phase diagram of the +BiCh2-based layered compound family. From synchrotron X-ray diffraction and +Rietveld refinements with anisotropic displacement parameters, we clearly found +that the in-plane disorder in the BiSSe layer was fully suppressed for all x. +In LaO1-xFxBiSSe, metallic conductivity and superconductivity are suddenly +induced by electron doping even at x = 0.05 and 0.1 with a monoclinic +structure. In addition, x (F concentration) dependence of the transition +temperature (Tc) for x = 0.2-0.5 with a tetragonal structure shows an +anomalously flat phase diagram. With these experimental facts, we have proposed +the intrinsic phase diagram of the ideal BiCh2-based superconductors with less +in-plane disorder. +" +Growth dynamics of turbulent spots in plane Couette flow," We experimentally and numerically investigate the temporal aspects of +turbulent spots spreading in a plane Couette flow for transitional Reynolds +numbers between 300 and 450. Spot growth rate, spot advection rate and +large-scale flow intensity are measured as a function of time and Reynolds +number. All these quantities show similar dynamics clarifying the role played +by large-scale flows in the advection of the turbulent spot. The contributions +of each possible growth mechanism: growth induced by large scale advection or +growth by destabilization, are discussed for the different stages of the spot +growth. 
A scenario that gathers all these elements provides a better +understanding of the growth dynamics of turbulent spots in plane Couette flow +and should possibly apply to other extended shear flows. +" +On the Distribution of Massive White Dwarfs and its Implication for Accretion-Induced Collapse," A White Dwarf (WD) star and a main-sequence companion may interact through +their different stellar evolution stages. This sort of binary population has +historically helped us improve our understanding of binary formation and +evolution scenarios. The data set used for the analysis consists of 115 +well-measured WD masses obtained by the Sloan Digital Sky Survey (SDSS). A +substantial fraction of these systems could potentially evolve and reach the +Chandrasekhar limit, and then undergo an Accretion-Induced Collapse (AIC) to +produce millisecond pulsars (MSPs). I focus my attention mainly on the massive +WDs (M_WD > 1M_sun), which are able to grow further through a mass-transfer phase in +stellar binary systems to reach the Chandrasekhar mass. A mean value of M ~ +1.15 +/- 0.2M_sun is derived. In the framework of the AIC process, such +systems are considered to be good candidates for the production of MSPs. The +implications of the results presented here for our understanding of binary MSP +evolution are discussed. As a by-product of my work, I present an updated +distribution of all known pulsars in Galactic coordinates. Keywords: +Stars; Neutron stars; White dwarfs; X-ray binaries; Fundamental parameters. +" +DeepBreath: Deep Learning of Breathing Patterns for Automatic Stress Recognition using Low-Cost Thermal Imaging in Unconstrained Settings," We propose DeepBreath, a deep learning model which automatically recognises +people's psychological stress level (mental overload) from their breathing +patterns. Using a low-cost thermal camera, we track a person's breathing +patterns as temperature changes around his/her nostril. 
The paper's technical +contribution is threefold. First of all, instead of creating hand-crafted +features to capture aspects of the breathing patterns, we transform the +uni-dimensional breathing signals into two dimensional respiration variability +spectrogram (RVS) sequences. The spectrograms easily capture the complexity of +the breathing dynamics. Second, a spatial pattern analysis based on a deep +Convolutional Neural Network (CNN) is directly applied to the spectrogram +sequences without the need of hand-crafting features. Finally, a data +augmentation technique, inspired from solutions for over-fitting problems in +deep learning, is applied to allow the CNN to learn with a small-scale dataset +from short-term measurements (e.g., up to a few hours). The model is trained +and tested with data collected from people exposed to two types of cognitive +tasks (Stroop Colour Word Test, Mental Computation test) with sessions of +different difficulty levels. Using normalised self-report as ground truth, the +CNN reaches 84.59% accuracy in discriminating between two levels of stress and +56.52% in discriminating between three levels. In addition, the CNN +outperformed powerful shallow learning methods based on a single layer neural +network. Finally, the dataset of labelled thermal images will be open to the +community. +" +Hamiltonian and Algebraic Theories of Gapped Boundaries in Topological Phases of Matter," We present an exactly solvable lattice Hamiltonian to realize gapped +boundaries of Kitaev's quantum double models for Dijkgraaf-Witten theories. We +classify the elementary excitations on the boundary, and systematically +describe the bulk-to-boundary condensation procedure. We also present the +parallel algebraic/categorical structure of gapped boundaries. +" +Third-harmonic-generation of a diode laser for quantum control of beryllium ions," We generate coherent ultraviolet radiation at 313 nm as the third harmonic of +an external-cavity diode laser. 
We use this radiation for laser cooling of +trapped beryllium atomic ions and sympathetic cooling of co-trapped +beryllium-hydride molecular ions. An LBO crystal in an enhancement cavity +generates the second harmonic, and a BBO crystal in a doubly resonant +enhancement cavity mixes this second harmonic with the fundamental to produce +the third harmonic. Each enhancement cavity is preceded by a tapered amplifier +to increase the fundamental light. The 36-mW output power of this +all-semiconductor-gain system will enable quantum control of the beryllium +ions' motion. +" +Experimental Demonstration of the Sign Reversal of the Order Parameter in (Li1-xFex)OHFe1-yZnySe," Iron pnictides are the only known family of unconventional high-temperature +superconductors besides cuprates. Until recently, it was widely accepted that +superconductivity is spin-fluctuation driven and intimately related to their +fermiology, specifically, hole and electron pockets separated by the same wave +vector that characterizes the dominant spin fluctuations, and supporting order +parameters (OP) of opposite signs. This picture was questioned after the +discovery of a new family, based on the FeSe layers, either intercalated or in +the monolayer form. The critical temperatures there reach ~40 K, the same as in +optimally doped bulk FeSe - despite the fact that intercalation removes the +hole pockets from the Fermi level and, seemingly, undermines the basis for the +spin-fluctuation theory and the idea of a sign-changing OP. In this paper, +using the recently proposed phase-sensitive quasiparticle interference +technique, we show that in LiOH intercalated FeSe compound the OP does change +sign, albeit within the electronic pockets, and not between the hole and +electron ones. This result unifies the pairing mechanism of iron based +superconductors with or without the hole Fermi pockets and supports the +conclusion that spin fluctuations play the key role in electron pairing. 
+" +Defending against Intrusion of Malicious UAVs with Networked UAV Defense Swarms," Nowadays, companies such as Amazon, Alibaba, and even pizza chains are +pushing forward to use drones, also called UAVs (Unmanned Aerial Vehicles), for +service provision, such as package and food delivery. As governments intend to +harness the immense economic benefits that UAVs have to offer, urban planners are +moving forward to incorporate so-called UAV flight zones and UAV highways in +their smart city designs. However, the high-speed mobility and behavior +dynamics of UAVs need to be monitored to detect and, subsequently, to deal with +intruders, rogue drones, and UAVs with malicious intent. This paper proposes +a UAV defense system for the purpose of intercepting and escorting a malicious +UAV outside the flight zone. The proposed UAV defense system consists of a +defense UAV swarm, which is capable of self-organizing its defense formation in +the event of intruder detection and of chasing the malicious UAV as a networked +swarm. Modular design principles have been used for our fully localized +approach. We developed an innovative auto-balanced clustering process to +realize the intercept- and capture-formation. As it turned out, the resulting +networked defense UAV swarm is resilient against communication losses. Finally, +a prototype UAV simulator has been implemented. Through extensive simulations, +we show the feasibility and performance of our approach. +" +Core filling and snaking instability of dark solitons in spin-imbalanced superfluid Fermi gases," We use the time-dependent Bogoliubov de Gennes equations to study dark +solitons in three-dimensional spin-imbalanced superfluid Fermi gases. We +explore how the shape and dynamics of dark solitons are altered by the presence +of excess unpaired spins which fill their low-density core. The unpaired +particles broaden the solitons and suppress the transverse snake instability. 
+We discuss ways of observing these phenomena in cold atom experiments. +" +Gaia-ESO Survey: global properties of clusters Trumpler 14 and 16 in the Carina Nebula," We present the first extensive spectroscopic study of the global population +in star clusters Trumpler~16, Trumpler~14 and Collinder~232 in the Carina +Nebula, using data from the Gaia-ESO Survey, down to solar-mass stars. In +addition to the standard homogeneous Survey data reduction, a special +processing was applied here because of the bright nebulosity surrounding Carina +stars. We find about four hundred good candidate members ranging from OB types +down to slightly sub-solar masses. About one-hundred heavily-reddened +early-type Carina members found here were previously unrecognized or poorly +classified, including two candidate O stars and several candidate Herbig Ae/Be +stars. Their large brightness makes them useful tracers of the obscured Carina +population. The spectroscopically-derived temperatures for nearly 300 low-mass +members allows the inference of individual extinction values, and the study of +the relative placement of stars along the line of sight. We find a complex +spatial structure, with definite clustering of low-mass members around the most +massive stars, and spatially-variable extinction. By combining the new data +with existing X-ray data we obtain a more complete picture of the +three-dimensional spatial structure of the Carina clusters, and of their +connection to bright and dark nebulosity, and UV sources. The identification of +tens of background giants enables us also to determine the total optical depth +of the Carina nebula along many sightlines. We are also able to put constraints +on the star-formation history of the region, with Trumpler~14 stars found to be +systematically younger than stars in other sub-clusters. 
We find a large +percentage of fast-rotating stars among Carina solar-mass members, which +provide new constraints on the rotational evolution of pre-main-sequence stars +in this mass range. +" +Stochastic Deep Learning in Memristive Networks," We study the performance of stochastically trained deep neural networks +(DNNs) whose synaptic weights are implemented using emerging memristive devices +that exhibit limited dynamic range, resolution, and variability in their +programming characteristics. We show that a key device parameter to optimize +the learning efficiency of DNNs is the variability in its programming +characteristics. DNNs with such memristive synapses, even with dynamic range as +low as $15$ and only $32$ discrete levels, when trained based on stochastic +updates suffer less than $3\%$ loss in accuracy compared to floating point +software baseline. We also study the performance of stochastic memristive DNNs +when used as inference engines with noise corrupted data and find that if the +device variability can be minimized, the relative degradation in performance +for the Stochastic DNN is better than that of the software baseline. Hence, our +study presents a new optimization corner for memristive devices for building +large noise-immune deep learning systems. +" +Stepwise regression for unsupervised learning," I consider unsupervised extensions of the fast stepwise linear regression +algorithm \cite{efroymson1960multiple}. These extensions allow one to +efficiently identify highly-representative feature variable subsets within a +given set of jointly distributed variables. This in turn allows for the +efficient dimensional reduction of large data sets via the removal of redundant +features. 
Fast search is effected here through the avoidance of repeat +computations across trial fits, allowing for a full representative-importance +ranking of a set of feature variables to be carried out in $O(n^2 m)$ time, +where $n$ is the number of variables and $m$ is the number of data samples +available. This runtime complexity matches that needed to carry out a single +regression and is $O(n^2)$ faster than that of naive implementations. I present +pseudocode suitable for efficient forward, reverse, and forward-reverse +unsupervised feature selection. To illustrate the algorithm's application, I +apply it to the problem of identifying representative stocks within a given +financial market index -- a challenge relevant to the design of Exchange Traded +Funds (ETFs). I also characterize the growth of numerical error with iteration +step in these algorithms, and finally demonstrate and rationalize the +observation that the forward and reverse algorithms return exactly inverted +feature orderings in the weakly-correlated feature set regime. +" +Stochastic equivalence for performance analysis of concurrent systems in dtsiPBC," We propose an extension with immediate multiactions of discrete time +stochastic Petri Box Calculus (dtsPBC), presented by I.V. Tarasyuk. The +resulting algebra dtsiPBC is a discrete time analogue of stochastic Petri Box +Calculus (sPBC) with immediate multiactions, designed by H. Macià, V. Valero +et al. within a continuous time domain. The step operational semantics is +constructed via labeled probabilistic transition systems. The denotational +semantics is based on labeled discrete time stochastic Petri nets with +immediate transitions. To evaluate performance, the corresponding semi-Markov +chains are analyzed. We define step stochastic bisimulation equivalence of +expressions that is applied to reduce their transition systems and underlying +semi-Markov chains while preserving the functionality and performance +characteristics. 
We explain how this equivalence can be used to simplify +performance analysis of the algebraic processes. In a case study, a method of +modeling, performance evaluation and behaviour reduction for concurrent systems +is outlined and applied to the shared memory system. +" +$O(N)$ Iterative and $O(NlogN)$ Fast Direct Volume Integral Equation Solvers with a Minimal-Rank ${\cal H}^2$-Representation for Large-Scale $3$-D Electrodynamic Analysis," Linear complexity iterative and log-linear complexity direct solvers are +developed for the volume integral equation (VIE) based general large-scale +electrodynamic analysis. The dense VIE system matrix is first represented by a +new cluster-based multilevel low-rank representation. In this representation, +all the admissible blocks associated with a single cluster are grouped together +and represented by a single low-rank block, whose rank is minimized based on +prescribed accuracy. From such an initial representation, an efficient +algorithm is developed to generate a minimal-rank ${\cal H}^2$-matrix +representation. This representation facilitates faster computation, and ensures +the same minimal rank's growth rate with electrical size as evaluated from +singular value decomposition. Taking into account the rank's growth with +electrical size, we develop linear-complexity ${\cal H}^2$-matrix-based storage +and matrix-vector multiplication, and thereby an $O(N)$ iterative VIE solver +regardless of electrical size. Moreover, we develop an $O(NlogN)$ matrix +inversion, and hence a fast $O(NlogN)$ \emph{direct} VIE solver for large-scale +electrodynamic analysis. Both theoretical analysis and numerical simulations of +large-scale $1$-, $2$- and $3$-D structures on a single-core CPU, resulting in +millions of unknowns, have demonstrated the low complexity and superior +performance of the proposed VIE electrodynamic solvers. 
%The algorithms
+developed in this work are kernel-independent, and hence applicable to other IE
operators as well.
+"
Optimizing Cost-Sensitive SVM for Imbalanced Data: Connecting Cluster to Classification," Class imbalance is one of the challenging problems for machine learning in
many real-world applications, such as coal and gas burst accident monitoring:
the burst premonition data is extremely scarce compared with the normal data,
yet it is precisely what we focus on. The cost-sensitive adjustment approach is
a typical algorithm-level method for countering data set imbalance. For the SVM
classifier, it is modified to incorporate a varying penalty parameter (C) for
each of the considered groups of examples. However, the C value is determined
empirically, or is calculated according to the evaluation metric, which needs
to be computed iteratively and is time-consuming. This paper presents a novel
cost-sensitive SVM method whose penalty parameter C is optimized on the basis
of the cluster probability density function (PDF), where the cluster PDF is
estimated only from a similarity matrix and some predefined hyper-parameters.
Experimental results on various standard benchmark data sets and real-world
data with different ratios of imbalance show that the proposed method is
effective in comparison with commonly used cost-sensitive techniques.
+"
Plain stopping time and conditional complexities revisited," In this paper we analyze the notion of ""stopping time complexity"", informally
defined as the amount of information needed to specify when to stop while
reading an infinite sequence. This notion was introduced by Vovk and Pavlovic
(2016).
It turns out that plain stopping time complexity of a binary string $x$
could be equivalently defined as (a) the minimal plain complexity of a Turing
machine that stops after reading $x$ on a one-directional input tape; (b) the
minimal plain complexity of an algorithm that enumerates a prefix-free set
containing $x$; (c)~the conditional complexity $C(x|x*)$ where $x$ in the
condition is understood as a prefix of an infinite binary sequence while the
first $x$ is understood as a terminated binary string; (d) as a minimal upper
semicomputable function $K$ such that each binary sequence has at most $2^n$
prefixes $z$ such that $K(z)<n$. [...] For $n>3$, $\Aut(P_n)$ is generated by the
subgroup $\Aut_c(P_n)$ of central automorphisms of $P_n$, the subgroup
$\Aut(B_n)$ of restrictions of automorphisms of $B_n$ on $P_n$ and one extra
automorphism $w_n$. We also investigate the lifting and extension problem for
automorphisms of some well-known exact sequences arising from braid groups, and
prove that the answers are negative in most cases. Specifically, we prove that
no non-trivial central automorphism of $P_n$ can be extended to an automorphism
of $B_n$.
+"
Spin phonon interactions and magneto-thermal transport behavior in p-Si," The spin-phonon interaction is the dominant process for spin relaxation in
Si, and as thermal transport in Si is dominated by phonons, one would expect
spin polarization to influence Si's thermal conductivity. Here we report
experimental evidence of just such a coupling. We have performed concurrent
measurements of spin, charge, and phonon transport in p-doped Si across a wide
range of temperatures. In an experimental system of a freestanding two um p-Si
beam coated on one side with a thin (25 nm) ferromagnetic spin injection layer,
we use the self-heating 3 omega method to measure changes in electrical and
thermal conductivity under the influence of a magnetic field.
These
magneto-thermal transport measurements reveal signatures in the variation of
electrical and thermal transport that are consistent with spin-phonon
interaction. Raman spectroscopy measurements and first-principles calculations
support that these variations are due to spin-phonon interaction. Spin
polarization leads to softening of phonon modes, a reduction in the group
velocity of acoustic modes, and a subsequent decrease in thermal conductivity
at room temperature. Moreover, magneto-thermal transport measurements as a
function of temperature indicate a change in the spin-phonon relaxation
behavior at low temperature.
+"
Benchmark of dynamic electron correlation models for seniority-zero wavefunctions and their application to thermochemistry," Wavefunctions restricted to electron-pair states are promising models to
describe static/nondynamic electron correlation effects encountered, for
instance, in bond-dissociation processes and transition-metal and actinide
chemistry. To reach spectroscopic accuracy, however, the missing dynamic
electron correlation effects that cannot be described by electron-pair states
need to be included \textit{a posteriori}. In this article, we extend the
previously presented perturbation theory models with an Antisymmetric Product
of 1-reference orbital Geminal (AP1roG) reference function that allow us to
describe both static/nondynamic and dynamic electron correlation effects.
Specifically, our perturbation theory models combine a diagonal and
off-diagonal zero-order Hamiltonian, a single-reference and multi-reference
dual state, and different excitation operators used to construct the projection
manifold. We benchmark all proposed models as well as an \textit{a posteriori}
linearized coupled cluster correction on top of AP1roG against CR-CCSD(T)
reference data for reaction energies of several closed-shell molecules that are
extrapolated to the basis set limit.
Moreover, we test the performance of our +new methods for multiple bond breaking processes in the N$_2$, C$_2$, and BN +dimers against MRCI-SD and MRCI-SD+Q reference data. Our numerical results +indicate that the best performance is obtained from a linearized coupled +cluster correction as well as second-order perturbation theory corrections +employing a diagonal and off-diagonal zero-order Hamiltonian and a +single-determinant dual state. These dynamic corrections on top of AP1roG allow +us to reliably model molecular systems dominated by static/nondynamic as well +as dynamic electron correlation. +" +Noncommutative resolutions of discriminants," We give an introduction to the McKay correspondence and its connection to +quotients of $\mathbb{C}^n$ by finite reflection groups. This yields a natural +construction of noncommutative resolutions of the discriminants of these +reflection groups. This paper is an extended version of E.F.'s talk with the +same title delivered at the ICRA. +" +Machine Learning $\mathbb{Z}_{2}$ Quantum Spin Liquids with Quasi-particle Statistics," After decades of progress and effort, obtaining a phase diagram for a +strongly-correlated topological system still remains a challenge. Although in +principle one could turn to Wilson loops and long-range entanglement, +evaluating these non-local observables at many points in phase space can be +prohibitively costly. With growing excitement over topological quantum +computation comes the need for an efficient approach for obtaining topological +phase diagrams. Here we turn to machine learning using quantum loop topography +(QLT), a notion we have recently introduced. Specifically, we propose a +construction of QLT that is sensitive to quasi-particle statistics. We then use +mutual statistics between the spinons and visons to detect a $\mathbb{Z}_{2}$ +quantum spin liquid in a multi-parameter phase space. 
We successfully obtain
the quantum phase boundary between the topological and trivial phases using a
simple feed-forward neural network. Furthermore, we demonstrate advantages of
our approach for the evaluation of phase diagrams relating to speed and
storage. Such statistics-based machine learning of topological phases opens new
efficient routes to studying topological phase diagrams in strongly correlated
systems.
+"
Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation," In Multimodal Neural Machine Translation (MNMT), a neural model generates a
translated sentence that describes an image, given the image itself and one
source description in English. This is considered the multimodal image
caption translation task. The images are processed with a Convolutional Neural
Network (CNN) to extract visual features exploitable by the translation model.
So far, the CNNs used are pre-trained on object detection and localization
tasks. We hypothesize that richer architectures, such as dense captioning
models, may be more suitable for MNMT and could lead to improved translations.
We extend this intuition to the word-embeddings, where we compute both
linguistic and visual representations for our corpus vocabulary. We combine and
compare different confi
+"
Delay-time distribution of core-collapse supernovae with late events resulting from binary interaction," Most massive stars, the progenitors of core-collapse supernovae, are in close
binary systems and may interact with their companion through mass transfer or
merging. We undertake a population synthesis study to compute the delay-time
distribution of core-collapse supernovae, that is, the supernova rate versus
time following a starburst, taking into account binary interactions. We test
the systematic robustness of our results by running various simulations to
account for the uncertainties in our standard assumptions.
We find that a +significant fraction, $15^{+9}_{-8}$%, of core-collapse supernovae are `late', +that is, they occur 50-200 Myrs after birth, when all massive single stars have +already exploded. These late events originate predominantly from binary systems +with at least one, or, in most cases, with both stars initially being of +intermediate mass ($4-8M_{\odot}$). The main evolutionary channels that +contribute often involve either the merging of the initially more massive +primary star with its companion or the engulfment of the remaining core of the +primary by the expanding secondary that has accreted mass at an earlier +evolutionary stage. Also, the total number of core-collapse supernovae +increases by $14^{+15}_{-14}$% because of binarity for the same initial stellar +mass. The high rate implies that we should have already observed such late +core-collapse supernovae, but have not recognized them as such. We argue that +$\phi$ Persei is a likely progenitor and that eccentric neutron star - white +dwarf systems are likely descendants. Late events can help explain the +discrepancy in the delay-time distributions derived from supernova remnants in +the Magellanic Clouds and extragalactic type Ia events, lowering the +contribution of prompt Ia events. We discuss ways to test these predictions and +speculate on the implications for supernova feedback in simulations of galaxy +evolution. +" +Ignore or Comply? On Breaking Symmetry in Consensus," We study consensus processes on the complete graph of $n$ nodes. Initially, +each node supports one from up to n opinions. Nodes randomly and in parallel +sample the opinions of constant many nodes. Based on these samples, they use an +update rule to change their own opinion. +The goal is to reach consensus, a configuration where all nodes support the +same opinion. We compare two well-known update rules: 2-Choices and 3-Majority. +In the former, each node samples two nodes and adopts their opinion if they +agree. 
In the latter, each node samples three nodes: If an opinion is supported +by at least two samples the node adopts it, otherwise it randomly adopts one of +the sampled opinions. Known results for these update rules focus on initial +configurations with a limited number of colors (say $n^{1/3}$ ), or typically +assume a bias, where one opinion has a much larger support than any other. For +such biased configurations, the time to reach consensus is roughly the same for +2-Choices and 3-Majority. +Interestingly, we prove that this is no longer true for configurations with a +large number of initial colors. In particular, we show that 3-Majority reaches +consensus with high probability in $O(n^{3/4}\log^{7/8}n)$ rounds, while +2-Choices can need $\Omega(n/\log n)$ rounds. We thus get the first +unconditional sublinear bound for 3-Majority and the first result separating +the consensus time of these processes. Along the way, we develop a framework +that allows a fine-grained comparison between consensus processes from a +specific class. We believe that this framework might help to classify the +performance of more consensus processes. +" +End to End Vehicle Lateral Control Using a Single Fisheye Camera," Convolutional neural networks are commonly used to control the steering angle +for autonomous cars. Most of the time, multiple long range cameras are used to +generate lateral failure cases. In this paper we present a novel model to +generate this data and label augmentation using only one short range fisheye +camera. We present our simulator and how it can be used as a consistent metric +for lateral end-to-end control evaluation. Experiments are conducted on a +custom dataset corresponding to more than 10000 km and 200 hours of open road +driving. Finally we evaluate this model on real world driving scenarios, open +road and a custom test track with challenging obstacle avoidance and sharp +turns. 
In our simulator based on real-world videos, the final model was capable +of more than 99% autonomy on urban road +" +Metrics for Deep Generative Models," Neural samplers such as variational autoencoders (VAEs) or generative +adversarial networks (GANs) approximate distributions by transforming samples +from a simple random source---the latent space---to samples from a more complex +distribution represented by a dataset. While the manifold hypothesis implies +that the density induced by a dataset contains large regions of low density, +the training criterions of VAEs and GANs will make the latent space densely +covered. Consequently points that are separated by low-density regions in +observation space will be pushed together in latent space, making stationary +distances poor proxies for similarity. We transfer ideas from Riemannian +geometry to this setting, letting the distance between two points be the +shortest path on a Riemannian manifold induced by the transformation. The +method yields a principled distance measure, provides a tool for visual +inspection of deep generative models, and an alternative to linear +interpolation in latent space. In addition, it can be applied for robot +movement generalization using previously learned skills. The method is +evaluated on a synthetic dataset with known ground truth; on a simulated robot +arm dataset; on human motion capture data; and on a generative model of +handwritten digits. +" +Posterior Inference for Sparse Hierarchical Non-stationary Models," Gaussian processes are valuable tools for non-parametric modelling, where +typically an assumption of stationarity is employed. While removing this +assumption can improve prediction, fitting such models is challenging. In this +work, hierarchical models are constructed based on Gaussian Markov random +fields with stochastic spatially varying parameters. 
Importantly, this allows
for non-stationarity while also addressing the computational burden through a
sparse representation of the precision matrix. The prior field is chosen to be
Matérn, and two hyperpriors, for the spatially varying parameters, are
considered. One hyperprior is Ornstein-Uhlenbeck, formulated through an
autoregressive process. The other corresponds to the widely used squared
exponential. In this setting, efficient Markov chain Monte Carlo (MCMC)
sampling is challenging due to the strong coupling a posteriori of the
parameters and hyperparameters. We develop and compare three MCMC schemes,
which are adaptive and therefore free of parameter tuning. Furthermore, a novel
extension to higher-dimensional settings is proposed through an additive
structure that retains the flexibility and scalability of the model, while also
inheriting interpretability from the additive approach. A thorough assessment
of the ability of the methods to efficiently explore the posterior distribution
and to account for non-stationarity is presented, in both simulated experiments
and a real-world computer emulation problem.
+"
A Semiotics-inspired Domain-Specific Modeling Language for Complex Event Processing Rules," Complex Event Processing (CEP) is a technique for handling data
flows. It allows pre-establishing conditions through rules and firing events
when certain patterns are found in the data flows. Because the rules for
defining such patterns are expressed with specific languages, users of these
technologies must understand the underlying expression syntax. To reduce the
complexity of writing CEP rules, some researchers are employing Domain-Specific
Modeling Languages (DSMLs) to provide modelling through visual tools. However,
existing approaches ignore some user design techniques that facilitate
usability. Thus, the resulting tools have eventually become more complex for
handling CEP than the conventional usage.
Also, research on DSML tools +targeting CEP does not present any evaluation around usability. This article +proposes a DSML combined with visual notations techniques to create CEP rules +with a more intuitive development model adapted for the non-expert user needs. +The resulting tool was evaluated by non-expert users that were capable of +easily creating CEP rules without prior knowledge of the underlying expression +language. +" +Fairly Dividing a Cake after Some Parts Were Burnt in the Oven," There is a heterogeneous resource that contains both good parts and bad +parts, for example, a cake with some parts burnt, a land-estate with some parts +heavily taxed, or a chore with some parts fun to do. The resource has to be +divided fairly among $n$ agents with different preferences, each of whom has a +personal value-density function on the resource. The value-density functions +can accept any real value --- positive, negative or zero. Each agent should +receive a connected piece and no agent should envy another agent. We prove that +such a division exists for 3 agents and present preliminary positive results +for larger numbers of agents. +" +"Atomic vapor as a source of tunable, non-Gaussian self-reconstructing optical modes"," In this manuscript, we demonstrate the ability of nonlinear light-atom +interactions to produce tunably non-Gaussian, partially self-healing optical +modes. Gaussian spatial-mode light tuned near to the atomic resonances in hot +rubidium vapor is shown to result in non-Gaussian output mode structures that +may be controlled by varying either the input beam power or the temperature of +the atomic vapor. We show that the output modes exhibit a degree of +self-reconstruction after encountering an obstruction in the beam path. The +resultant modes are similar to truncated Bessel-Gauss modes that exhibit the +ability to self-reconstruct earlier upon propagation than Gaussian modes. 
The +ability to generate tunable, self-reconstructing beams has potential +applications to a variety of imaging and communication scenarios. +" +Understanding Performance of Edge Content Caching for Mobile Video Streaming," Today's Internet has witnessed an increase in the popularity of mobile video +streaming, which is expected to exceed 3/4 of the global mobile data traffic by +2019. To satisfy the considerable amount of mobile video requests, video +service providers have been pushing their content delivery infrastructure to +edge networks--from regional CDN servers to peer CDN servers (e.g., +smartrouters in users' homes)--to cache content and serve users with storage +and network resources nearby. Among the edge network content caching paradigms, +Wi-Fi access point caching and cellular base station caching have become two +mainstream solutions. Thus, understanding the effectiveness and performance of +these solutions for large-scale mobile video delivery is important. However, +the characteristics and request patterns of mobile video streaming are unclear +in practical wireless network. In this paper, we use real-world datasets +containing 50 million trace items of nearly 2 million users viewing more than +0.3 million unique videos using mobile devices in a metropolis in China over 2 +weeks, not only to understand the request patterns and user behaviors in mobile +video streaming, but also to evaluate the effectiveness of Wi-Fi and +cellular-based edge content caching solutions. To understand performance of +edge content caching for mobile video streaming, we first present temporal and +spatial video request patterns, and we analyze their impacts on caching +performance using frequency-domain and entropy analysis approaches. We then +study the behaviors of mobile video users, including their mobility and +geographical migration behaviors. 
Using trace-driven experiments, we compare +strategies for edge content caching including LRU and LFU, in terms of +supporting mobile video requests. Moreover, we design an efficient caching +strategy based on the measurement insights and experimentally evaluate its +performance. +" +Combination of Hidden Markov Random Field and Conjugate Gradient for Brain Image Segmentation," Image segmentation is the process of partitioning the image into significant +regions easier to analyze. Nowadays, segmentation has become a necessity in +many practical medical imaging methods as locating tumors and diseases. Hidden +Markov Random Field model is one of several techniques used in image +segmentation. It provides an elegant way to model the segmentation process. +This modeling leads to the minimization of an objective function. Conjugate +Gradient algorithm (CG) is one of the best known optimization techniques. This +paper proposes the use of the Conjugate Gradient algorithm (CG) for image +segmentation, based on the Hidden Markov Random Field. Since derivatives are +not available for this expression, finite differences are used in the CG +algorithm to approximate the first derivative. The approach is evaluated using +a number of publicly available images, where ground truth is known. The Dice +Coefficient is used as an objective criterion to measure the quality of +segmentation. The results show that the proposed CG approach compares favorably +with other variants of Hidden Markov Random Field segmentation algorithms. +" +Layer-wise Relevance Propagation for Explainable Recommendations," In this paper, we tackle the problem of explanations in a deep-learning based +model for recommendations by leveraging the technique of layer-wise relevance +propagation. We use a Deep Convolutional Neural Network to extract relevant +features from the input images before identifying similarity between the images +in feature space. 
Relationships between the images are identified by the model +and layer-wise relevance propagation is used to infer pixel-level details of +the images that may have significantly informed the model's choice. We evaluate +our method on an Amazon products dataset and demonstrate the efficacy of our +approach. +" +Compressed channeled spectropolarimetry," Channeled spectropolarimetry measures the spectrally resolved Stokes +parameters. A key aspect of this technique is to accurately reconstruct the +Stokes parameters from a modulated measurement of the channeled +spectropolarimeter. The state-of-the-art reconstruction algorithm uses the +Fourier transform to extract the Stokes parameters from channels in the Fourier +domain. While this approach is straightforward, it can be sensitive to noise +and channel cross-talk, and it imposes bandwidth limitations that cut off high +frequency details. To overcome these drawbacks, we present a reconstruction +method called compressed channeled spectropolarimetry. In our proposed +framework, reconstruction in channeled spectropolarimetry is an underdetermined +problem, where we take N measurements and solve for 3N unknown Stokes +parameters. We formulate an optimization problem by creating a mathematical +model of the channeled spectropolarimeter with inspiration from compressed +sensing. We show that our approach offers greater noise robustness and +reconstruction accuracy compared with the Fourier transform technique in +simulations and experimental measurements. By demonstrating more accurate +reconstructions, we push performance to the native resolution of the sensor, +allowing more information to be recovered from a single measurement of a +channeled spectropolarimeter. +" +Limits for Rumor Spreading in stochastic populations," Biological systems can share and collectively process information to yield +emergent effects, despite inherent noise in communication. 
While man-made +systems often employ intricate structural solutions to overcome noise, the +structure of many biological systems is more amorphous. It is not well +understood how communication noise may affect the computational repertoire of +such groups. To approach this question we consider the basic collective task of +rumor spreading, in which information from few knowledgeable sources must +reliably flow into the rest of the population. +In order to study the effect of communication noise on the ability of groups +that lack stable structures to efficiently solve this task, we consider a noisy +version of the uniform PULL model. We prove a lower bound which implies that, +in the presence of even moderate levels of noise that affect all facets of the +communication, no scheme can significantly outperform the trivial one in which +agents have to wait until directly interacting with the sources. Our results +thus show an exponential separation between the uniform PUSH and PULL +communication models in the presence of noise. Such separation may be +interpreted as suggesting that, in order to achieve efficient rumor spreading, +a system must exhibit either some degree of structural stability or, +alternatively, some facet of the communication which is immune to noise. +We corroborate our theoretical findings with a new analysis of experimental +data regarding recruitment in Cataglyphis niger desert ants. +" +Approximability of Discriminators Implies Diversity in GANs," While Generative Adversarial Networks (GANs) have empirically produced +impressive results on learning complex real-world distributions, recent work +has shown that they suffer from lack of diversity or mode collapse. The +theoretical work of Arora et al. suggests a dilemma about GANs' statistical +properties: powerful discriminators cause overfitting, whereas weak +discriminators cannot detect mode collapse. 
+In contrast, we show in this paper that GANs can in principle learn +distributions in Wasserstein distance (or KL-divergence in many cases) with +polynomial sample complexity, if the discriminator class has strong +distinguishing power against the particular generator class (instead of against +all possible generators). For various generator classes such as mixture of +Gaussians, exponential families, and invertible neural networks generators, we +design corresponding discriminators (which are often neural nets of specific +architectures) such that the Integral Probability Metric (IPM) induced by the +discriminators can provably approximate the Wasserstein distance and/or +KL-divergence. This implies that if the training is successful, then the +learned distribution is close to the true distribution in Wasserstein distance +or KL divergence, and thus cannot drop modes. Our preliminary experiments show +that on synthetic datasets the test IPM is well correlated with KL divergence, +indicating that the lack of diversity may be caused by the sub-optimality in +optimization instead of statistical inefficiency. +" +Identifying Nonlinear 1-Step Causal Influences in Presence of Latent Variables," We propose an approach for learning the causal structure in stochastic +dynamical systems with a $1$-step functional dependency in the presence of +latent variables. We propose an information-theoretic approach that allows us +to recover the causal relations among the observed variables as long as the +latent variables evolve without exogenous noise. We further propose an +efficient learning method based on linear regression for the special sub-case +when the dynamics are restricted to be linear. We validate the performance of +our approach via numerical simulations. 
+
" +The discovery of tidal tails around the globular cluster NGC 7492 with Pan-STARRS1," We report the discovery of tidal tails around the Galactic globular cluster +NGC 7492, based on the Data Release 1 of the Pan-STARRS 1 survey. The tails +were detected with a version of the matched filter technique applied to the +$(g-r,r)$ and $(g-i,i)$ color-magnitude diagrams. Tidal tails emerging from the +cluster extend at least $\sim$3.5 degrees in the North-East to South-East +direction, equivalent to $\sim1.5$ kpc in projected length. +" +Strong 2.t and Strong 3.t Transformations for Strong M-equivalence," Parikh matrices have been extensively investigated due to their usefulness in +studying subword occurrences in words. Due to the dependency of Parikh matrices +on the ordering of the alphabet, strong M-equivalence was proposed as an +order-independent alternative to M-equivalence in studying words possessing the +same Parikh matrix. This paper introduces and studies the notions of strong 2.t +and strong 3.t transformations in determining when two ternary words are +strongly M-equivalent. The irreducibility of strong 2.t transformations is +then scrutinized, exemplified by a structural characterization of irreducible +strong 2.2 transformations. The common limitation of these transformations in +characterizing strong M-equivalence is then addressed. +" +Integrals of eigenfunctions over curves in surfaces of nonpositive curvature," Let $(M,g)$ be a compact, 2-dimensional Riemannian manifold with nonpositive +sectional curvature. Let $\Delta_g$ be the Laplace-Beltrami operator +corresponding to the metric $g$ on $M$, and let $e_\lambda$ be $L^2$-normalized +eigenfunctions of $\Delta_g$ with eigenvalue $\lambda$, i.e. \[ -\Delta_g +e_\lambda = \lambda^2 e_\lambda. 
\] We prove \[ \left| \int_{\mathbb R} b(t) +e_\lambda (\gamma(t)) \, dt \right| = o(1) \quad \text{ as } \lambda \to \infty +\] where $b$ is a smooth, compactly supported function on $\mathbb R$ and +$\gamma$ is a curve parametrized by arc-length whose geodesic curvature +$\kappa(\gamma(t))$ avoids two critical curvatures $\mathbf +k(\gamma'^\perp(t))$ and $\mathbf k(-\gamma'^{\perp}(t))$ for each $t \in +\operatorname{supp} b$. $\mathbf k(v)$ denotes the curvature of a circle with +center taken to infinity along the geodesic ray in direction $-v$. +" +Wavenet based low rate speech coding," Traditional parametric coding of speech facilitates low rate but provides +poor reconstruction quality because of the inadequacy of the model used. We +describe how a WaveNet generative speech model can be used to generate high +quality speech from the bit stream of a standard parametric coder operating at +2.4 kb/s. We compare this parametric coder with a waveform coder based on the +same generative model and show that approximating the signal waveform incurs a +large rate penalty. Our experiments confirm the high performance of the WaveNet +based coder and show that the speech produced by the system is able to +additionally perform implicit bandwidth extension and does not significantly +impair recognition of the original speaker for the human listener, even when +that speaker has not been used during the training of the generative model. +" +On Synchronization of Dynamical Systems over Directed Switching Topologies: An Algebraic and Geometric Perspective," In this paper, we aim to investigate the synchronization problem of dynamical +systems, which can be of generic linear or Lipschitz nonlinear type, +communicating over directed switching network topologies. A mild connectivity +assumption on the switching topologies is imposed, which allows them to be +directed and jointly connected. 
We propose a novel analysis framework from both +algebraic and geometric perspectives to justify the attractiveness of the +synchronization manifold. Specifically, it is proven that the complementary +space of the synchronization manifold can be spanned by certain subspaces. +These subspaces can be the eigenspaces of the nonzero eigenvalues of Laplacian +matrices in the linear case. They can also be subspaces in which the projection of +the nonlinear self-dynamics still retains the Lipschitz property. This allows +us to project the states of the dynamical systems into these subspaces and +transform the synchronization problem under consideration equivalently into a +convergence one of the projected states in each subspace. Then, assuming the +joint connectivity condition on the communication topologies, we are able to +work out a simple yet effective and unified convergence analysis for both types +of dynamical systems. More specifically, for partial-state coupled generic +linear systems, it is proven that synchronization can be reached if an extra +condition, which is easy to verify in several cases, on the system dynamics is +satisfied. For Lipschitz-type nonlinear systems with positive-definite inner +coupling matrix, synchronization is realized if the coupling strength is strong +enough to stabilize the evolution of the projected states in each subspace +under certain conditions. +" +"Independence, Conditionality and Structure of Dempster-Shafer Belief Functions"," Several approaches of structuring (factorization, decomposition) of +Dempster-Shafer joint belief functions from literature are reviewed with +special emphasis on their capability to capture independence from the point of +view of the claim that belief functions generalize bayes notion of probability. +It is demonstrated that Zhu and Lee's {Zhu:93} logical networks and Smets' +{Smets:93} directed acyclic graphs are unable to capture statistical +dependence/independence of bayesian networks {Pearl:88}. 
On the other hand, +though Shenoy and Shafer's hypergraphs can explicitly represent bayesian +network factorization of bayesian belief functions, they disclaim any need for +representation of independence of variables in belief functions. +Cano et al. {Cano:93} reject the hypergraph representation of Shenoy and +Shafer just on grounds of missing representation of variable independence, but +in their frameworks some belief functions factorizable in Shenoy/Shafer +framework cannot be factored. +The approach in {Klopotek:93f} on the other hand combines the merits of both +Cano et al. and of Shenoy/Shafer approach in that for Shenoy/Shafer approach no +simpler factorization than that in {Klopotek:93f} approach exists and on the +other hand all independences among variables captured in Cano et al. framework +and many more are captured in {Klopotek:93f} approach.% +" +Wave Manipulations by Coherent Perfect Channeling," We report experimental and theoretical investigations of coherent perfect +channeling (CPC), a process that two incoming coherent waves in waveguides are +completely channeled into one or two other waveguides with little energy +dissipation via strong coherent interaction between the two waves mediated by a +deep subwavelength dimension scatterer at the common junction of the +waveguides. Two such scatterers for acoustic waves are discovered, one +confirmed by experiments and the other predicted by theory, and their +scattering matrices are formulated. Scatterers with other CPC scattering +matrices are explored, and preliminary investigations of their properties are +conducted. The scattering matrix formulism makes it possible to extend the +applicable domain of CPC to other scalar waves, such as electromagnetic waves +and quantum wavefunctions. 
+" +The Coldest Place in the Universe: Probing the Ultra-Cold Outflow and Dusty Disk in the Boomerang Nebula," Our Cycle 0 ALMA observations confirmed that the Boomerang Nebula is the +coldest known object in the Universe, with a massive high-speed outflow that +has cooled significantly below the cosmic background temperature. Our new CO +1-0 data reveal heretofore unseen distant regions of this ultra-cold outflow, +out to $\gtrsim120,000$ AU. We find that in the ultra-cold outflow, the +mass-loss rate (dM/dt) increases with radius, similar to its expansion velocity +($V$) - taking $V\propto r$, we find $dM/dt \propto r^{0.9-2.2}$. The mass in +the ultra-cold outflow is $\gtrsim3.3$ Msun, and the Boomerang's main-sequence +progenitor mass is $\gtrsim4$ Msun. Our high angular resolution ($\sim$0"".3) CO +J=3-2 map shows the inner bipolar nebula's precise, highly-collimated shape, +and a dense central waist of size (FWHM) $\sim$1740 AU$\times275$ AU. The +molecular gas and the dust as seen in scattered light via optical HST imaging +show a detailed correspondence. The waist shows a compact core in thermal dust +emission at 0.87-3.3 mm, which harbors $(4-7)\times10^{-4}$ Msun~of very large +($\sim$mm-to-cm sized), cold ($\sim20-30$ K) grains. The central waist +(assuming its outer regions to be expanding) and fast bipolar outflow have +expansion ages of $\lesssim1925$ yr and $\le1050$ yr: the ""jet-lag"" (i.e., +torus age minus the fast-outflow age) in the Boomerang supports models in which +the primary star interacts directly with a binary companion. We argue that this +interaction resulted in a common-envelope configuration while the Boomerang's +primary was an RGB or early-AGB star, with the companion finally merging into +the primary's core, and ejecting the primary's envelope that now forms the +ultra-cold outflow. 
+" +The CARMENES search for exoplanets around M dwarfs: High-resolution optical and near-infrared spectroscopy of 324 survey stars," The CARMENES radial velocity (RV) survey is observing 324 M dwarfs to search +for any orbiting planets. In this paper, we present the survey sample by +publishing one CARMENES spectrum for each M dwarf. These spectra cover the +wavelength range 520--1710nm at a resolution of at least $R > 80,000$, and we +measure its RV, H$\alpha$ emission, and projected rotation velocity. We present +an atlas of high-resolution M-dwarf spectra and compare the spectra to +atmospheric models. To quantify the RV precision that can be achieved in +low-mass stars over the CARMENES wavelength range, we analyze our empirical +information on the RV precision from more than 6500 observations. We compare +our high-resolution M-dwarf spectra to atmospheric models where we determine +the spectroscopic RV information content, $Q$, and signal-to-noise ratio. We +find that for all M-type dwarfs, the highest RV precision can be reached in the +wavelength range 700--900nm. Observations at longer wavelengths are equally +precise only at the very latest spectral types (M8 and M9). We demonstrate that +in this spectroscopic range, the large amount of absorption features +compensates for the intrinsic faintness of an M7 star. To reach an RV precision +of 1ms$^{-1}$ in very low mass M dwarfs at longer wavelengths likely requires +the use of a 10m class telescope. For spectral types M6 and earlier, the +combination of a red visual and a near-infrared spectrograph is ideal to search +for low-mass planets and to distinguish between planets and stellar +variability. At a 4m class telescope, an instrument like CARMENES has the +potential to push the RV precision well below the typical jitter level of +3-4ms$^{-1}$. 
+" +Convolutional Neural Knowledge Graph Learning," Previous models for learning entity and relationship embeddings of knowledge +graphs such as TransE, TransH, and TransR aim to explore new links based on +learned representations. However, these models interpret relationships as +simple translations on entity embeddings. In this paper, we try to learn more +complex connections between entities and relationships. In particular, we use a +Convolutional Neural Network (CNN) to learn entity and relationship +representations in knowledge graphs. In our model, we treat entities and +relationships as one-dimensional numerical sequences with the same length. +After that, we combine each triplet of head, relationship, and tail together as +a matrix with height 3. CNN is applied to the triplets to get confidence +scores. Positive and manually corrupted negative triplets are used to train the +embeddings and the CNN model simultaneously. Experimental results on public +benchmark datasets show that the proposed model outperforms state-of-the-art +models on exploring unseen relationships, which proves that CNN is effective to +learn complex interactive patterns between entities and relationships. +" +Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent," In this paper, we propose a novel approach to automatically determine the +batch size in stochastic gradient descent methods. The choice of the batch size +induces a trade-off between the accuracy of the gradient estimate and the cost +in terms of samples of each update. We propose to determine the batch size by +optimizing the ratio between a lower bound to a linear or quadratic Taylor +approximation of the expected improvement and the number of samples used to +estimate the gradient. The performance of the proposed approach is empirically +compared with related methods on popular classification tasks. +The work was presented at the NIPS workshop on Optimizing the Optimizers. +Barcelona, Spain, 2016. 
+" +Existence and uniqueness of solution for a nonhomogeneous nonlocal problem," In this paper we investigate a class of elliptic problems involving a +nonlocal Kirchhoff type operator with variable coefficients and data changing +its sign. Under appropriated conditions on the coefficients, we have shown +existence and uniqueness of solution. +" +Large deviations theory approach to cosmic shear calculations: the one-point aperture mass," This paper presents a general formalism that allows the derivation of the +cumulant generating function and one-point Probability Distribution Function +(PDF) of the aperture mass ($\hat{M}_{ap}$), a common observable for cosmic +shear observations. Our formalism is based on the Large Deviation Principle +(LDP) applied, in such cosmological context, to an arbitrary set of densities +in concentric cells. We show here that the LDP can indeed be used for a much +larger family of observables than previously envisioned, such as those built +from continuous and nonlinear functionals of the density profiles. The general +expression of the observable aperture mass depends on reduced shear profile +making it a rather involved function of the projected density field. Because of +this difficulty, an approximation that is commonly employed consists in +replacing the reduced shear by the shear in such a construction neglecting +therefore non-linear effects. We were precisely able to quantify how this +approximation affects the $\hat{M}_{ap}$ statistical properties. In particular +we derive the corrective term for the skewness of the $\hat{M}_{ap}$ and +reconstruct its one-point PDF. +" +"Band Structure, Band Offsets, Substitutional Doping, and Schottky Barriers in InSe"," We present a comprehensive study of the electronic structure of the layered +semiconductor InSe using density functional theory. We calculate the band +structure of the monolayer and bulk material with the band gap corrected using +hybrid functionals. 
The band gap of the monolayer is 2.4 eV. The band edge +states are surprisingly isotropic. The electron affinities and band offsets are +then calculated for heterostructures as would be used in tunnel field effect +transistors (TFETs). The ionization potential of InSe is quite large, similar +to that of HfSe2 or SnSe2, and so InSe is suitable to act as the drain in the +TFET. The intrinsic defects are then calculated. For Se-rich layers, the Se +adatom is the lowest energy defect, whereas for In-rich layers, the In adatom +is most stable for Fermi energies across most of the gap. Both substitutional +donors and acceptors are calculated to be shallow, and not reconstructed. +Finally, the Schottky barriers of metals are found to be strongly pinned, with +the Fermi level pinned by metal induced gap states about 0.5 eV above the +valence band edge. +" +Scalable on-chip quantum state tomography," Quantum information systems are on a path to vastly exceed the complexity of +any classical device. The number of entangled qubits in quantum devices is +rapidly increasing and the information required to fully describe these systems +scales exponentially with qubit number. This scaling is the key benefit of +quantum systems, however it also presents a severe challenge. To characterize +such systems typically requires an exponentially long sequence of different +measurements, becoming highly resource demanding for large numbers of qubits. +Here we propose a novel and scalable method to characterize quantum systems, +where the complexity of the measurement process only scales linearly with the +number of qubits. We experimentally demonstrate an integrated photonic chip +capable of measuring two- and three-photon quantum states with reconstruction +fidelity of 99.67%. +" +Graph Partition Neural Networks for Semi-Supervised Classification," We present graph partition neural networks (GPNN), an extension of graph +neural networks (GNNs) able to handle extremely large graphs. 
GPNNs alternate +between locally propagating information between nodes in small subgraphs and +globally propagating information between the subgraphs. To efficiently +partition graphs, we experiment with several partitioning algorithms and also +propose a novel variant for fast processing of large scale graphs. We +extensively test our model on a variety of semi-supervised node classification +tasks. Experimental results indicate that GPNNs are either superior or +comparable to state-of-the-art methods on a wide variety of datasets for +graph-based semi-supervised classification. We also show that GPNNs can achieve +similar performance as standard GNNs with fewer propagation steps. +" +Nonparametric competing risks analysis using Bayesian Additive Regression Trees (BART)," Many time-to-event studies are complicated by the presence of competing +risks. Such data are often analyzed using Cox models for the cause specific +hazard function or Fine-Gray models for the subdistribution hazard. In practice +regression relationships in competing risks data with either strategy are often +complex and may include nonlinear functions of covariates, interactions, +high-dimensional parameter spaces and nonproportional cause specific or +subdistribution hazards. Model misspecification can lead to poor predictive +performance. To address these issues, we propose a novel approach to flexible +prediction modeling of competing risks data using Bayesian Additive Regression +Trees (BART). We study the simulation performance in two-sample scenarios as +well as a complex regression setting, and benchmark its performance against +standard regression techniques as well as random survival forests. We +illustrate the use of the proposed method on a recently published study of +patients undergoing hematopoietic stem cell transplantation. 
+" +Performance Bounds For Co-/Sparse Box Constrained Signal Recovery," The recovery of structured signals from a few linear measurements is a +central point in both compressed sensing (CS) and discrete tomography. In CS +the signal structure is described by means of a low complexity model e.g. +co-/sparsity. The CS theory shows that any signal/image can be undersampled at +a rate dependent on its intrinsic complexity. Moreover, in such undersampling +regimes, the signal can be recovered by sparsity promoting convex +regularization like $\ell_1$- or total variation (TV-) minimization. Precise +relations between many low complexity measures and the sufficient number of +random measurements are known for many sparsity promoting norms. However, a +precise estimate of the undersampling rate for the TV seminorm is still +lacking. We address this issue by: a) providing dual certificates testing +uniqueness of a given cosparse signal with bounded signal values, b) +approximating the undersampling rates via the statistical dimension of the TV +descent cone and c) showing empirically that the provided rates also hold for +tomographic measurements. +" +On the proper treatment of improper distributions," The axiomatic foundation of probability theory presented by Kolmogorov has +been the basis of modern theory for probability and statistics. In certain +applications it is, however, necessary or convenient to allow improper +(unbounded) distributions, which is often done without a theoretical +foundation. The paper reviews a recent theory which includes improper +distributions, and which is related to Renyi's theory of conditional +probability spaces. It is in particular demonstrated how the theory leads to +simple explanations of apparent paradoxes known from the Bayesian literature. 
+Several examples from statistical practice with improper distributions are +discussed in light of the given theoretical results, which also include a +recent theory of convergence of proper distributions to improper ones. +" +Traces on reduced group C*-algebras," In this short note we prove that the reduced group C*-algebra of a locally +compact group admits a non-zero trace if and only if the amenable radical of +the group is open. This completely answers a question raised by Forrest, Spronk +and Wiersma. +" +Rate of Convergence of General Phase Field Equations towards their Homogenized Limit," Over the last few decades, phase-field equations have found increasing +applicability in a wide range of mathematical-scientific fields (e.g. geometric +PDEs and mean curvature flow, materials science for the study of phase +transitions) but also engineering ones (e.g. as a computational tool in +chemical engineering for interfacial flow studies). Here, we focus on +phase-field equations in strongly heterogeneous materials with perforations +such as porous media. To the best of our knowledge, we provide the first +derivation of error estimates for fourth order, homogenized, and nonlinear +evolution equations. Our fourth order problem induces a slightly lower +convergence rate, i.e., $\epsilon^{1/4}$, where $\epsilon$ denotes the +material's specific heterogeneity, than established for second-order elliptic +problems (e.g. \cite{Zhikov2006}) for the error between the effective +macroscopic solution of the (new) upscaled formulation and the solution of the +microscopic phase field problem. We hope that our study will motivate new +modelling, analytic, and computational perspectives for interfacial transport +and phase transformations in strongly heterogeneous environments. 
+" +Spin-wave excitations in the SDW state of iron pnictides: a comparison between the roles of interaction parameters," We investigate the role of Hund's coupling in the spin-wave excitations of +the ($\pi, 0$) ordered magnetic state within a five-orbital tight-binding model +for iron pnictides. To differentiate between the roles of intraorbital Coulomb +interaction and Hund's coupling, we focus on the self-consistently obtained +mean-field SDW state with a fixed magnetic moment obtained by using different +sets of interaction parameters. We find that the Hund's coupling is crucial for +the description of various experimentally observed characteristics of the +spin-wave excitations including the anisotropy, energy-dependent behavior, and +spin-wave spectral weight distribution. +" +Artificial Neural Networks that Learn to Satisfy Logic Constraints," Logic-based problems such as planning, theorem proving, or puzzles, typically +involve combinatoric search and structured knowledge representation. Artificial +neural networks are very successful statistical learners, however, for many +years, they have been criticized for their weaknesses in representing and in +processing complex structured knowledge which is crucial for combinatoric +search and symbol manipulation. Two neural architectures are presented, which +can encode structured relational knowledge in neural activation, and store +bounded First Order Logic constraints in connection weights. Both architectures +learn to search for a solution that satisfies the constraints. Learning is done +by unsupervised practicing on problem instances from the same domain, in a way +that improves the network-solving speed. No teacher exists to provide answers +for the problem instances of the training and test sets. However, the domain +constraints are provided as prior knowledge to a loss function that measures +the degree of constraint violations. 
Iterations of activation calculation and +learning are executed until a solution that maximally satisfies the constraints +emerges on the output units. As a test case, block-world planning problems are +used to train networks that learn to plan in that domain, but the techniques +proposed could be used more generally as in integrating prior symbolic +knowledge with statistical learning +" +Evaluating gender portrayal in Bangladeshi TV," Computer Vision and machine learning methods were previously used to reveal +screen presence of genders in TV and movies. In this work, using head pose, +gender detection, and skin color estimation techniques, we demonstrate that the +gender disparity in TV in a South Asian country such as Bangladesh exhibits +unique characteristics and is sometimes counter-intuitive to popular +perception. We demonstrate a noticeable discrepancy in female screen presence +in Bangladeshi TV advertisements and political talk shows. Further, contrary to +popular hypotheses, we demonstrate that lighter-toned skin colors are less +prevalent than darker complexions, and additionally, quantifiable body language +markers do not provide conclusive insights about gender dynamics. Overall, +these gender portrayal parameters reveal the different layers of onscreen +gender politics and can help direct incentives to address existing disparities +in a nuanced and targeted manner. +" +A Distributional Perspective on Reinforcement Learning," In this paper we argue for the fundamental importance of the value +distribution: the distribution of the random return received by a reinforcement +learning agent. This is in contrast to the common approach to reinforcement +learning which models the expectation of this return, or value. Although there +is an established body of literature studying the value distribution, thus far +it has always been used for a specific purpose such as implementing risk-aware +behaviour. 
We begin with theoretical results in both the policy evaluation and +control settings, exposing a significant distributional instability in the +latter. We then use the distributional perspective to design a new algorithm +which applies Bellman's equation to the learning of approximate value +distributions. We evaluate our algorithm using the suite of games from the +Arcade Learning Environment. We obtain both state-of-the-art results and +anecdotal evidence demonstrating the importance of the value distribution in +approximate reinforcement learning. Finally, we combine theoretical and +empirical evidence to highlight the ways in which the value distribution +impacts learning in the approximate setting. +" +Computing Individual Risks based on Family History in Genetic Disease in the Presence of Competing Risks," When considering a genetic disease with variable age at onset (ex: diabetes , +familial amyloid neuropathy, cancers, etc.), computing the individual risk of +the disease based on family history (FH) is of critical interest both for +clinicians and patients. Such a risk is very challenging to compute because: 1) +the genotype X of the individual of interest is in general unknown; 2) the +posterior distribution P(X|FH, T > t) changes with t (T is the age at disease +onset for the targeted individual); 3) the competing risk of death is not +negligible. In this work, we present a modeling of this problem using a +Bayesian network mixed with (right-censored) survival outcomes where hazard +rates only depend on the genotype of each individual. We explain how belief +propagation can be used to obtain posterior distribution of genotypes given the +FH, and how to obtain a time-dependent posterior hazard rate for any individual +in the pedigree. Finally, we use this posterior hazard rate to compute +individual risk, with or without the competing risk of death. Our method is +illustrated using the Claus-Easton model for breast cancer (BC). 
This model
+assumes an autosomal dominant genetic risk factor such that non-carriers
+(genotype 00) have a BC hazard rate $\lambda_0(t)$ while carriers (genotypes
+01, 10 and 11) have a (much greater) hazard rate $\lambda_1(t)$. Both hazard
+rates are assumed to be piecewise constant with known values (cuts at 20, 30,
+..., 80 years). The competing risk of death is derived from the national French
+registry.
+"
+The Impact of Feature Selection on Predicting the Number of Bugs," Bug prediction is the process of training a machine learning model on
+software metrics and fault information to predict bugs in software entities.
+While feature selection is an important step in building a robust prediction
+model, there is insufficient evidence about its impact on predicting the number
+of bugs in software systems. We study the impact of both correlation-based
+feature selection (CFS) filter methods and wrapper feature selection methods on
+five widely-used prediction models and demonstrate how these models perform
+with or without feature selection to predict the number of bugs in five
+different open source Java software systems. Our results show that wrappers
+outperform the CFS filter; they improve prediction accuracy by up to 33% while
+eliminating more than half of the features. We also observe that though the
+same feature selection method chooses different feature subsets in different
+projects, this subset always contains a mix of source code and change metrics.
+"
+Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks," An important goal of computer vision is to build systems that learn visual
+representations over time that can be applied to many tasks. In this paper, we
+investigate a vision-language embedding as a core representation and show that
+it leads to better cross-task transfer than standard multi-task learning.
In +particular, the task of visual recognition is aligned to the task of visual +question answering by forcing each to use the same word-region embeddings. We +show this leads to greater inductive transfer from recognition to VQA than +standard multitask learning. Visual recognition also improves, especially for +categories that have relatively few recognition training labels but appear +often in the VQA setting. Thus, our paper takes a small step towards creating +more general vision systems by showing the benefit of interpretable, flexible, +and trainable core representations. +" +On the Dynamical Stability and Instability of Parker Problem," We investigate a perturbation problem for the three-dimensional compressible +isentropic viscous magnetohydrodynamic system with zero resistivity in the +presence of a modified gravitational force in a vertical strip domain in which +the velocity of the fluid is non-slip on the boundary, and focus on the +stabilizing effect of the (equilibrium) magnetic field through the non-slip +boundary condition. We show that there is a discriminant $\Xi$, depending on +the known physical parameters, for the stability/instability of the +perturbation problem. More precisely, if $\Xi<0$, then the perturbation problem +is unstable, i.e., the Parker instability occurs, while if $\Xi>0$ and the +initial perturbation satisfies some relations, then there exists a global +(perturbation) solution which decays algebraically to zero in time, i.e., the +Parker instability does not happen. The stability results in this paper reveal +the stabilizing effect of the magnetic field through the non-slip boundary +condition and the importance of boundary conditions upon the Parker +instability, and demonstrate that a sufficiently strong magnetic field can +prevent the Parker instability from occurring. 
In addition, based on the +instability results, we further rigorously verify the Parker instability under +Schwarzschild's or Tserkovnikov's instability conditions in the sense of +Hadamard for a horizontally periodic domain. +" +Characterizing Transgender Health Issues in Twitter," Although there are millions of transgender people in the world, a lack of +information exists about their health issues. This issue has consequences for +the medical field, which only has a nascent understanding of how to identify +and meet this population's health-related needs. Social media sites like +Twitter provide new opportunities for transgender people to overcome these +barriers by sharing their personal health experiences. Our research employs a +computational framework to collect tweets from self-identified transgender +users, detect those that are health-related, and identify their information +needs. This framework is significant because it provides a macro-scale +perspective on an issue that lacks investigation at national or demographic +levels. Our findings identified 54 distinct health-related topics that we +grouped into 7 broader categories. Further, we found both linguistic and +topical differences in the health-related information shared by transgender men +(TM) as com-pared to transgender women (TW). These findings can help inform +medical and policy-based strategies for health interventions within transgender +communities. Also, our proposed approach can inform the development of +computational strategies to identify the health-related information needs of +other marginalized populations. +" +CR-Analogue of Siu-$\partial\bar{\partial}$-formula and Applications to Rigidity problem for pseudo-Hermitian harmonic maps," We give several versions of Siu's $\partial\bar{\partial}$-formula for maps +from a strictly pseudoconvex pseudo-Hermitian manifold $(M^{2m+1}, \theta)$ +into a Kähler manifold $(N^n, g)$. 
We also define and study the notion of +pseudo-Hermitian harmonicity for maps from $M$ into $N$. In particular, we +prove a CR version of Siu Rigidity Theorem for pseudo-Hermitian harmonic maps +from a pseudo-Hermitian manifold with vanishing Webster torsion into a Kähler +manifold having strongly negative curvature. +" +Speculations on homological mirror symmetry for hypersurfaces in $(\mathbb{C}^*)^n$," Given an algebraic hypersurface $H=f^{-1}(0)$ in $(\mathbb{C}^*)^n$, +homological mirror symmetry relates the wrapped Fukaya category of $H$ to the +derived category of singularities of the mirror Landau-Ginzburg model. We +propose an enriched version of this picture which also features the wrapped +Fukaya category of the complement $(\mathbb{C}^*)^n\setminus H$ and the +Fukaya-Seidel category of the Landau-Ginzburg model $((\mathbb{C}^*)^n,f)$. We +illustrate our speculations on simple examples, and sketch a proof of +homological mirror symmetry for higher-dimensional pairs of pants. +" +Constant Step Size Stochastic Gradient Descent for Probabilistic Modeling," Stochastic gradient methods enable learning probabilistic models from large +amounts of data. While large step-sizes (learning rates) have shown to be best +for least-squares (e.g., Gaussian noise) once combined with parameter +averaging, these are not leading to convergent algorithms in general. In this +paper, we consider generalized linear models, that is, conditional models based +on exponential families. We propose averaging moment parameters instead of +natural parameters for constant-step-size stochastic gradient descent. For +finite-dimensional models, we show that this can sometimes (and surprisingly) +lead to better predictions than the best linear model. For infinite-dimensional +models, we show that it always converges to optimal predictions, while +averaging natural parameters never does. 
We illustrate our findings with +simulations on synthetic data and classical benchmarks with many observations. +" +Virtual and welded periods of classical knots," We show that any virtual or welded period of a classical knot $K$ can be +realized as a classical period. A direct consequence is that a classical knot +admits only finitely many virtual or welded periods. +" +Optimal hybrid block bootstrap for sample quantiles under weak dependence," We establish a general theory of optimality for block bootstrap distribution +estimation for sample quantiles under a mild strong mixing assumption. In +contrast to existing results, we study the block bootstrap for varying numbers +of blocks. This corresponds to a hybrid between the subsampling bootstrap and +the moving block bootstrap (MBB), in which the number of blocks is somewhere +between 1 and the ratio of sample size to block length. Our main theorem +determines the optimal choice of the number of blocks and block length to +achieve the best possible convergence rate for the block bootstrap distribution +estimator for sample quantiles. As part of our analysis, we also prove an +important lemma which gives the convergence rate of the block bootstrap +distribution estimator, with implications even for the smooth function model. +We propose an intuitive procedure for empirical selection of the optimal number +and length of blocks. Relevant examples are presented which illustrate the +benefits of optimally choosing the number of blocks. +" +Modeling Popularity in Asynchronous Social Media Streams with Recurrent Neural Networks," Understanding and predicting the popularity of online items is an important +open problem in social media analysis. Considerable progress has been made +recently in data-driven predictions, and in linking popularity to external +promotions. 
However, the existing methods typically focus on a single source of +external influence, whereas for many types of online content such as YouTube +videos or news articles, attention is driven by multiple heterogeneous sources +simultaneously - e.g. microblogs or traditional media coverage. Here, we +propose RNN-MAS, a recurrent neural network for modeling asynchronous streams. +It is a sequence generator that connects multiple streams of different +granularity via joint inference. We show RNN-MAS not only to outperform the +current state-of-the-art Youtube popularity prediction system by 17%, but also +to capture complex dynamics, such as seasonal trends of unseen influence. We +define two new metrics: promotion score quantifies the gain in popularity from +one unit of promotion for a Youtube video; the loudness level captures the +effects of a particular user tweeting about the video. We use the loudness +level to compare the effects of a video being promoted by a single +highly-followed user (in the top 1% most followed users) against being promoted +by a group of mid-followed users. We find that results depend on the type of +content being promoted: superusers are more successful in promoting Howto and +Gaming videos, whereas the cohort of regular users are more influential for +Activism videos. This work provides more accurate and explainable popularity +predictions, as well as computational tools for content producers and marketers +to allocate resources for promotion campaigns. +" +A moduli stack of tropical curves," We contribute to the foundations of tropical geometry with a view towards +formulating tropical moduli problems, and with the moduli space of curves as +our main example. We propose a moduli functor for the moduli space of curves +and show that it is representable by a geometric stack over the category of +rational polyhedral cones. 
In this framework the natural forgetful morphisms +between moduli spaces of curves with marked points function as universal +curves. Our approach to tropical geometry permits tropical moduli +problems---moduli of curves or otherwise---to be extended to logarithmic +schemes. We use this to construct a smooth tropicalization morphism from the +moduli space of algebraic curves to the moduli space of tropical curves, and we +show that this morphism commutes with all of the tautological morphisms. +" +Kernel entropy estimation for linear processes," Let $\{X_n: n\in \mathbb{N}\}$ be a linear process with bounded probability +density function $f(x)$. We study the estimation of the quadratic functional +$\int_{\mathbb{R}} f^2(x)\, dx$. With a Fourier transform on the kernel +function and the projection method, it is shown that, under certain mild +conditions, the estimator \[ \frac{2}{n(n-1)h_n} \sum_{1\le i0, \alpha\in (2, 3)$, +$\sup_{\mathbb{R}^N}|\nabla u_0|<\infty$, and the norm of $u_0$ in the space +$L^{\frac{(\alpha-1)N}{3-\alpha}}(\mathbb{R}^N) $ is sufficiently small. This +is done by exploring various properties of the biharmonic heat kernel. In the +initial boundary value problem, we assume that ${\bf g}$ is continuous and +satisfies the growth condition $|{\bf g}(\xi) |\leq c|\xi|^\alpha+c$ for some +$c, \alpha\in (0,\infty)$. Our investigations reveal that if $\alpha\leq 1$ we +have global existence of a weak solution, while if +$1<\alpha<\frac{N^2+2N+4}{N^2}$ only a local existence theorem can be +established. Our method here is based upon a new interpolation inequality, +which may be of interest in its own right. +" +Stochastic Gradient Descent for Relational Logistic Regression via Partial Network Crawls," Research in statistical relational learning has produced a number of methods +for learning relational models from large-scale network data. 
While these +methods have been successfully applied in various domains, they have been +developed under the unrealistic assumption of full data access. In practice, +however, the data are often collected by crawling the network, due to +proprietary access, limited resources, and privacy concerns. Recently, we +showed that the parameter estimates for relational Bayes classifiers computed +from network samples collected by existing network crawlers can be quite +inaccurate, and developed a crawl-aware estimation method for such models +(Yang, Ribeiro, and Neville, 2017). In this work, we extend the methodology to +learning relational logistic regression models via stochastic gradient descent +from partial network crawls, and show that the proposed method yields accurate +parameter estimates and confidence intervals. +" +Social Discrete Choice Models," Human decision making underlies data generating process in multiple +application areas, and models explaining and predicting choices made by +individuals are in high demand. Discrete choice models are widely studied in +economics and computational social sciences. As digital social networking +facilitates information flow and spread of influence between individuals, new +advances in modeling are needed to incorporate social information into these +models in addition to characteristic features affecting individual choices. In +this paper, we propose two novel models with scalable training algorithms: +local logistics graph regularization (LLGR) and latent class graph +regularization (LCGR) models. We add social regularization to represent +similarity between friends, and we introduce latent classes to account for +possible preference discrepancies between different social groups. Training of +the LLGR model is performed using alternating direction method of multipliers +(ADMM), and training of the LCGR model is performed using a specialized Monte +Carlo expectation maximization (MCEM) algorithm. 
Scalability to large graphs is +achieved by parallelizing computation in both the expectation and the +maximization steps. The LCGR model is the first latent class classification +model that incorporates social relationships among individuals represented by a +given graph. To evaluate our two models, we consider three classes of data to +illustrate a typical large-scale use case in internet and social media +applications. We experiment on synthetic datasets to empirically explain when +the proposed model is better than vanilla classification models that do not +exploit graph structure. We also experiment on real-world data, including both +small scale and large scale real-world datasets, to demonstrate on which types +of datasets our model can be expected to outperform state-of-the-art models. +" +Operator spreading and the emergence of dissipative hydrodynamics under unitary evolution with conservation laws," We study the scrambling of local quantum information in chaotic many-body +systems in the presence of a locally conserved quantity like charge or energy +that moves diffusively. The interplay between conservation laws and scrambling +sheds light on the mechanism by which unitary quantum dynamics, which is +reversible, gives rise to diffusive hydrodynamics, which is a dissipative +process. We obtain our results in a random quantum circuit model that is +constrained to have a conservation law. We find that a generic spreading +operator consists of two parts: (i) a conserved part which comprises the weight +of the spreading operator on the local conserved densities, whose dynamics is +described by diffusive charge spreading. This conserved part also acts as a +source that steadily emits a flux of (ii) non-conserved operators. This +emission leads to dissipation in the operator hydrodynamics, with the +dissipative process being the conversion of operator weight from local +conserved operators to nonconserved, at a rate set by the local diffusion +current. 
The emitted nonconserved parts then spread ballistically at a
+butterfly speed, thus becoming highly nonlocal and hence essentially
+non-observable, thereby acting as the ""reservoir"" that facilitates the
+dissipation. In addition, we find that the nonconserved component develops a
+power law tail behind its leading ballistic front due to the slow dynamics of
+the conserved components. This implies that the out-of-time-order commutator
+(OTOC) between two initially separated operators grows sharply upon the arrival
+of the ballistic front but, in contrast to systems with no conservation laws,
+it develops a diffusive tail and approaches its asymptotic late-time value only
+as a power of time instead of exponentially. We also derive these results
+within an effective hydrodynamic description which contains multiple coupled
+diffusion equations.
+"
+Performance of New High-Precision Muon Tracking Detectors for the ATLAS Experiment," The goals of the ongoing and planned ATLAS muon detector upgrades are to
+increase the acceptance for precision muon momentum measurement and triggering
+and to improve the rate capability of the muon chambers in the high-background
+regions corresponding to the increasing LHC luminosity. Small-diameter Muon
+Drift Tube (sMDT) chambers have been developed for these purposes. With drift
+tubes of half the diameter of the current 30 mm ATLAS Muon Drift Tube (MDT)
+chambers and otherwise unchanged operating parameters,
+the sMDT chambers share all the advantages of the MDTs, but have about an
+order of magnitude higher rate capability and can be installed in detector
+regions where MDT chambers do not fit in. The construction of twelve chambers
+for the feet regions of the ATLAS detector has been completed for the
+installation in the winter shutdown 2016/17 of the Large Hadron Collider.
The +purpose of this upgrade of the ATLAS muon spectrometer is to increase the +acceptance for three-point muon track measurement which substantially improves +the muon momentum resolution in the regions concerned. +" +Exact electrodynamics versus standard optics for a slab of cold dense gas," We study light propagation through a slab of cold gas using both the standard +electrodynamics of polarizable media, and massive atom-by-atom simulations of +the electrodynamics. The main finding is that the predictions from the two +methods may differ qualitatively when the density of the atomic sample $\rho$ +and the wavenumber of resonant light $k$ satisfy $\rho k^{-3}\gtrsim 1$. The +reason is that the standard electrodynamics is a mean-field theory, whereas for +sufficiently strong light-mediated dipole-dipole interactions the atomic sample +becomes correlated. The deviations from mean-field theory appear to scale with +the parameter $\rho k^{-3}$, and we demonstrate noticeable effects already at +$\rho k^{-3} \simeq 10^{-2}$. In dilute gases and in gases with an added +inhomogeneous broadening the simulations show shifts of the resonance lines in +qualitative agreement with the predicted Lorentz-Lorenz shift and ""cooperative +Lamb shift"", but the quantitative agreement is unsatisfactory. Our +interpretation is that the microscopic basis for the local-field corrections in +electrodynamics is not fully understood. +" +Sparse Grid Discretizations based on a Discontinuous Galerkin Method," We examine and extend Sparse Grids as a discretization method for partial +differential equations (PDEs). Solving a PDE in $D$ dimensions has a cost that +grows as $O(N^D)$ with commonly used methods. Even for moderate $D$ (e.g. +$D=3$), this quickly becomes prohibitively expensive for increasing problem +size $N$. This effect is known as the Curse of Dimensionality. Sparse Grids +offer an alternative discretization method with a much smaller cost of $O(N +\log^{D-1}N)$. 
In this paper, we introduce the reader to Sparse Grids, and
+extend the method via a Discontinuous Galerkin approach. We then solve the
+scalar wave equation in up to $6+1$ dimensions, comparing cost and accuracy
+between full and sparse grids. Sparse Grids perform far better, even in three
+dimensions. Our code is freely available as open source, and we encourage the
+reader to reproduce the results we show.
+"
+Study of the one-way speed of light anisotropy with particle beams," Concepts of high precision studies of the one-way speed of light anisotropy
+are discussed. The high energy particle beam allows measurement of a one-way
+speed of light anisotropy (SOLA) via analysis of the beam momentum variation
+with sidereal phase without the use of synchronized clocks. High precision beam
+position monitors could provide accurate monitoring of the beam orbit and
+determination of the particle beam momentum with relative accuracy on the level
+of $10^{-10}$, which corresponds to a limit on SOLA of $10^{-18}$ with existing
+storage rings. A few additional versions of the experiment are also presented.
+"
+Three-dimensional radiation dosimetry based on optically-stimulated luminescence," A new approach to three-dimensional (3D) dosimetry based on
+optically-stimulated luminescence (OSL) is presented. By embedding OSL-active
+particles into a transparent silicone matrix (PDMS), the well-established
+dosimetric properties of an OSL material are exploited in a 3D-OSL dosimeter.
+By investigating prototype dosimeters in standard cuvettes in combination with
+small test samples for OSL readers, it is shown that a sufficient transparency
+of the 3D-OSL material can be combined with an OSL response giving an estimated
+>10,000 detected photons in 1 second per 1 mm$^3$ voxel of the dosimeter at a
+dose of 1 Gy. The dose distribution in the 3D-OSL dosimeters can be directly read
+out optically without the need for subsequent reconstruction by computational
+inversion algorithms.
The dosimeters carry the advantages known from
+personal-dosimetry use of OSL: the dose distribution following irradiation can
+be stored with minimal fading for extended periods of time, and dosimeters are
+reusable as they can be reset, e.g. by an intense (bleaching) light field.
+"
+Stochastic Neighbor Embedding separates well-separated clusters," Stochastic Neighbor Embedding and its variants are widely used dimensionality
+reduction techniques -- despite their popularity, no theoretical results are
+known. We prove that the optimal SNE embedding of well-separated clusters from
+high dimensions to any Euclidean space R^d manages to successfully separate the
+clusters in a quantitative way. The result also applies to a larger family of
+methods including a variant of t-SNE.
+"
+Redshifts for galaxies in radio continuum surveys from Bayesian model fitting of HI 21-cm lines," We introduce a new Bayesian HI spectral line fitting technique capable of
+obtaining spectroscopic redshifts for millions of galaxies in radio surveys
+with the Square Kilometre Array (SKA). This technique is especially
+well-suited to the low signal-to-noise regime that the redshifted 21-cm HI
+emission line is expected to be observed in, especially with SKA Phase 1,
+allowing for robust source detection. After selecting a set of continuum
+objects relevant to large, cosmological-scale surveys with the first phase of
+the SKA dish array (SKA1-MID), we simulate data corresponding to their HI line
+emission as observed by the same telescope. We then use the MultiNest nested
+sampling code to find the best-fitting parametrised line profile, providing us
+with a full joint posterior probability distribution for the galaxy properties,
+including redshift.
This provides high quality redshifts, with redshift errors +$\Delta z / z <10^{-5}$, from radio data alone for some 1.8 million galaxies in +a representative 5000 square degree survey with the SKA1-MID instrument with +up-to-date sensitivity profiles. Interestingly, we find that the SNR definition +commonly used in forecast papers does not correlate well with the actual +detectability of an HI line using our method. We further detail how our method +could be improved with per-object priors and how it may be also used to give +robust constraints on other observables such as the HI mass function. We also +make our line fitting code publicly available for application to other data +sets. +" +New upper bounds for Ramanujan primes," For $n\ge 1$, the $n^{\rm th}$ Ramanujan prime is defined as the smallest +positive integer $R_n$ such that for all $x\ge R_n$, the interval +$(\frac{x}{2}, x]$ has at least $n$ primes. We show that for every +$\epsilon>0$, there is a positive integer $N$ such that if +$\alpha=2n\left(1+\dfrac{\log 2+\epsilon}{\log n+j(n)}\right)$, then $R_n< +p_{[\alpha]}$ for all $n>N$, where $p_i$ is the $i^{\rm th}$ prime and $j(n)>0$ +is any function that satisfies $j(n)\to \infty$ and $nj'(n)\to 0$. +" +Thermal phases of correlated lattice boson: a classical fluctuation theory," We present a method that generalises the standard mean field theory of +correlated lattice bosons to include amplitude and phase fluctuations of the +$U(1)$ field that induces onsite particle number mixing. This arises formally +from an auxiliary field decomposition of the kinetic term in a Bose Hubbard +model. We solve the resulting problem, initially, by using a classical +approximation for the particle number mixing field and a Monte Carlo treatment +of the resulting bosonic model. In two dimensions we obtain $T_c$ scales that +dramatically improve on mean field theory and are within about 20% of full +quantum Monte Carlo estimates. 
The `classical approximation' ground state, +however, is still mean field, with an overestimate of the critical interaction, +$U_c$, for the superfluid to Mott transition. By further including low order +quantum fluctuations in the free energy functional we improve significantly on +the $U_c$, and the overall thermal phase diagram. The classical approximation +based method has a computational cost linear in system size. The methods +readily generalise to multispecies bosons and the presence of traps. +" +Analytic Center Cutting Plane Methods for Variational Inequalities over Convex Bodies," An analytic center cutting plane method is an iterative algorithm based on +the computation of analytic centers. In this paper, we propose some analytic +center cutting plane methods for solving quasimonotone or pseudomonotone +variational inequalities whose domains are bounded or unbounded convex bodies. +" +Does Viscosity turn inflation into the CMB and $Λ$," Consideration of the entropy production in the creation of the CMB leads to a +simple model of the evolution of the universe during this period which suggests +a connection between the small observed acceleration term and the early +inflation of a closed universe. From this we find an unexpected relationship +between the Omega's of cosmology and calculate the total volume of the +universe. +" +The Secrecy Capacity of Gaussian MIMO Channels with Finite Memory - Full Version," In this work we study the secrecy capacity of Gaussian multiple-input +multiple-output (MIMO) wiretap channels (WTCs) with a finite memory, subject to +a per-symbol average power constraint on the MIMO channel input. MIMO channels +with finite memory are very common in wireless communications as well as in +wireline communications (e.g., in communications over power lines). 
To derive
+the secrecy capacity of the Gaussian MIMO WTC with finite memory we first
+construct an asymptotically-equivalent block-memoryless MIMO WTC, which is then
+transformed into a set of parallel, independent, memoryless MIMO WTCs in the
+frequency domain. The secrecy capacity of the Gaussian MIMO WTC with finite
+memory is obtained as the secrecy capacity of the set of parallel independent
+memoryless MIMO WTCs, and is expressed as a maximization over the input
+covariance matrices in the frequency domain. Lastly, we detail two applications
+of our result: First, we show that the secrecy capacity of the Gaussian scalar
+WTC with finite memory can be achieved by waterfilling, and obtain a
+closed-form expression for this secrecy capacity. Then, we use our result to
+characterize the secrecy capacity of narrowband powerline channels, thereby
+resolving one of the major open issues for this channel model.
+"
+"On the Algebro-Geometric Analysis of Meromorphic (1,0)-forms"," In this paper, we analyze the theory of meromorphic $(1,0)$-forms
+$\omega\in\mathcal{M}\Omega^{(1,0)}(\mathbb{CP}^1).$ Hence, we show that on a
+compact Riemann surface of genus $g=0,$ isomorphic to $\mathbb{CP}^1,$ every
+non-constant meromorphic function $f:X\to\mathbb{CP}^1$ has as many zeros as
+poles, where each is counted according to multiplicities. Such an analysis
+gives rise to the following result. Invoking the Riemann-Roch theorem for a
+compact Riemann surface $X$ with canonical divisor $K,$ it follows that $deg(f)=0$ for
+any principal divisor $(f):=D$ on $X.$ More precisely,
+$\ell(D)-\ell(K-D)=deg(D)+1=1$ or $\ell(D)-\ell(K-D)-1=0.$ Furthermore, for a
+diffeomorphism $\eta:X\to\mathbb{CP}^1$ of a certain kind, a multistep program
+is implemented to show $X$ is a compact algebraic variety of dimension one,
+i.e. a non-singular projective variety.
Hence, we adopt a group-theoretic +approach and provide a useful heuristic, that is, a set of technical conditions +to facilitate the algebro-geometric analysis of simply connected Riemann +surfaces $X.$ +" +Tool Supported Analysis of IoT," The design of IoT systems could benefit from the combination of two different +analyses. We perform a first analysis to approximate how data flow across the +system components, while the second analysis checks their communication +soundness. We show how the combination of these two analyses yields further +benefits hardly achievable by separately using each of them. We exploit two +independently developed tools for the analyses. +Firstly, we specify IoT systems in IoT-LySa, a simple specification language +featuring asynchronous multicast communication of tuples. The values carried by +the tuples are drawn from a term-algebra obtained by a parametric signature. +The analysis of communication soundness is supported by ChorGram, a tool +developed to verify the compatibility of communicating finite-state machines. +In order to combine the analyses we implement an encoding of IoT-LySa processes +into communicating machines. This encoding is not completely straightforward +because IoT-LySa has multicast communications with data, while communication +machines are based on point-to-point communications where only finitely many +symbols can be exchanged. To highlight the benefits of our approach we appeal +to a simple yet illustrative example. +" +Deep Learning for Ontology Reasoning," In this work, we present a novel approach to ontology reasoning that is based +on deep learning rather than logic-based formal reasoning. To this end, we +introduce a new model for statistical relational learning that is built upon +deep recursive neural networks, and give experimental evidence that it can +easily compete with, or even outperform, existing logic-based reasoners on the +task of ontology reasoning. 
More precisely, we compared our implemented system +with one of the best logic-based ontology reasoners at present, RDFox, on a +number of large standard benchmark datasets, and found that our system attained +high reasoning quality, while being up to two orders of magnitude faster. +" +Wide applicability of high $T_c$ pairing originating from coexisting wide and incipient narrow bands in quasi one dimension," We study superconductivity in the Hubbard model on various +quasi-one-dimensional lattices with coexisting wide and narrow bands +originating from multiple sites within a unit cell, where each site corresponds +to a single orbital. The systems studied are the 2-leg and 3-leg ladders, the +diamond chain, and the criss-cross ladder. These one-dimensional lattices are +weakly coupled to form two-dimensional (quasi-one-dimensional) ones, and the +fluctuation exchange approximation is adopted to study +spin-fluctuation-mediated superconductivity. When one of the bands is perfectly +flat, and the Fermi level, intersecting the wide band, is placed in the +vicinity of the flat band, superconductivity arising from the interband +scattering processes is found to be strongly enhanced owing to the combination +of the light electron mass of the wide band and the strong pairing interaction +due to the large density of states of the flat band. Even when the narrow band +has finite band width, the pairing mechanism still works since the edge of the +narrow band, due to its large density of states, plays the role of the flat +band. The results indicate the generality of the high $T_c$ pairing mechanism +due to coexisting wide and ""incipient"" narrow bands in quasi-one-dimensional +systems. +" +Analysis and design of Raptor codes using a multi-edge framework," The focus of this paper is on the analysis and design of Raptor codes using a +multi-edge framework. In this regard, we first represent the Raptor code as a +multi-edge type low-density parity-check (METLDPC) code. 
This MET
+representation gives a general framework to analyze and design Raptor codes
+over a binary input additive white Gaussian noise channel using MET density
+evolution (MET-DE). We consider a joint decoding scheme based on the belief
+propagation (BP) decoding for Raptor codes in the multi-edge framework, and
+analyze the convergence behavior of the BP decoder using MET-DE. In joint
+decoding of Raptor codes, the component codes corresponding to the inner code and
+precode are decoded in parallel and provide information to each other. We also
+derive an exact expression for the stability of Raptor codes with joint
+decoding. We then propose an efficient Raptor code design method using the
+multi-edge framework, where we simultaneously optimize the inner code and the
+precode. Finally, we consider performance-complexity trade-offs of Raptor codes
+using the multi-edge framework. Through density evolution analysis we show that
+the designed Raptor codes using the multi-edge framework outperform the
+existing Raptor codes in the literature in terms of the realized rate.
+"
+Geometric structures and Lie algebroids," In this thesis we study geometric structures from Poisson and generalized
+complex geometry with mild singular behavior using Lie algebroids. The process
+of lifting such structures to their Lie algebroid version makes them less
+singular, as their singular behavior is incorporated in the anchor of the Lie
+algebroid. We develop a framework for this using the concept of a divisor,
+which encodes the singularities, and show when structures exhibiting such
+singularities can be lifted to a Lie algebroid built out of the divisor. Once
+one has successfully lifted the structure, it becomes possible to study it
+using more powerful techniques. In the case of Poisson structures one can turn
+to employing symplectic techniques. These lead for example to normal form
+results for the underlying Poisson structures around their singular loci. 
In
+this thesis we further adapt the methods of Gompf and Thurston for constructing
+symplectic structures out of fibration-like maps to their Lie algebroid
+counterparts. More precisely, we introduce the notion of a Lie algebroid
+Lefschetz fibration and show when these give rise to A-symplectic structures
+for a given Lie algebroid A. We then use this general result to show how
+log-symplectic structures arise out of achiral Lefschetz fibrations. Moreover,
+we introduce the concept of a boundary Lefschetz fibration and show when they
+allow their total space to be equipped with a stable generalized complex
+structure. Other results in this thesis include homotopical obstructions to the
+existence of A-symplectic structures using characteristic classes, and
+splitting results for A-Lie algebroids (i.e., Lie algebroids whose anchor
+factors through that of a fixed Lie algebroid A), around specific transversal
+submanifolds.
+"
+Financial Trading as a Game: A Deep Reinforcement Learning Approach," An automatic program that generates constant profit from the financial market
+is lucrative for every market practitioner. Recent advances in deep
+reinforcement learning provide a framework toward end-to-end training of such a
+trading agent. In this paper, we propose a Markov Decision Process (MDP) model
+suitable for the financial trading task and solve it with the state-of-the-art
+deep recurrent Q-network (DRQN) algorithm. We propose several modifications to
+the existing learning algorithm to make it more suitable in the financial
+trading setting, namely 1. We employ a substantially smaller replay memory (only
+a few hundreds in size) compared to ones used in modern deep reinforcement
+learning algorithms (often millions in size). 2. We develop an action
+augmentation technique to mitigate the need for random exploration by providing
+extra feedback signals for all actions to the agent. 
This enables us to use a
+greedy policy over the course of learning and shows strong empirical
+performance compared to the more commonly used epsilon-greedy exploration. However,
+this technique is specific to financial trading under a few market assumptions.
+3. We sample a longer sequence for recurrent neural network training. A side
+product of this mechanism is that we can now train the agent every T steps.
+This greatly reduces training time since the overall computation is reduced by a
+factor of T. We combine all of the above into a complete online learning
+algorithm and validate our approach on the spot foreign exchange market.
+"
+The Confinement of Vortices in Nano-Superconducting Devices," We have investigated the confinement of 3-D vortices in specific cases of
+Type-II ($\kappa = 2$) nano-superconducting devices. The emergent pattern of
+vortices greatly depends on the orientation of an applied magnetic field
+(transverse or longitudinal), and the size of the devices (a few coherence
+lengths $\xi$). Herein, cylindrical geometries are examined. The surface
+barriers become very significant in these nano-systems, and hence the
+characteristics of the vortices become highly sensitive to the shape of the
+system and direction of an applied field. It is observed that nano-cylindrical
+superconductors, depending on their sizes, can display either first or second
+order phase transitions, under the influence of a longitudinal field. In the
+confined geometries, nucleation of a giant vortex state composed of n quanta
+emerges for the longitudinal magnetic field. 
+"
+Design and Implementation of Modified Fuzzy based CPU Scheduling Algorithm," CPU Scheduling is the base of multiprogramming. Scheduling is a process which
+decides the order of tasks from a set of multiple tasks that are ready to execute.
+There are a number of CPU scheduling algorithms available, but it is a very
+difficult task to decide which one is better. 
This paper discusses the design
+and implementation of a modified fuzzy based CPU scheduling algorithm. This paper
+presents a new set of fuzzy rules. It demonstrates that scheduling done with the new
+priority improves average waiting time and average turnaround time.
+"
+Message-Passing Methods for Complex Contagions," Message-passing methods provide a powerful approach for calculating the
+expected size of cascades either on random networks (e.g., drawn from a
+configuration-model ensemble or its generalizations) asymptotically as the
+number $N$ of nodes becomes infinite or on specific finite-size networks. We
+review the message-passing approach and show how to derive it for
+configuration-model networks using the methods of (Dhar et al., 1997) and
+(Gleeson, 2008). Using this approach, we explain for such networks how to
+determine an analytical expression for a ""cascade condition"", which determines
+whether a global cascade will occur. We extend this approach to the
+message-passing methods for specific finite-size networks (Shrestha and Moore,
+2014; Lokhov et al., 2015), and we derive a generalized cascade condition.
+Throughout this chapter, we illustrate these ideas using the Watts threshold
+model.
+"
+Phase Transitions of the Typical Algorithmic Complexity of the Random Satisfiability Problem Studied with Linear Programming," Here we study the NP-complete $K$-SAT problem. Although the worst-case
+complexity of NP-complete problems is conjectured to be exponential, there
+exist parametrized random ensembles of problems where solutions can typically
+be found in polynomial time for suitable ranges of the parameter. In fact,
+random $K$-SAT, with $\alpha=M/N$ as the control parameter, can be solved quickly
+for small enough values of $\alpha$. It shows a phase transition between a
+satisfiable phase and an unsatisfiable phase. 
For branch and bound algorithms,
+which operate in the space of feasible Boolean configurations, the empirically
+hardest problems are located only close to this phase transition. Here we study
+$K$-SAT ($K=3,4$) and the related optimization problem MAX-SAT by a linear
+programming approach, which is widely used for practical problems and allows
+for polynomial run time. In contrast to branch and bound it operates outside
+the space of feasible configurations. On the other hand, finding a solution
+within polynomial time is not guaranteed. We investigated several variants, such as
+including artificial objective functions, so-called cutting-plane approaches,
+and a mapping to the NP-complete vertex-cover problem. We observed several
+easy-hard transitions, from where the problems are typically solvable (in
+polynomial time) using the given algorithms, respectively, to where they are
+not solvable in polynomial time. For the related vertex-cover problem on random
+graphs these easy-hard transitions can be identified with structural properties
+of the graphs, like percolation transitions. For the present random $K$-SAT
+problem we have investigated numerous structural properties also exhibiting
+clear transitions, but they appear not to be correlated with the easy-hard
+transitions observed here. This renders the behaviour of random $K$-SAT more
+complex than, e.g., the vertex-cover problem.
+"
+Unconstrained and Curvature-Constrained Shortest-Path Distances and their Approximation," We study shortest paths and their distances on a subset of a Euclidean space,
+and their approximation by their equivalents in a neighborhood graph defined on
+a sample from that subset. In particular, we recover and extend the results of
+Bernstein et al. (2000). We do the same with curvature-constrained shortest
+paths and their distances, establishing what we believe are the first
+approximation bounds for them. 
+" +Hecke module structure on first and top pro-$p$-Iwahori cohomology," Let $p\geq 5$ be a prime number, $G$ a split connected reductive group +defined over a $p$-adic field, and $I_1$ a choice of pro-$p$-Iwahori subgroup. +Let $C$ be an algebraically closed field of characteristic $p$ and +$\mathcal{H}$ the pro-$p$-Iwahori--Hecke algebra over $C$ associated to $I_1$. +In this note, we compute the action of $\mathcal{H}$ on $\textrm{H}^1(I_1,C)$ +and $\textrm{H}^{\textrm{top}}(I_1,C)$ when the root system of $G$ is +irreducible. We also give some partial results in the general case. +" +Sensitivity Properties of Intermittent Control," The sensitivity properties of intermittent control are analysed and the +conditions for a limit cycle derived theoretically and verified by simulation. +" +Algebra of distributions of quantum-field densities and space-time properties," In this paper we consider properties of the space-time manifold M caused by +the proposition that, according to the scheme theory, the manifold M is locally +isomorphic to the spectrum of the algebra A, M = Spec(A), where A is the +commutative algebra of distributions of quantum-field densities. In order to +determine the algebra A, it is necessary to define multiplication on densities +and to eliminate those densities, which cannot be multiplied. This leads to +essential restrictions imposed on densities and on space-time properties. It is +found that the only possible case, when the commutative algebra A exists, is +the case, when the quantum fields are in the spacetime manifold M with the +structure group SO(3,1) (Lorentz group). The algebra A consists of +distributions of densities with singularities in the closed future light cone +subset. On account of the local isomorphism M = Spec(A), the quantum fields +exist only in the space-time manifold with the one-dimensional arrow of time. 
+
In the fermion sector the restrictions caused by the possibility of defining the
+multiplication on the densities of spinor fields can explain the chirality
+violation. It is found that for bosons in the Higgs sector the charge
+conjugation symmetry violation on the densities of states can be observed. This
+symmetry violation can explain the matter-antimatter imbalance.
+"
+Minimal penalties and the slope heuristics: a survey," Birgé and Massart proposed in 2001 the slope heuristics as a way to
+choose optimally from data an unknown multiplicative constant in front of a
+penalty. It is built upon the notion of minimal penalty, and it has been
+generalized since to some 'minimal-penalty algorithms'. This paper reviews the
+theoretical results obtained for such algorithms, with a self-contained proof
+in the simplest framework, precise proof ideas for further generalizations, and
+a few new results. Explicit connections are made with residual-variance
+estimators (with an original contribution on this topic, showing that for this
+task the slope heuristics performs almost as well as a residual-based estimator
+with the best model choice) and some classical algorithms such as the L-curve or
+elbow heuristics, Mallows' $C_p$, and Akaike's FPE. Practical issues are also
+addressed, including two new practical definitions of minimal-penalty
+algorithms that are compared on synthetic data to previously-proposed
+definitions. Finally, several conjectures and open problems are suggested as
+future research directions.
+"
+Fast and Robust Detection of Fallen People from a Mobile Robot," This paper deals with the problem of detecting fallen people lying on the
+floor by means of a mobile robot equipped with a 3D depth sensor. In the
+proposed algorithm, inspired by semantic segmentation techniques, the 3D scene
+is over-segmented into small patches. 
Fallen people are then detected by means
+of two SVM classifiers: the first one labels each patch, while the second one
+captures the spatial relations between them. This novel approach proved to be
+robust and fast. Indeed, thanks to the use of small patches, fallen people in
+real cluttered scenes with objects side by side are correctly detected.
+Moreover, the algorithm can be executed on a mobile robot fitted with a
+standard laptop, making it possible to exploit the 2D environmental map built by
+the robot and the multiple points of view obtained during the robot navigation.
+Additionally, this algorithm is robust to illumination changes since it does
+not rely on RGB data but on depth data. All the methods have been thoroughly
+validated on the IASLAB-RGBD Fallen Person Dataset, which is published online
+as a further contribution. It consists of several static and dynamic sequences
+with 15 different people and 2 different environments.
+"
+On residual and guided proposals for diffusion bridge simulation," Recently Whitaker et al. (2017) considered Bayesian estimation of diffusion
+driven mixed effects models using data-augmentation. The missing data,
+diffusion bridges connecting discrete time observations, are drawn using a
+""residual bridge construct"". In this paper we compare this construct (which we
+call the residual proposal) with the guided proposals introduced in Schauer et al.
+(2017). It is shown that both approaches are related, but use a different
+approximation to the intractable stochastic differential equation of the true
+diffusion bridge. It reveals that the computational complexity of both
+approaches is similar. Some examples are included to compare the ability of
+both proposals to capture local nonlinearities in the dynamics of the true
+bridge.
+"
+CHEERS: The chemical evolution RGS sample," The chemical yields of supernovae and the metal enrichment of the hot
+intra-cluster medium (ICM) are not well understood. 
This paper introduces the +CHEmical Enrichment RGS Sample (CHEERS), which is a sample of 44 bright local +giant ellipticals, groups and clusters of galaxies observed with XMM-Newton. +This paper focuses on the abundance measurements of O and Fe using the +reflection grating spectrometer (RGS). The deep exposures and the size of the +sample allow us to quantify the intrinsic scatter and the systematic +uncertainties in the abundances using spectral modeling techniques. We report +the oxygen and iron abundances as measured with RGS in the core regions of all +objects in the sample. We do not find a significant trend of O/Fe as a function +of cluster temperature, but we do find an intrinsic scatter in the O and Fe +abundances from cluster to cluster. The level of systematic uncertainties in +the O/Fe ratio is estimated to be around 20-30%, while the systematic +uncertainties in the absolute O and Fe abundances can be as high as 50% in +extreme cases. We were able to identify and correct a systematic bias in the +oxygen abundance determination, which was due to an inaccuracy in the spectral +model. The lack of dependence of O/Fe on temperature suggests that the +enrichment of the ICM does not depend on cluster mass and that most of the +enrichment likely took place before the ICM was formed. We find that the +observed scatter in the O/Fe ratio is due to a combination of intrinsic scatter +in the source and systematic uncertainties in the spectral fitting, which we +are unable to disentangle. The astrophysical source of intrinsic scatter could +be due to differences in AGN activity and ongoing star formation in the BCG. +The systematic scatter is due to uncertainties in the spatial line broadening, +absorption column, multi-temperature structure and the thermal plasma models. +(Abbreviated). +" +Learning to Search via Retrospective Imitation," We study the problem of learning a good search policy from demonstrations for +combinatorial search spaces. 
We propose retrospective imitation learning, +which, after initial training by an expert, improves itself by learning from +its own retrospective solutions. That is, when the policy eventually reaches a +feasible solution in a search tree after making mistakes and backtracks, it +retrospectively constructs an improved search trace to the solution by removing +backtracks, which is then used to further train the policy. A key feature of +our approach is that it can iteratively scale up, or transfer, to larger +problem sizes than the initial expert demonstrations, thus dramatically +expanding its applicability beyond that of conventional imitation learning. We +showcase the effectiveness of our approach on two tasks: synthetic maze +solving, and integer program based risk-aware path planning. +" +Electronic fitness function for screening semiconductors as thermoelectric materials," We introduce a simple but efficient electronic fitness function (EFF) that +describes the electronic aspect of the thermoelectric performance. This EFF +finds materials that overcome the inverse relationship between $\sigma$ and $S$ +based on the complexity of the electronic structures regardless of specific +origin (e.g., isosurface corrugation, valley degeneracy, heavy-light bands +mixture, valley anisotropy or reduced dimensionality). This function is well +suited for application in high throughput screening. We applied this function +to 75 different thermoelectric and potential thermoelectric materials including +full- and half-Heuslers, binary semiconductors and Zintl phases. We find an +efficient screening using this transport function. The EFF identifies known +high performance $p$- and $n$-type Zintl phases and half-Heuslers. In addition, +we find some previously unstudied phases with superior EFF. 
+
"
Density Functional Theory of doped superfluid liquid helium and nanodroplets," During the last decade, density functional theory (DFT), in its static and
+dynamic time-dependent forms, has emerged as a powerful tool to describe the
+structure and dynamics of doped liquid helium and droplets. In this review, we
+summarize the activity carried out in this field within the DFT framework since
+the publication of the previous review article on this subject [M. Barranco et
+al., J. Low Temp. Phys. 142, 1 (2006)]. Furthermore, a comprehensive
+presentation of the actual implementations of helium DFT is given, which have
+not been discussed in the individual articles or are scattered in the existing
+literature. This is an Accepted Manuscript of an article published on August 2,
+2017 by Taylor & Francis Group in Int. Rev. Phys. Chem. 36, 621 (2017),
+available online: this http URL
+"
+Higher degree S-lemma and the stability of quadratic modules," In this work we will investigate a certain generalization of the so-called
+S-lemma in higher degrees. The importance of this generalization is that it is
+closely related to Hilbert's 1888 theorem about ternary quartics. In fact, if
+such a generalization exists, then one can state a Hilbert-like theorem, where
+positivity is only demanded on some semi-algebraic set. We will show that such
+a generalization is not possible, at least not without additional conditions.
+To prove this, we will use and generalize certain tools developed by Netzer
+([Ne]). These new tools will allow us to conclude that this generalization of
+the S-lemma is not possible for geometric reasons. Furthermore, we are
+able to establish a link between geometric reasons and algebraic reasons. This
+will be accomplished within the framework of quadratic modules.
+"
+Phantom Domain Walls," We consider a model with two real scalar fields which admits phantom domain
+wall solutions. 
We investigate the structure and evolution of these phantom
+domain walls in an expanding homogeneous and isotropic universe. In particular,
+we show that the increase of the tension of the domain walls with cosmic time,
+associated with the evolution of the phantom scalar field, is responsible for an
+additional damping term in their equations of motion. We describe the
+macroscopic dynamics of phantom domain walls, showing that extended phantom
+defects whose tension varies on a cosmological timescale cannot be the dark
+energy.
+"
+Causally Regularized Learning with Agnostic Data Selection Bias," Most previous machine learning algorithms are based on the i.i.d.
+hypothesis. However, this ideal assumption is often violated in real
+applications, where selection bias may arise between the training and testing
+process. Moreover, in many scenarios, the testing data is not even available
+during the training process, which makes the traditional methods like transfer
+learning infeasible due to their need for a prior on the test distribution. Therefore,
+how to address the agnostic selection bias for robust model learning is of
+paramount importance for both academic research and real applications. In this
+paper, under the assumption that causal relationships among variables are
+robust across domains, we incorporate causal techniques into predictive modeling
+and propose a novel Causally Regularized Logistic Regression (CRLR) algorithm
+by jointly optimizing global confounder balancing and weighted logistic
+regression. Global confounder balancing helps to identify causal features,
+whose causal effects on the outcome are stable across domains; then performing
+logistic regression on those causal features constructs a robust predictive
+model against the agnostic bias. To validate the effectiveness of our CRLR
+algorithm, we conduct comprehensive experiments on both synthetic and real
+world datasets. 
Experimental results clearly demonstrate that our CRLR +algorithm outperforms the state-of-the-art methods, and the interpretability of +our method can be fully depicted by the feature visualization. +" +The K-inductive Structure of the Noncommutative Fourier Transform," The noncommutative Fourier transform of the irrational rotation C*-algebra is +shown to have a K-inductive structure (at least for a large concrete class of +irrational parameters, containing dense $G_\delta$'s). This is a structure for +automorphisms that is analogous to Huaxin Lin's notion of tracially AF for +C*-algebras, except that it requires more structure from the complementary +projection. +" +Rigidity of inversive distance circle packings revisited," Inversive distance circle packing metric was introduced by P Bowers and K +Stephenson \cite{BS} as a generalization of Thurston's circle packing metric +\cite{T1}. They conjectured that the inversive distance circle packings are +rigid. For nonnegative inversive distance, Guo \cite{Guo} proved the +infinitesimal rigidity and then Luo \cite{L3} proved the global rigidity. In +this paper, based on an observation of Zhou \cite{Z}, we prove this conjecture +for inversive distance in $(-1, +\infty)$ by variational principles. We also +study the global rigidity of a combinatorial curvature introduced in +\cite{GJ4,GX4,GX6} with respect to the inversive distance circle packing +metrics where the inversive distance is in $(-1, +\infty)$. +" +Continuous and discrete one dimensional autonomous fractional ODEs," In this paper, we study 1D autonomous fractional ODEs $D_c^{\gamma}u=f(u), 0< +\gamma <1$, where $u: [0,\infty)\mapsto\mathbb{R}$ is the unknown function and +$D_c^{\gamma}$ is the generalized Caputo derivative introduced by Li and Liu ( +arXiv:1612.05103). 
Based on the existence and uniqueness theorem and regularity +results in previous work, we show the monotonicity of solutions to the +autonomous fractional ODEs and several versions of comparison principles. We +also perform a detailed discussion of the asymptotic behavior for $f(u)=Au^p$. +In particular, based on an Osgood type blow-up criteria, we find relatively +sharp bounds of the blow-up time in the case $A>0, p>1$. These bounds indicate +that as the memory effect becomes stronger ($\gamma\to 0$), if the initial +value is big, the blow-up time tends to zero while if the initial value is +small, the blow-up time tends to infinity. In the case $A<0, p>1$, we show that +the solution decays to zero more slowly compared with the usual derivative. +Lastly, we show several comparison principles and Grönwall inequalities for +discretized equations, and perform some numerical simulations to confirm our +analysis. +" +Learning Vector Autoregressive Models with Latent Processes," We study the problem of learning the support of transition matrix between +random processes in a Vector Autoregressive (VAR) model from samples when a +subset of the processes are latent. It is well known that ignoring the effect +of the latent processes may lead to very different estimates of the influences +among observed processes, and we are concerned with identifying the influences +among the observed processes, those between the latent ones, and those from the +latent to the observed ones. We show that the support of transition matrix +among the observed processes and lengths of all latent paths between any two +observed processes can be identified successfully under some conditions on the +VAR model. From the lengths of latent paths, we reconstruct the latent subgraph +(representing the influences among the latent processes) with a minimum number +of variables uniquely if its topology is a directed tree. 
Furthermore, we +propose an algorithm that finds all possible minimal latent graphs under some +conditions on the lengths of latent paths. Our results apply to both +non-Gaussian and Gaussian cases, and experimental results on various synthetic +and real-world datasets validate our theoretical results. +" +Development of Single-Shot Multi-Frame Imaging of Cylindrical Shock Waves in a Multi-Layered Assembly," We demonstrate single-shot multi-frame imaging of quasi-2D cylindrically +converging shock waves as they propagate through a multi-layer target sample +assembly. We visualize the shock with sequences of up to 16 images, using a +Fabry-Perot cavity to generate a pulse train that can be used in various +imaging configurations. We employ multi-frame shadowgraph and dark-field +imaging to measure the amplitude and phase of the light transmitted through the +shocked target. Single-shot multi-frame imaging tracks geometric distortion and +additional features in our images that were not previously resolvable in this +experimental geometry. Analysis of our images, in combination with simulations, +shows that the additional image features are formed by a coupled wave structure +resulting from interface effects in our targets. This technique presents a new +capability for tabletop imaging of shock waves that can be easily extended to +experiments at large-scale facilities. +" +Design rules for modulation-doped AlAs quantum wells," Thanks to their multi-valley, anisotropic, energy band structure, +two-dimensional electron systems (2DESs) in modulation-doped AlAs quantum wells +(QWs) provide a unique platform to investigate electron interaction physics and +ballistic transport. Indeed, a plethora of phenomena unseen in other 2DESs have +been observed over the past decade. However, a foundation for sample design is +still lacking for AlAs 2DESs, limiting the means to achieve optimal quality +samples. 
Here we present a systematic study on the fabrication of
+modulation-doped AlAs and GaAs QWs over a wide range of Al$_x$Ga$_{1-x}$As barrier
+alloy compositions. Our data indicate clear similarities in modulation doping
+mechanisms for AlAs and GaAs, and provide guidelines for the fabrication of
+very high quality AlAs 2DESs. We highlight the unprecedented quality of the
+fabricated AlAs samples by presenting the magnetotransport data for low density
+($\sim 1\times 10^{11}$ cm$^{-2}$) AlAs 2DESs that exhibit high-order fractional quantum Hall
+signatures.
+"
+Study of Clear Sky Models for Singapore," The estimation of total solar irradiance falling on the earth's surface is
+important in the field of solar energy generation and forecasting. Several
+clear-sky solar radiation models have been developed over the last few decades.
+Most of these models are based on empirical distributions of various
+geographical parameters, while a few models consider various atmospheric
+effects in the solar energy estimation. In this paper, we perform a comparative
+analysis of several popular clear-sky models in the tropical region of
+Singapore. This is important in countries like Singapore, where we are
+primarily focused on reliable and efficient solar energy generation. We analyze
+and compare three popular clear-sky models that are widely used in the
+literature. We validate our solar estimation results using actual solar
+irradiance measurements obtained from collocated weather stations. We finally
+identify the most reliable clear-sky model for Singapore, based on all clear
+sky days in a year.
+"
+Multiplicatively closed Markov models must form Lie algebras," We prove that the probability substitution matrices obtained from a
+continuous-time Markov chain form a multiplicatively closed set if and only if
+the rate matrices associated with the chain form a linear space spanning a Lie
+algebra. 
The key original contribution we make is to overcome an obstruction,
+due to the presence of inequalities that are unavoidable in the probabilistic
+application, that prevents free manipulation of terms in the
+Baker-Campbell-Hausdorff formula.
+"
+Polynomial upper bound on interior Steklov nodal sets," We study solutions of uniformly elliptic PDE with Lipschitz leading
+coefficients and bounded lower order coefficients. We extend previous results
+of A. Logunov concerning nodal sets of harmonic functions and, in particular,
+prove polynomial upper bounds on interior nodal sets of Steklov eigenfunctions
+in terms of the corresponding eigenvalue $ \lambda $.
+"
+Multiplicative Normalizing Flows for Variational Bayesian Neural Networks," We reinterpret multiplicative noise in neural networks as auxiliary random
+variables that augment the approximate posterior in a variational setting for
+Bayesian neural networks. We show that through this interpretation it is both
+efficient and straightforward to improve the approximation by employing
+normalizing flows while still allowing for local reparametrizations and a
+tractable lower bound. In experiments we show that with this new approximation
+we can significantly improve upon classical mean field for Bayesian neural
+networks on both predictive accuracy as well as predictive uncertainty.
+"
+A categorical semantics for causal structure," We present a categorical construction for modelling causal structures within
+a general class of process theories that include the theory of classical
+probabilistic processes as well as quantum theory. Unlike prior constructions
+within categorical quantum mechanics, the objects of this theory encode
+fine-grained causal relationships between subsystems and give a new method for
+expressing and deriving consequences for a broad class of causal structures. 
We
+show that this framework enables one to define families of processes which are
+consistent with arbitrary acyclic causal orderings. In particular, one can
+define one-way signalling (a.k.a. semi-causal) processes, non-signalling
+processes, and quantum $n$-combs. Furthermore, our framework is general enough
+to accommodate recently-proposed generalisations of classical and quantum
+theory where processes only need to have a fixed causal ordering locally, but
+globally allow indefinite causal ordering.
+To illustrate this point, we show that certain processes of this kind, such
+as the quantum switch, the process matrices of Oreshkov, Costa, and Brukner,
+and a classical three-party example due to Baumeler, Feix, and Wolf are all
+instances of a certain family of processes we refer to as $\textrm{SOC}_n$ in
+the appropriate category of higher-order causal processes. After defining these
+families of causal structures within our framework, we give derivations of
+their operational behaviour using simple, diagrammatic axioms.
+"
+Progressive Learning for Systematic Design of Large Neural Networks," We develop an algorithm for systematic design of a large artificial neural
+network using a progression property. We find that some non-linear functions,
+such as the rectifier linear unit and its derivatives, hold the property. The
+systematic design addresses the choice of network size and regularization of
+parameters. The number of nodes and layers in the network increases in
+progression with the objective of consistently reducing an appropriate cost.
+One layer is optimized at a time, where appropriate parameters are learned
+using convex optimization. Regularization parameters for convex optimization
+do not need a significant manual effort for tuning. We also use random
+instances for some weight matrices, and that helps to reduce the number of
+parameters we learn. 
+The developed network is expected to show good generalization power due to +appropriate regularization and use of random weights in the layers. This +expectation is verified by extensive experiments for classification and +regression problems, using standard databases. +" +Sequential Monte Carlo algorithms for a class of outer measures," Closed-form stochastic filtering equations can be derived in a general +setting where probability distributions are replaced by some specific outer +measures. In this article, we study how the principles of the sequential Monte +Carlo method can be adapted for the purpose of practical implementation of +these equations. In particular, we explore how sampling can be used to provide +support points for the approximation of these outer measures. This step enables +practical algorithms to be derived in the spirit of particle filters. The +performance of the obtained algorithms is demonstrated in simulations and their +versatility is illustrated through various examples. +" +Charge-induced force-noise on free-falling test masses: results from LISA Pathfinder," We report on electrostatic measurements made on board the European Space +Agency mission LISA Pathfinder. Detailed measurements of the charge-induced +electrostatic forces exerted on free-falling test masses (TMs) inside the +capacitive gravitational reference sensor are the first made in a relevant +environment for a space-based gravitational wave detector. Employing a +combination of charge control and electric-field compensation, we show that the +level of charge-induced acceleration noise on a single TM can be maintained at +a level close to 1.0 fm/s^2/sqrt(Hz) across the 0.1-100 mHz frequency band that +is crucial to an observatory such as LISA. 
Using dedicated measurements that +detect these effects in the differential acceleration between the two test +masses, we resolve the stochastic nature of the TM charge build up due to +interplanetary cosmic rays and the TM charge-to-force coupling through stray +electric fields in the sensor. All our measurements are in good agreement with +predictions based on a relatively simple electrostatic model of the LISA +Pathfinder instrument. +" +Pseudoholomorphic Maps Relative to Normal Crossings Symplectic Divisors: Compactification," Inspired by the log Gromov-Witten theory of Gross-Siebert/Abramovich-Chen, we +introduce a geometric notion of log pseudoholomorphic map relative to simple +normal crossings symplectic divisors defined in [9]. For certain almost complex +structures, we show that the moduli space of stable log pseudoholomorphic maps +of any fixed type is compact and metrizable with respect to an enhancement of +the Gromov topology. In the case of smooth symplectic divisors, our +compactification is often smaller than the relative compactification and there +is a projection map from the former onto the latter. The latter is constructed +via expanded degenerations of the target. Our construction does not need any +modification of (or any extra structure on) the target. Unlike the classical +moduli spaces of stable maps, these log moduli spaces are often virtually +singular. We describe an explicit toric model for the normal cone to each +stratum in terms of the defining combinatorial data of that stratum. In +upcoming papers, we will define a natural Fredholm operator which gives us the +deformation/obstruction spaces of each stratum and prove a gluing theorem for +smoothing log maps in the normal direction to each stratum. With minor +modifications to the theory of Kuranishi structures, the latter would allow us +to construct a virtual fundamental class for every such log moduli space. 
+"
+Improving Efficiency in Convolutional Neural Network with Multilinear Filters," The excellent performance of deep neural networks has enabled us to solve
+several automatization problems, opening an era of autonomous devices. However,
+current deep net architectures are heavy with millions of parameters and
+require billions of floating point operations. Several works have been
+developed to compress a pre-trained deep network to reduce memory footprint
+and, possibly, computation. Instead of compressing a pre-trained network, in
+this work, we propose a generic neural network layer structure employing
+multilinear projection as the primary feature extractor. The proposed
+architecture requires several times less memory as compared to the traditional
+Convolutional Neural Networks (CNN), while inheriting the design
+principles of a CNN. In addition, the proposed architecture is equipped with
+two computation schemes that enable computation reduction or scalability.
+Experimental results show the effectiveness of our compact projection that
+outperforms traditional CNN, while requiring far fewer parameters.
+"
+Combinatorial Penalties: Which structures are preserved by convex relaxations?," We consider the homogeneous and the non-homogeneous convex relaxations for
+combinatorial penalty functions defined on support sets. Our study identifies
+key differences in the tightness of the resulting relaxations through the
+notion of the lower combinatorial envelope of a set-function along with new
+necessary conditions for support identification. We then propose a general
+adaptive estimator for convex monotone regularizers, and derive new sufficient
+conditions for support recovery in the asymptotic setting.
+"
+Multispecies fruit flower detection using a refined semantic segmentation network," In fruit production, critical crop management decisions are guided by bloom
+intensity, i.e., the number of flowers present in an orchard. 
Despite its
+importance, bloom intensity is still typically estimated by means of human
+visual inspection. Existing automated computer vision systems for flower
+identification are based on hand-engineered techniques that work only under
+specific conditions and with limited performance. This work proposes an
+automated technique for flower identification that is robust to uncontrolled
+environments and applicable to different flower species. Our method relies on
+an end-to-end residual convolutional neural network (CNN) that represents the
+state-of-the-art in semantic segmentation. To enhance its sensitivity to
+flowers, we fine-tune this network using a single dataset of apple flower
+images. Since CNNs tend to produce coarse segmentations, we employ a refinement
+method to better distinguish between individual flower instances. Without any
+pre-processing or dataset-specific training, experimental results on images of
+apple, peach and pear flowers, acquired under different conditions, demonstrate
+the robustness and broad applicability of our method.
+"
+Mobile phone records to feed activity-based travel demand models: MATSim for studying a cordon toll policy in Barcelona," Activity-based models appeared as an answer to the limitations of the
+traditional trip-based and tour-based four-stage models. The fundamental
+assumption of activity-based models is that travel demand originates from
+people performing their daily activities. This is why they include a consistent
+representation of time, of the persons and households, time-dependent routing,
+and microsimulation of travel demand and traffic. In spite of their potential
+to simulate traffic demand management policies, their practical application is
+still limited. One of the main reasons is that these models require a huge
+amount of very detailed input data hard to get with surveys. However, the
+pervasive use of mobile devices has brought a valuable new source of data. 
The
+work presented here has a twofold objective: first, to demonstrate the
+capability of mobile phone records to feed activity-based transport models,
+and, second, to assert the advantages of using activity-based models to
+estimate the effects of traffic demand management policies. Activity diaries
+for the metropolitan area of Barcelona are reconstructed from mobile phone
+records. This information is then employed as input for building a transport
+MATSim model of the city. The model calibration and validation process proves
+the quality of the activity diaries obtained. The possible impacts of a cordon
+toll policy applied to two different areas of the city and at different times
+of the day are then studied. Our results show the way in which the modal share
+is modified in each of the considered scenarios. The possibility of evaluating
+the effects of the policy at both aggregated and traveller level, together with
+the ability of the model to capture policy impacts beyond the cordon toll area
+confirm the advantages of activity-based models for the evaluation of traffic
+demand management policies.
+"
+Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries," In this paper we analyze the gate activation signals inside the gated
+recurrent neural networks, and find the temporal structure of such signals is
+highly correlated with the phoneme boundaries. This correlation is further
+verified by a set of experiments for phoneme segmentation, in which better
+results compared to standard approaches were obtained.
+"
+Fully Convolutional Grasp Detection Network with Oriented Anchor Box," In this paper, we present a real-time approach to predict multiple grasping
+poses for a parallel-plate robotic gripper using RGB images. A model with
+oriented anchor box mechanism is proposed and a new matching strategy is used
+during the training process. 
An end-to-end fully convolutional neural network
+is employed in our work. The network consists of two parts: the feature
+extractor and multi-grasp predictor. The feature extractor is a deep
+convolutional neural network. The multi-grasp predictor regresses grasp
+rectangles from predefined oriented rectangles, called oriented anchor boxes,
+and classifies the rectangles into graspable and ungraspable. On the standard
+Cornell Grasp Dataset, our model achieves an accuracy of 97.74% and 96.61% on
+image-wise split and object-wise split respectively, and outperforms the latest
+state-of-the-art approach by 1.74% on image-wise split and 0.51% on object-wise
+split.
+"
+Wetting States of Two-Dimensional Drops under Gravity," An analytical model is proposed for the Young-Laplace equation of
+two-dimensional (2D) drops under gravity. Inspired by the pioneering work of
+Landau & Lifshitz (1987), we derive analytical expressions of the profile of
+drops on flat surfaces, for arbitrary contact angles and drop volume. We then
+extend our theory for drops on inclined surfaces and reveal that the contact
+line plays a key role on the wetting state of the drops: (1) when the contact
+line is completely pinned, the advancing and receding contact angles and the
+shape of the drop can be uniquely determined by the predefined droplet volume,
+sliding angle and contact area, which does not rely on the Young contact angle;
+(2) when the drop has a movable contact line, it would achieve a wetting state
+with a minimum free energy resulting from the competition between the surface
+tension and gravity. Our theory is in excellent agreement with numerical
+results. 
+"
+Polarity-tunable magnetic tunnel junctions based on ferromagnetism at oxide heterointerfaces," Complex oxide systems have attracted considerable attention because of their
+fascinating properties, including the magnetic ordering at the conducting
+interface between two band insulators, such as LaAlO3 (LAO) and SrTiO3 (STO).
+However, the manipulation of the spin degree of freedom at the LAO/STO
+heterointerface has remained elusive. Here, we have fabricated hybrid magnetic
+tunnel junctions consisting of Co and LAO/STO ferromagnets with the insertion
+of a Ti layer in between, which clearly exhibit magnetic switching and the
+tunnelling magnetoresistance (TMR) effect below 10 K. The magnitude and the
+sign of the TMR are strongly dependent on the direction of the rotational
+magnetic field parallel to the LAO/STO plane, which is attributed to a strong
+Rashba-type spin-orbit coupling in the LAO/STO heterostructure. Our study
+provides a further support for the existence of the macroscopic ferromagnetism
+at LAO/STO heterointerfaces and opens a novel route to realize interfacial
+spintronics devices.
+"
+Pair-breaking of multi-gap superconductivity under parallel magnetic fields in electric-field-induced surface metallic state," Roles of paramagnetic and diamagnetic pair-breaking effects in
+superconductivity in electric-field-induced surface metallic state are studied
+by Bogoliubov-de Gennes equation, when magnetic fields are applied parallel to
+the surface. The multi-gap states of sub-bands are related to the depth
+dependence and the magnetic field dependence of superconductivity. In the
+Fermi-energy density of states and the spin density, sub-band contributions
+successively appear from higher-level sub-bands with increasing magnetic
+fields. The characteristic magnetic field dependence may be a key feature to
+identify the multi-gap structure of the surface superconductivity. 
+" +A Lottery Model for Center-type Problems With Outliers," In this paper, we give tight approximation algorithms for the $k$-center and +matroid center problems with outliers. Unfairness arises naturally in this +setting: certain clients could always be considered as outliers. To address +this issue, we introduce a lottery model in which each client $j$ is allowed to +submit a parameter $p_j \in [0,1]$ and we look for a random solution that +covers every client $j$ with probability at least $p_j$. Our techniques include +a randomized rounding procedure to round a point inside a matroid intersection +polytope to a basis plus at most one extra item such that all marginal +probabilities are preserved and such that a certain linear function of the +variables does not decrease in the process with probability one. +" +Exact solutions in two-dimensional topological superconductors: Hubbard interaction induced spontaneous symmetry breaking," We present an exactly solvable model of a spin-triplet $f$-wave topological +superconductor on the honeycomb lattice in the presence of the Hubbard +interaction for arbitrary interaction strength. First we show that the +Kane-Mele model with the corresponding spin-triplet $f$-wave superconducting +pairings becomes a full-gap topological superconductor possessing the +time-reversal symmetry. We then introduce the Hubbard interaction. The exactly +solvable condition is found to be the emergence of perfect flat bands at zero +energy. They generate infinitely many conserved quantities. It is intriguing +that the Hubbard interaction breaks the time-reversal symmetry spontaneously. +As a result, the system turns into a trivial superconductor. We demonstrate +this topological property based on the topological number and by analyzing the +edge state in nanoribbon geometry. 
+"
+Seymour's second neighbourhood conjecture for quasi-transitive oriented graphs," Seymour's second neighbourhood conjecture asserts that every oriented graph
+has a vertex whose second out-neighbourhood is at least as large as its
+out-neighbourhood. In this paper, we prove that the conjecture holds for
+quasi-transitive oriented graphs, which is a superclass of tournaments and
+transitive acyclic digraphs. A digraph $D$ is called quasi-transitive if for
+every pair $xy,yz$ of arcs between distinct vertices $x,y,z$, $xz$ or $zx$
+(""or"" is inclusive here) is in $D$.
+"
+Analyzing the network structure and gender differences among the members of the Networked Knowledge Organization Systems (NKOS) community," In this paper, we analyze a major part of the research output of the
+Networked Knowledge Organization Systems (NKOS) community in the period 2000 to
+2016 from a network analytical perspective. We focus on the papers presented at
+the European and U.S. NKOS workshops and in addition four special issues on
+NKOS in the last 16 years. For this purpose, we have generated an open dataset,
+the ""NKOS bibliography"" which covers the bibliographic information of the
+research output. We analyze the co-authorship network of this community which
+results in 123 papers with a sum of 256 distinct authors. We use standard
+network analytic measures such as degree, betweenness and closeness centrality
+to describe the co-authorship network of the NKOS dataset. First, we
+investigate global properties of the network over time. Second, we analyze the
+centrality of the authors in the NKOS network. Lastly, we investigate gender
+differences in collaboration behavior in this community. Our results show that
+apart from differences in centrality measures of the scholars, they have a
+higher tendency to collaborate with those in the same institution or the same
+geographic proximity. We also find that homophily is higher among women in this
+community. 
Apart from small differences in closeness and clustering among men +and women, we do not find any significant dissimilarities with respect to other +centralities. +" +On a new closed formula for the solution of second order linear difference equations and applications," In this note, we establish a new closed formula for the solution of +homogeneous second-order linear difference equations with constant coefficients +by using matrix theory. This, in turn, gives new closed formulas concerning all +sequences of this type such as the Fibonacci and Lucas sequences. As +applications; we show that Binet's formula, in this case, is valid for negative +integers as well. Finally, we find new summation formulas relating the elements +of such sequences. +" +Flat bands in lattices with non-Hermitian coupling," We study non-Hermitian photonic lattices that exhibit competition between +conservative and non-Hermitian (gain/loss) couplings. A bipartite sublattice +symmetry enforces the existence of non-Hermitian flat bands, which are +typically embedded in an auxiliary dispersive band and give rise to +non-diffracting ""compact localized states"". Band crossings take the form of +non-Hermitian degeneracies known as exceptional points. Excitations of the +lattice can produce either diffracting or amplifying behaviors. If the +non-Hermitian coupling is fine-tuned to generate an effective $\pi$ flux, the +lattice spectrum becomes completely flat, a non-Hermitian analogue of +Aharonov-Bohm caging in which the magnetic field is replaced by balanced gain +and loss. When the effective flux is zero, the non-Hermitian band crossing +points give rise to asymmetric diffraction and anomalous linear amplification. 
+"
+An ultra-sensitive and wideband magnetometer based on a superconducting quantum interference device," The magnetic field noise in superconducting quantum interference devices
+(SQUIDs) used for biomagnetic research such as magnetoencephalography or
+ultra-low-field nuclear magnetic resonance is usually limited by instrumental
+dewar noise. We constructed a wideband, ultra-low noise system with a 45 mm
+diameter superconducting pick-up coil inductively coupled to a current sensor
+SQUID. Thermal noise in the liquid helium dewar is minimized by using
+aluminized polyester fabric as superinsulation and aluminum oxide strips as
+heat shields, respectively. With a magnetometer pick-up coil in the center of
+the Berlin magnetically shielded room 2 (BMSR2) a noise level of around 150
+aT Hz$^{-1/2}$ is achieved in the white noise regime between about 20 kHz and
+the system bandwidth of about 2.5 MHz. At lower frequencies, the resolution is
+limited by magnetic field noise arising from the walls of the shielded room.
+Modeling the BMSR2 as a closed cube with continuous \mu-metal walls we can
+quantitatively reproduce its measured field noise.
+"
+Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations," We introduce physics informed neural networks -- neural networks that are
+trained to solve supervised learning tasks while respecting any given law of
+physics described by general nonlinear partial differential equations. In this
+two part treatise, we present our developments in the context of solving two
+main classes of problems: data-driven solution and data-driven discovery of
+partial differential equations. Depending on the nature and arrangement of the
+available data, we devise two distinct classes of algorithms, namely continuous
+time and discrete time models. 
The resulting neural networks form a new class +of data-efficient universal function approximators that naturally encode any +underlying physical laws as prior information. In this first part, we +demonstrate how these networks can be used to infer solutions to partial +differential equations, and obtain physics-informed surrogate models that are +fully differentiable with respect to all input coordinates and free parameters. +" +Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL 2017)," The large scale of scholarly publications poses a challenge for scholars in +information seeking and sensemaking. Bibliometrics, information retrieval (IR), +text mining and NLP techniques could help in these search and look-up +activities, but are not yet widely used. This workshop is intended to stimulate +IR researchers and digital library professionals to elaborate on new approaches +in natural language processing, information retrieval, scientometrics, text +mining and recommendation techniques that can advance the state-of-the-art in +scholarly document understanding, analysis, and retrieval at scale. The BIRNDL +workshop at SIGIR 2017 will incorporate an invited talk, paper sessions and the +third edition of the Computational Linguistics (CL) Scientific Summarization +Shared Task. +" +Dimension free $L^p$-bounds of maximal functions associated to products of Euclidean balls," A few years ago, Bourgain proved that the centered Hardy-Littlewood maximal +function for the cube has dimension free $L^p$-bounds for $p>1$. We extend his +result to products of Euclidean balls of different dimensions. In addition, we +provide dimension free $L^p$-bounds for the maximal function associated to +products of Euclidean spheres for $p > \frac{N}{N-1}$ and $N \ge 3$, where +$N-1$ is the lowest occurring dimension of a single sphere. 
The aforementioned
+result is obtained from the latter one by applying the method of rotations from
+Stein's pioneering work on the spherical maximal function.
+"
+Pulling in models of cell migration," There are numerous scenarios in which populations of cells migrate in crowded
+environments. Typical examples include wound healing, cancer growth and embryo
+development. In these crowded environments cells are able to interact with each
+other in a variety of ways. These include excluded volume interactions,
+adhesion, repulsion, cell signalling, pushing and pulling. One popular way to
+understand the behaviour of a group of interacting cells is through an
+agent-based model (ABM). A typical aim of modellers using such representations
+is to elucidate how the microscopic interactions at the cell-level impact on
+the macroscopic behaviour of the population. The complex cell-cell interactions
+listed above have also been incorporated into such models; all apart from
+cell-cell pulling. In this paper we consider this under-represented cell-cell
+interaction, in which an active cell is able to `pull' a nearby neighbour as it
+moves. We incorporate a variety of potential cell-cell pulling mechanisms into
+on- and off-lattice agent-based volume exclusion models of cell movement. For
+each of these agent-based models we derive a continuum partial differential
+equation which describes the evolution of the cells at a population-level. We
+study the agreement between the ABMs and the continuum, population-based
+models, and compare and contrast a range of ABMs (accounting for the different
+pulling mechanisms) with each other. We find generally good agreement between
+the ABMs and the corresponding continuum models that worsens as the agent-based
+models become more complex. Interestingly, we observe that the partial
+differential equations that we derive differ significantly, depending on
+whether they were derived from on- or off-lattice ABMs of pulling. 
This hints
+that it is important to employ the appropriate ABM when representing pulling
+cell-cell interactions.
+"
+Spatially-resolved study of the Meissner effect in superconductors using NV-centers-in-diamond optical magnetometry," Non-invasive magnetic field sensing using optically-detected magnetic
+resonance of nitrogen-vacancy (NV) centers in diamond was used to study spatial
+distribution of the magnetic induction upon penetration and expulsion of weak
+magnetic fields in several representative superconductors. Vector magnetic
+fields were measured on the surface of conventional, Pb and Nb, and
+unconventional, LuNi$_2$B$_2$C, Ba$_{0.6}$K$_{0.4}$Fe$_2$As$_2$,
+Ba(Fe$_{0.93}$Co$_{0.07}$)$_2$As$_2$, and CaKFe$_4$As$_4$, superconductors,
+with diffraction-limited spatial resolution using a variable-temperature
+confocal system. Magnetic induction profiles across the crystal edges were
+measured in zero-field-cooled (ZFC) and field-cooled (FC) conditions. While all
+superconductors show nearly perfect screening of magnetic fields applied after
+cooling to temperatures well below the superconducting transition, $T_c$, a
+range of very different behaviors was observed for Meissner expulsion upon
+cooling in static magnetic field from above $T_c$. Substantial conventional
+Meissner expulsion is found in LuNi$_2$B$_2$C, paramagnetic Meissner effect
+(PME) is found in Nb, and virtually no expulsion is observed in iron-based
+superconductors. In all cases, good correlation with macroscopic measurements
+of total magnetic moment is found. Our measurements of the spatial distribution
+of magnetic induction provide insight into the microscopic physics of the
+Meissner effect. 
+"
+A magnetic skyrmion as a non-linear resistive element - a potential building block for reservoir computing," Inspired by the human brain, there is a strong effort to find alternative
+models of information processing capable of imitating the high energy
+efficiency of neuromorphic information processing. Reservoir computing networks
+are one possible realization of cognitive computing. These networks are built
+out of non-linear resistive elements which are recursively connected. We
+propose that a skyrmion network embedded in frustrated magnetic films may
+provide a suitable physical implementation for reservoir computing
+applications. The significant key ingredient of such a network is a
+two-terminal device with non-linear voltage characteristics originating from
+single-layer magnetoresistive effects, like the anisotropic magnetoresistance
+or the recently discovered non-collinear magnetoresistance. The most basic
+element for a reservoir computing network built from ""skyrmion fabrics"" is a
+single skyrmion embedded in a ferromagnetic ribbon. In order to pave the way
+towards reservoir computing systems based on skyrmion fabrics, here we simulate
+and analyze i) the current flow through a single magnetic skyrmion due to the
+anisotropic magneto-resistive effect and ii) the combined physics of local
+pinning and the anisotropic magneto-resistive effect.
+"
+Bayesian optimal designs for dose-response curves with common parameters," The issue of determining not only an adequate dose but also a dosing
+frequency of a drug arises frequently in Phase II clinical trials. This results
+in the comparison of models which have some parameters in common. Planning such
+studies based on Bayesian optimal designs offers robustness to our conclusions
+since these designs, unlike locally optimal designs, are efficient even if the
+parameters are misspecified. 
In this paper we develop approximate design theory +for Bayesian $D$-optimality for nonlinear regression models with common +parameters and investigate the cases of common location or common location and +scale parameters separately. Analytical characterisations of saturated Bayesian +$D$-optimal designs are derived for frequently used dose-response models and +the advantages of our results are illustrated via a numerical investigation. +" +3D Consistent & Robust Segmentation of Cardiac Images by Deep Learning with Spatial Propagation," We propose a method based on deep learning to perform cardiac segmentation on +short axis MRI image stacks iteratively from the top slice (around the base) to +the bottom slice (around the apex). At each iteration, a novel variant of U-net +is applied to propagate the segmentation of a slice to the adjacent slice below +it. In other words, the prediction of a segmentation of a slice is dependent +upon the already existing segmentation of an adjacent slice. 3D-consistency is +hence explicitly enforced. The method is trained on a large database of 3078 +cases from UK Biobank. It is then tested on 756 different cases from UK Biobank +and three other state-of-the-art cohorts (ACDC with 100 cases, Sunnybrook with +30 cases, RVSC with 16 cases). Results comparable or even better than the +state-of-the-art in terms of distance measures are achieved. They also +emphasize the assets of our method, namely enhanced spatial consistency +(currently neither considered nor achieved by the state-of-the-art), and the +generalization ability to unseen cases even from other databases. +" +"A Fast Method to Calculate Kendall Correlations of Large, Sparse Spike Trains"," Despite the fact that the Kendall correlation is becoming recognized as an +important tool in neuroscience, most standard algorithms to compute the Kendall +correlation of large spike trains are computationally very slow. 
Here we
+present a new, significantly faster method for calculating the Kendall
+correlation of spike trains that takes advantage of their structure. We show
+that our method is more than 300 times faster than traditional approaches for
+particularly large, sparse spike trains, and still more than 10 times faster
+for large, but considerably less sparse, spike trains. A MATLAB function
+executing the method described here was made freely available online.
+"
+On-the-Fly Adaptation of Regression Forests for Online Camera Relocalisation," Camera relocalisation is an important problem in computer vision, with
+applications in simultaneous localisation and mapping, virtual/augmented
+reality and navigation. Common techniques either match the current image
+against keyframes with known poses coming from a tracker, or establish 2D-to-3D
+correspondences between keypoints in the current image and points in the scene
+in order to estimate the camera pose. Recently, regression forests have become
+a popular alternative to establish such correspondences. They achieve accurate
+results, but must be trained offline on the target scene, preventing
+relocalisation in new environments. In this paper, we show how to circumvent
+this limitation by adapting a pre-trained forest to a new scene on the fly. Our
+adapted forests achieve relocalisation performance that is on par with that of
+offline forests, and our approach runs in under 150ms, making it desirable for
+real-time systems that require online relocalisation.
+"
+High-Kinetic Inductance Additive Manufactured Superconducting Microwave Cavity," Investigations into the microwave surface impedance of superconducting
+resonators have led to the development of single photon counters that rely on
+kinetic inductance for their operation. Meanwhile, concurrent progress in
+additive manufacturing, `3D printing', has opened up a previously inaccessible
+design space for waveguide resonators. 
In this manuscript, we present results from the first
+synthesis of these two technologies in a titanium, aluminum, vanadium
+(Ti-6Al-4V) superconducting radio frequency resonator which exploits a design
+unattainable through conventional fabrication means. We find that Ti-6Al-4V has
+two distinct superconducting transition temperatures observable in heat
+capacity measurements. The higher transition temperature is in agreement with
+DC resistance measurements. The lower transition temperature, not previously
+reported in the literature, is consistent with the observed temperature
+dependence of the superconducting microwave surface impedance. From the surface
+reactance, we extract a London penetration depth of $8\pm3{\mu}$m - roughly an
+order of magnitude larger than in other titanium alloys and several orders of
+magnitude larger than in conventional elemental superconductors. This large
+London penetration depth suggests that Ti-6Al-4V may be a suitable material for
+high kinetic inductance applications such as single photon counting or
+parametric amplification used in quantum computing.
+"
+Known Boundary Emulation of Complex Computer Models," Computer models are now widely used across a range of scientific disciplines
+to describe various complex physical systems; however, to perform full
+uncertainty quantification we often need to employ emulators. An emulator is a
+fast statistical construct that mimics the complex computer model, and greatly
+aids the vastly more computationally intensive uncertainty quantification
+calculations that a serious scientific analysis often requires. In some cases,
+the complex model can be solved far more efficiently for certain parameter
+settings, leading to boundaries or hyperplanes in the input parameter space
+where the model is essentially known. 
We show that for a large class of
+Gaussian process style emulators, multiple boundaries can be formally
+incorporated into the emulation process, by Bayesian updating of the emulators
+with respect to the boundaries, for trivial computational cost. The resulting
+updated emulator equations are given analytically. This leads to emulators that
+possess increased accuracy across large portions of the input parameter space.
+We also describe how a user can incorporate such boundaries within standard
+black box GP emulation packages that are currently available, without altering
+the core code. Appropriate designs of model runs in the presence of known
+boundaries are then analysed, with two kinds of general purpose designs
+proposed. We then apply the improved emulation and design methodology to an
+important systems biology model of hormonal crosstalk in Arabidopsis thaliana.
+"
+Investigating the role of musical genre in human perception of music stretching resistance," To stretch a music piece to a given length is a common demand in people's
+daily lives, e.g., in audio-video synchronization and animation production.
+However, it is not always guaranteed that the stretched music piece is
+acceptable for a general audience since music stretching suffers from people's
+perceptual artefacts. Over-stretching a music piece will make it uncomfortable
+for human psychoacoustic hearing. The research on music stretching resistance
+attempts to estimate the maximum stretchability of music pieces to help avoid
+over-stretching. It has been observed that musical genres can significantly
+improve the accuracy of automatic estimation of music stretching resistance,
+but how musical genres are related to music stretching resistance has never
+been explained or studied in detail in the literature. In this paper, the
+characteristics of music stretching resistance are compared across different
+musical genres. 
It is found that music stretching resistance has strong
+intra-genre cohesiveness and inter-genre discrepancies in the experiments.
+Moreover, the ambiguity and the symmetry of music stretching resistance are
+also observed in the experimental analysis. These findings lead to a new
+measure of the similarity between different musical genres based on their
+music stretching resistance. In addition, the analysis of variance (ANOVA) also
+supports the findings in this paper by verifying the significance of musical
+genre in shaping music stretching resistance.
+"
+Emerging Market Corporate Bonds as First-to-Default Baskets," Emerging market hard-currency bonds are an asset class of growing importance,
+and contain exposure to an EM sovereign and the underlying industry. The
+authors investigate how to model this as a modification of the well-known
+first-to-default (FtD) basket, using the structural model, and find the
+approach feasible.
+"
+Fabrication of plasmonic surface relief gratings for the application of band-pass filter in UV-Visible spectral range," The measured experimental results of optical diffraction of 10, 5 and 3.4
+micrometer period plasmonic surface relief gratings are presented for the
+application of a band-pass filter in the visible spectral range. A conventional
+scanning electron microscope (SEM) is used to fabricate the grating structures
+on the silver halide based film (substrate) by exposing them to the electron
+beam in raster scan fashion. Morphological characterization of the gratings,
+performed by atomic force microscopy (AFM), shows that the period, height and
+profile depend on the lines per frame, beam spot, single line dwell time, beam
+current, and accelerating voltage of the electron beam. Optical transmission
+spectra of the 10 micrometer period grating show a well-defined localized surface
+plasmon resonance (LSPR) dip at ~366 nm wavelength corresponding to the gelatin
+embedded silver nanoparticles of the grating structure. 
As the period of the
+grating is reduced, the LSPR dip becomes more prominent. The maximum first order
+diffraction efficiency (DE) and bandwidth for the 10 micrometer period grating
+are observed to be 4% and 400 nm, respectively, in the 350 nm to 800 nm
+wavelength range. The DE and bandwidth are reduced to 0.03% and 100 nm for the
+3.4 micrometer period grating. The DE profile is notably flat within the
+diffraction bandwidth for each of the gratings. The particular roles of LSPR
+absorption and of the varied grating period in shaping the profile of first
+order DE vs. wavelength are also assessed. Fabrication of such nano-scale
+structures in a large area using a conventional SEM and silver halide based
+films may provide a simple and efficient technique for various optical device
+applications.
+"
+Metrics On $S^2$ With Bounded $\|K_g\|_{L^1\log L^1}$ And Small $\|K_g-1\|_{L^1}$," In this short paper, we will study the convergence of a metric sequence $g_k$
+on $S^2$ with bounded $\int|K_{g_k}|\log(1+|K_{g_k}|)d\mu_{g_k}$ and small
+$\int|K_{g_k}-1|d\mu_{g_k}$. We will show that such a sequence is precompact.
+"
+Light traffic behavior under the power-of-two load balancing strategy: The case of heterogeneous servers," We consider a multi-server queueing system under the power-of-two policy with
+Poisson job arrivals, heterogeneous servers and a general job requirement
+distribution; each server operates under the first-come first-served policy and
+there are no buffer constraints. We analyze the performance of this system in
+light traffic by evaluating the first two light traffic derivatives of the
+average job response time. These expressions point to several interesting
+structural features associated with server heterogeneity in light traffic: For
+unequal capacities, the average job response time is seen to decrease for small
+values of the arrival rate, and the more diverse the server speeds, the greater
+the gain in performance. 
These theoretical findings are assessed through
+limited simulations.
+"
+Deep Learning with Dynamic Computation Graphs," Neural networks that compute over graph structures are a natural fit for
+problems in a variety of domains, including natural language (parse trees) and
+cheminformatics (molecular graphs). However, since the computation graph has a
+different shape and size for every input, such networks do not directly support
+batched training or inference. They are also difficult to implement in popular
+deep learning libraries, which are based on static data-flow graphs. We
+introduce a technique called dynamic batching, which not only batches together
+operations between different input graphs of dissimilar shape, but also between
+different nodes within a single input graph. The technique allows us to create
+static graphs, using popular libraries, that emulate dynamic computation graphs
+of arbitrary shape and size. We further present a high-level library of
+compositional blocks that simplifies the creation of dynamic graph models.
+Using the library, we demonstrate concise and batch-wise parallel
+implementations for a variety of models from the literature.
+"
+Requirements for Secure Clock Synchronization," This paper establishes a fundamental theory of secure clock synchronization.
+Accurate clock synchronization is the backbone of systems managing power
+distribution, financial transactions, telecommunication operations, database
+services, etc. Some clock synchronization (time transfer) systems, such as the
+Global Navigation Satellite Systems (GNSS), are based on one-way communication
+from a master to a slave clock. Others, such as the Network Time Protocol
+(NTP) and the IEEE 1588 Precision Time Protocol (PTP), involve two-way
+communication between the master and slave. This paper shows that all one-way
+time transfer protocols are vulnerable to replay attacks that can potentially
+compromise timing information. 
A set of conditions for secure two-way clock
+synchronization is proposed and proved to be necessary and sufficient. It is
+shown that IEEE 1588 PTP, although a two-way synchronization protocol, is not
+compliant with these conditions, and is therefore insecure. Requirements for
+secure IEEE 1588 PTP are proposed, and a second example protocol is offered to
+illustrate the range of compliant systems.
+"
+Stability of fractional-order nonlinear systems by Lyapunov direct method," In this paper, by using a characterization of functions having fractional
+derivatives, we propose a rigorous fractional Lyapunov function candidate method
+to analyze the stability of fractional-order nonlinear systems. First, we prove
+an inequality concerning the fractional derivatives of convex Lyapunov functions
+without assuming the existence of derivatives of the pseudo-states. Second,
+we establish fractional Lyapunov functions for fractional-order systems without
+assuming the global existence of solutions. Our theorems fill gaps in, and
+strengthen, results in some existing papers.
+"
+Scaling laws during collapse of a homopolymer: Lattice versus off-lattice," We present comparative results from simulations of a lattice and an
+off-lattice model of a homopolymer, in the context of kinetics of the collapse
+transition. Scaling laws related to the collapse time, cluster coarsening and
+aging behavior are compared. Although in both models the cluster growth is
+independent of temperature, the related exponents turn out to be different.
+Conversely, the aging and associated scaling properties are found to be
+universal, with the nonequilibrium autocorrelation exponent obeying a recently
+derived bound.
+"
+Efficient Convex Optimization with Membership Oracles," We consider the problem of minimizing a convex function over a convex set
+given access only to an evaluation oracle for the function and a membership
+oracle for the set. 
We give a simple algorithm which solves this problem with +$\tilde{O}(n^2)$ oracle calls and $\tilde{O}(n^3)$ additional arithmetic +operations. Using this result, we obtain more efficient reductions among the +five basic oracles for convex sets and functions defined by Grötschel, Lovasz +and Schrijver. +" +On the Prime Graph Question for Integral Group Rings of Conway simple groups," The Prime Graph Question for integral group rings asks if it is true that if +the normalized unit group of the integral group ring of a finite group $G$ +contains an element of order $pq$, for some primes $p$ and $q$, also $G$ +contains an element of that order. We answer this question for the three Conway +sporadic simple groups after reducing it to a combinatorial question about +Young tableaus and Littlewood-Richardson coefficients. This finishes work of V. +Bovdi, A. Konovalov and S. Linton. +" +Challenges in Data-to-Document Generation," Recent neural models have shown significant progress on the problem of +generating short descriptive texts conditioned on a small number of database +records. In this work, we suggest a slightly more difficult data-to-text +generation task, and investigate how effective current approaches are on this +task. In particular, we introduce a new, large-scale corpus of data records +paired with descriptive documents, propose a series of extractive evaluation +methods for analyzing performance, and obtain baseline results using current +neural generation methods. Experiments show that these models produce fluent +text, but fail to convincingly approximate human-generated documents. Moreover, +even templated baselines exceed the performance of these neural models on some +metrics, though copy- and reconstruction-based extensions lead to noticeable +improvements. 
+" +The covering type of closed surfaces and minimal triangulations," The notion of covering type was recently introduced by Karoubi and Weibel to +measure the complexity of a topological space by means of good coverings. When +X has the homotopy type of a finite CW-complex, its covering type coincides +with the minimum possible number of vertices of a simplicial complex homotopy +equivalent to X. In this article we compute the covering type of all closed +surfaces. Our results completely settle a problem posed by Karoubi and Weibel, +and shed more light on the relationship between the topology of surfaces and +the number of vertices of minimal triangulations from a homotopy point of view. +" +Relaxation heuristics for the set multicover problem with generalized upper bound constraints," We consider an extension of the set covering problem (SCP) introducing +(i)~multicover and (ii)~generalized upper bound (GUB)~constraints. For the +conventional SCP, the pricing method has been introduced to reduce the size of +instances, and several efficient heuristic algorithms based on such reduction +techniques have been developed to solve large-scale instances. However, GUB +constraints often make the pricing method less effective, because they often +prevent solutions from containing highly evaluated variables together. To +overcome this problem, we develop heuristic algorithms to reduce the size of +instances, in which new evaluation schemes of variables are introduced taking +account of GUB constraints. We also develop an efficient implementation of a +2-flip neighborhood local search algorithm that reduces the number of +candidates in the neighborhood without sacrificing the solution quality. In +order to guide the search to visit a wide variety of good solutions, we also +introduce a path relinking method that generates new solutions by combining two +or more solutions obtained so far. 
According to computational comparison on +benchmark instances, the proposed method succeeds in selecting a small number +of promising variables properly and performs quite effectively even for +large-scale instances having hard GUB constraints. +" +Optimal percolation on multiplex networks," Optimal percolation is the problem of finding the minimal set of nodes such +that if the members of this set are removed from a network, the network is +fragmented into non-extensive disconnected clusters. The solution of the +optimal percolation problem has direct applicability in strategies of +immunization in disease spreading processes, and influence maximization for +certain classes of opinion dynamical models. In this paper, we consider the +problem of optimal percolation on multiplex networks. The multiplex scenario +serves to realistically model various technological, biological, and social +networks. We find that the multilayer nature of these systems, and more +precisely multiplex characteristics such as edge overlap and interlayer +degree-degree correlation, profoundly changes the properties of the set of +nodes identified as the solution of the optimal percolation problem. +" +Nearly Spectral Spaces," We study some natural generalizations of the spectral spaces in the contexts +of commutative rings and distributive lattices. We obtain a topological +characterization for the spectra of commutative (not necessarily unitary) rings +and we find spectral versions for the up-spectral and down-spectral spaces. We +show that the duality between distributive lattices and Balbes-Dwinger spaces +is the co-equivalence associated to a pair of contravariant right adjoint +functors between suitable categories. +" +Speeding Up Latent Variable Gaussian Graphical Model Estimation via Nonconvex Optimizations," We study the estimation of the latent variable Gaussian graphical model +(LVGGM), where the precision matrix is the superposition of a sparse matrix and +a low-rank matrix. 
In order to speed up the estimation of the sparse plus +low-rank components, we propose a sparsity constrained maximum likelihood +estimator based on matrix factorization, and an efficient alternating gradient +descent algorithm with hard thresholding to solve it. Our algorithm is orders +of magnitude faster than the convex relaxation based methods for LVGGM. In +addition, we prove that our algorithm is guaranteed to linearly converge to the +unknown sparse and low-rank components up to the optimal statistical precision. +Experiments on both synthetic and genomic data demonstrate the superiority of +our algorithm over the state-of-the-art algorithms and corroborate our theory. +" +The third data release of the Kilo-Degree Survey and associated data products," The Kilo-Degree Survey (KiDS) is an ongoing optical wide-field imaging survey +with the OmegaCAM camera at the VLT Survey Telescope. It aims to image 1500 +square degrees in four filters (ugri). The core science driver is mapping the +large-scale matter distribution in the Universe, using weak lensing shear and +photometric redshift measurements. Further science cases include galaxy +evolution, Milky Way structure, detection of high-redshift clusters, and +finding rare sources such as strong lenses and quasars. Here we present the +third public data release (DR3) and several associated data products, adding +further area, homogenized photometric calibration, photometric redshifts and +weak lensing shear measurements to the first two releases. A dedicated pipeline +embedded in the Astro-WISE information system is used for the production of the +main release. Modifications with respect to earlier releases are described in +detail. Photometric redshifts have been derived using both Bayesian template +fitting, and machine-learning techniques. For the weak lensing measurements, +optimized procedures based on the THELI data reduction and lensfit shear +measurement packages are used. 
In DR3 stacked ugri images, weight maps, masks,
+and source lists for 292 new survey tiles (~300 sq.deg) are made available. The
+multi-band catalogue, including homogenized photometry and photometric
+redshifts, covers the combined DR1, DR2 and DR3 footprint of 440 survey tiles
+(447 sq.deg). Limiting magnitudes are typically 24.3, 25.1, 24.9, 23.8 (5 sigma
+in a 2 arcsec aperture) in ugri, respectively, and the typical r-band PSF size
+is less than 0.7 arcsec. The photometric homogenization scheme ensures accurate
+colors and an absolute calibration stable to ~2% for gri and ~3% in u.
+Separately released are a weak lensing shear catalogue and photometric
+redshifts based on two different machine-learning techniques.
+"
+Hydrogen Line Observations of Cometary Spectra at 1420 MHz," In 2016, the Center for Planetary Science proposed a hypothesis arguing that a
+comet and/or its hydrogen cloud was a strong candidate for the source of the
+Wow! Signal. From 27 November 2016 to 24 February 2017, the Center for
+Planetary Science conducted 200 observations in the radio spectrum to validate
+the hypothesis. The investigation discovered that comet 266P/Christensen
+emitted a radio signal at 1420.25 MHz. The results of this investigation,
+therefore, conclude that cometary spectra are detectable at 1420 MHz and, more
+importantly, that the 1977 Wow! Signal was a natural phenomenon from a Solar
+System body.
+"
+Pain-Free Random Differential Privacy with Sensitivity Sampling," Popular approaches to differential privacy, such as the Laplace and
+exponential mechanisms, calibrate randomised smoothing through global
+sensitivity of the target non-private function. Bounding such sensitivity is
+often a prohibitively complex analytic calculation. As an alternative, we
+propose a straightforward sampler for estimating sensitivity of non-private
+mechanisms. 
Since our sensitivity estimates hold with high probability, any
+mechanism that would be $(\epsilon,\delta)$-differentially private under
+bounded global sensitivity automatically achieves
+$(\epsilon,\delta,\gamma)$-random differential privacy (Hall et al., 2012),
+without any target-specific calculations required. We demonstrate on worked
+example learners how our usable approach adopts a naturally-relaxed privacy
+guarantee, while achieving more accurate releases even for non-private
+functions that are black-box computer programs.
+"
+Extremal Kaehler-Einstein metric for two-dimensional convex bodies," Given a convex body $K \subset \mathbb{R}^n$ with the barycenter at the
+origin, we consider the corresponding Kähler-Einstein equation $e^{-\Phi} =
+\det D^2 \Phi$. If $K$ is a simplex, then the Ricci tensor of the Hessian
+metric $D^2 \Phi$ is constant and equals $\frac{n-1}{4(n+1)}$. We conjecture
+that the Ricci tensor of $D^2 \Phi$ for arbitrary $K$ is uniformly bounded by
+$\frac{n-1}{4(n+1)}$ and verify this conjecture in the two-dimensional case.
+The general case remains open.
+"
+Socio-spatial Self-organizing Maps: Using Social Media to Assess Relevant Geographies for Exposure to Social Processes," Social media offers a unique window into attitudes like racism and
+homophobia, exposures to which are important, hard to measure and understudied
+social determinants of health. However, individual geo-located observations
+from social media are noisy and geographically inconsistent. Existing areas by
+which exposures are measured, like Zip codes, average over irrelevant
+administratively-defined boundaries. Hence, in order to enable studies of
+online social environmental measures like attitudes on social media and their
+possible relationship to health outcomes, first there is a need for a method to
+define the collective, underlying degree of social media attitudes by region. 
+To address this, we create the Socio-spatial-Self organizing map, ""SS-SOM"" +pipeline to best identify regions by their latent social attitude from Twitter +posts. SS-SOMs use neural embedding for text-classification, and augment +traditional SOMs to generate a controlled number of non-overlapping, +topologically-constrained and topically-similar clusters. We find that not only +are SS-SOMs robust to missing data, the exposure of a cohort of men who are +susceptible to multiple racism and homophobia-linked health outcomes, changes +by up to 42% using SS-SOM measures as compared to using Zip code-based +measures. +" +A Large-Scale Study of Language Models for Chord Prediction," We conduct a large-scale study of language models for chord prediction. +Specifically, we compare N-gram models to various flavours of recurrent neural +networks on a comprehensive dataset comprising all publicly available datasets +of annotated chords known to us. This large amount of data allows us to +systematically explore hyper-parameter settings for the recurrent neural +networks---a crucial step in achieving good results with this model class. Our +results show not only a quantitative difference between the models, but also a +qualitative one: in contrast to static N-gram models, certain RNN +configurations adapt to the songs at test time. This finding constitutes a +further step towards the development of chord recognition systems that are more +aware of local musical context than what was previously possible. +" +A Novel Method of Subgroup Identification by Combining Virtual Twins with GUIDE (VG) for Development of Precision Medicines," A lack of understanding of human biology creates a hurdle for the development +of precision medicines. To overcome this hurdle we need to better understand +the potential synergy between a given investigational treatment (vs. 
placebo or +active control) and various demographic or genetic factors, disease history and +severity, etc., with the goal of identifying those patients at increased risk +of exhibiting clinically meaningful treatment benefit. For this reason, we +propose the VG method, which combines the idea of an individual treatment +effect (ITE) from Virtual Twins (Foster, et al., 2011) with the unbiased +variable selection and cutoff value determination algorithm from GUIDE (Loh, et +al., 2015). Simulation results show the VG method has less variable selection +bias than Virtual Twins and higher statistical power than GUIDE Interaction in +the presence of prognostic variables with strong treatment effects. Type I +error and predictive performance of Virtual Twins, GUIDE and VG are compared +through the use of simulation studies. Results obtained after retrospectively +applying VG to data from a clinical trial also are discussed. +" +Critical Slowing Down of Quadrupole and Hexadecapole Orderings in Iron Pnictide Superconductor," Ultrasonic measurements have been carried out to investigate the critical +dynamics of structural and superconducting transitions due to degenerate +orbital bands in iron pnictide compounds with the formula +Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$. The attenuation coefficient +$\alpha_{\mathrm{L}[110]}$ of the longitudinal ultrasonic wave for +$(C_{11}+C_{12}+2C_{66})/2$ for $x = 0.036$ reveals the critical slowing down +of the relaxation time around the structural transition at $T_\mathrm{s} = 65$ +K, which is caused by ferro-type ordering of the quadrupole $O_{x'^2-y'^2}$ +coupled to the strain $\varepsilon_{xy}$. 
The attenuation coefficient +$\alpha_{66}$ of the transverse ultrasonic wave for $C_{66}$ for $x = 0.071$ +also exhibits the critical slowing down around the superconducting transition +at $T_\mathrm{SC} = 23$ K, which is caused by ferro-type ordering of the +hexadecapole $H_z^\alpha \bigl( \boldsymbol{r}_i, \boldsymbol{r}_j \bigr) = +O_{x'y'}\bigl( \boldsymbol{r}_i \bigr) O_{x'^2 - y'^2}\bigl( \boldsymbol{r}_j +\bigr) ++ O_{x'^2 - y'^2}\bigl( \boldsymbol{r}_i \bigr) O_{x'y'}\bigl( +\boldsymbol{r}_j \bigr)$ of the bound two-electron state coupled to the +rotation $\omega_{xy}$. It is proposed that the hexadecapole ordering +associated with the superconductivity brings about spontaneous rotation of the +macroscopic superconducting state with respect to the host tetragonal lattice. +" +On the Interaction between Autonomous Mobility-on-Demand and Public Transportation Systems," In this paper we study models and coordination policies for intermodal +Autonomous Mobility-on-Demand (AMoD), wherein a fleet of self-driving vehicles +provides on-demand mobility jointly with public transit. Specifically, we first +present a network flow model for intermodal AMoD, where we capture the coupling +between AMoD and public transit and the goal is to maximize social welfare. +Second, leveraging such a model, we design a pricing and tolling scheme that +allows to achieve the social optimum under the assumption of a perfect market +with selfish agents. Finally, we present a real-world case study for New York +City. Our results show that the coordination between AMoD fleets and public +transit can yield significant benefits compared to an AMoD system operating in +isolation. +" +"The class of second order quasilinear equations: models, solutions and background of classification"," The paper is concerned with the unsteady solutions to the model of mutually +penetrating continua and quasilinear hyperbolic modification of the Burgers +equation (QHMB). 
The studies were focused on the peculiar solutions of the models
+in question. On the basis of these models and their solutions, ideas for a
+classification of second-order quasilinear models were developed.
+"
+Solving Almost all Systems of Random Quadratic Equations," This paper deals with finding an $n$-dimensional solution $x$ to a system of
+quadratic equations of the form $y_i=|\langle{a}_i,x\rangle|^2$ for $1\le i \le
+m$, which is also known as phase retrieval and is NP-hard in general. We put
+forth a novel procedure for minimizing the amplitude-based least-squares
+empirical loss that starts with a weighted maximal correlation initialization
+obtainable with a few power or Lanczos iterations, followed by successive
+refinements based upon a sequence of iteratively reweighted (generalized)
+gradient iterations. The two (both the initialization and gradient flow) stages
+distinguish themselves from prior contributions by the inclusion of a fresh
+(re)weighting regularization technique. The overall algorithm is conceptually
+simple, numerically scalable, and easy-to-implement. For certain random
+measurement models, the novel procedure is shown capable of finding the true
+solution $x$ in time proportional to reading the data $\{(a_i;y_i)\}_{1\le i
+\le m}$. This holds with high probability and without extra assumptions on the
+signal $x$ to be recovered, provided that the number $m$ of equations is some
+constant $c>0$ times the number $n$ of unknowns in the signal vector, namely,
+$m>cn$. Empirically, the upshots of this contribution are: i) (almost) $100\%$
+perfect signal recovery in the high-dimensional (say e.g., $n\ge 2,000$) regime
+given only an information-theoretically minimal number of noiseless equations,
+namely, $m=2n-1$ in the real-valued Gaussian case; and, ii) (nearly) optimal
+statistical accuracy in the presence of additive noise of bounded support. 
+Finally, substantial numerical tests using both synthetic data and real images +corroborate markedly improved signal recovery performance and computational +efficiency of our novel procedure relative to state-of-the-art approaches. +" +Predicting the outcomes of fuel drop impact on heated surfaces using SPH simulation," The impact of liquid drops on a heated solid surface is of great importance +in many engineering applications. This paper describes the simulation of the +drop-wall interaction using the smoothed particle hydrodynamics (SPH) method. +The SPH method is a Lagrangian mesh-free method that can be used to solve the +fluid equations. A vaporization model based on the SPH formulation was also +developed and implemented. A parametric study was conducted to characterize the +effects of impact velocity and wall temperature on the impact outcome. The +present numerical method was able to predict different outcomes, such as +deposition, splash, breakup, and rebound (i.e., Leidenfrost phenomenon). The +present numerical method was used to construct a regime diagram for describing +the impact of an iso-octane drop on a heated surface at various Weber numbers +and wall temperatures. +" +Learning in the Machine: the Symmetries of the Deep Learning Channel," In a physical neural system, learning rules must be local both in space and +time. In order for learning to occur, non-local information must be +communicated to the deep synapses through a communication channel, the deep +learning channel. We identify several possible architectures for this learning +channel (Bidirectional, Conjoined, Twin, Distinct) and six symmetry challenges: +1) symmetry of architectures; 2) symmetry of weights; 3) symmetry of neurons; +4) symmetry of derivatives; 5) symmetry of processing; and 6) symmetry of +learning rules. 
Random backpropagation (RBP) addresses the second and third
+symmetries, and some of its variations, such as skipped RBP (SRBP), address the
+first and the fourth symmetries. Here we address the last two desirable
+symmetries, showing through simulations that they can be achieved and that the
+learning channel is particularly robust to symmetry variations. Specifically,
+random backpropagation and its variations can be performed with the same
+non-linear neurons used in the main input-output forward channel, and the
+connections in the learning channel can be adapted using the same algorithm
+used in the forward channel, removing the need for any specialized hardware in
+the learning channel. Finally, we provide mathematical results in simple cases
+showing that the learning equations in the forward and backward channels
+converge to fixed points, for almost any initial conditions. In symmetric
+architectures, if the weights in both channels are small at initialization,
+adaptation in both channels leads to weights that are essentially symmetric
+during and after learning. Biological connections are discussed.
+"
+Coset spaces of metrizable groups," We characterize coset spaces of topological groups which are coset spaces of
+(separable) metrizable groups and complete metrizable (Polish) groups. In addition,
+it is shown that for a $G$-space $X$ with a $d$-open action there is a
+topological group $H$ of weight and cardinality less than or equal to the
+weight of $X$ such that $H$ admits a $d$-open action on $X$. This is further
+applied to show that if $X$ is a separable metrizable coset space, then $X$ has
+a Polish extension which is a coset space of a Polish group.
+
" +A Minimum Reconfiguration Probability Routing Algorithm for RWA in All-Optical Networks," In this paper, we present a detailed study of the Minimum Reconfiguration
+Probability Routing (MRPR) algorithm and evaluate its performance in
+comparison with the Adaptive unconstrained routing (AUR) and Least Loaded routing
+(LLR) algorithms. We minimize the effects of link and router
+failures in the network under changing load conditions, and we assess the
+probability of service and the number of light path failures due to link or route
+failure on a Wavelength Interchange (WI) network. The computational complexity is
+reduced by using Kalman Filter (KF) techniques. The minimum reconfiguration
+probability routing (MRPR) algorithm selects the most reliable routes and assigns
+wavelengths to connections in a manner that efficiently utilizes the established
+light paths (LP), considering all possible requests.
+"
+Emergence of correlated proton tunneling in water ice," Several experimental and theoretical studies report instances of concerted or
+correlated multiple proton tunneling in solid phases of water. Here, we
+construct a pseudo-spin model for the quantum motion of protons in a hexameric
+H$_2$O ring and extend it to open system dynamics that takes environmental
+effects into account in the form of O$-$H stretch vibrations. We approach the
+problem of correlations in tunneling using quantum information theory in a
+departure from previous studies. Our formalism enables us to quantify the
+coherent proton mobility around the hexagonal ring by one of the principal
+measures of coherence, the $l_1$ norm of coherence. The nature of the pairwise
+pseudo-spin correlations underlying the overall mobility is further
+investigated within this formalism. We show that the classical correlations of
+the individual quantum tunneling events in the long-time limit are sufficient to
+capture the behaviour of coherent proton mobility observed in low-temperature
+experiments.
We conclude that long-range intra-ring interactions do not appear +to be a necessary condition for correlated proton tunneling in water ice. +" +Constrained path-finding and structure from acyclicity," This note presents several results in graph theory inspired by the author's +work in the proof theory of linear logic; these results are purely +combinatorial and do not involve logic. +We show that trails avoiding forbidden transitions and rainbow paths for +complete multipartite color classes can be found in linear time, whereas +finding rainbow paths is NP-complete for any other restriction on color +classes. For the tractable cases, we also state new structural properties +equivalent to Kotzig's theorem on bridges in unique perfect matchings. +We also exhibit a connection between blossoms and bridge deletion orders in +unique perfect matchings. +" +Constraints on the optical polarization source in the luminous non-blazar quasar 3C 323.1 (PG 1545+210) from the photometric and polarimetric variability," We examine the optical photometric and polarimetric variability of the +luminous type 1 non-blazar quasar 3C 323.1 (PG 1545+210). Two optical +spectro-polarimetric measurements taken during the periods 1996$-$98 and 2003 +combined with a $V$-band imaging polarimetric measurement taken in 2002 reveal +that (1) as noted in the literature, the polarization of 3C 323.1 is confined +only to the continuum emission, that is, the emission from the broad line +region is unpolarized; (2) the polarized flux spectra show evidence of a +time-variable broad absorption feature in the wavelength range of the Balmer +continuum and other recombination lines; (3) weak variability in the +polarization position angle ($PA$) of $\sim$ 4 deg over a time-scale of 4$-$6 +years is observed; and (4) the V-band total flux and the polarized flux show +highly correlated variability over a time-scale of one year. 
Taking the +above-mentioned photometric and polarimetric variability properties and the +results from previous studies into consideration, we propose a geometrical +model for the polarization source in 3C 323.1, in which an equatorial absorbing +region and an axi-asymmetric equatorial electron-scattering region are assumed +to be located between the accretion disc and the broad line region. The +scattering/absorbing regions can perhaps be attributed to the accretion disc +wind or flared disc surface, but further polarimetric monitoring observations +for 3C~323.1 and other quasars with continuum-confined polarization are needed +to probe the true physical origins of these regions. +" +Cliff-Weiss Inequalities and the Zassenhaus Conjecture," Let $N$ be a nilpotent normal subgroup of the finite group $G$. Assume that +$u$ is a unit of finite order in the integral group ring $\mathbb{Z} G$ of $G$ +which maps to the identity under the linear extension of the natural +homomorphism $G \rightarrow G/N$. We show how a result of Cliff and Weiss can +be used to derive linear inequalities on the partial augmentations of $u$ and +apply this to the study of the Zassenhaus Conjecture. This conjecture states +that any unit of finite order in $\mathbb{Z} G$ is conjugate in the rational +group algebra of $G$ to an element in $\pm G$. +" +Flow Navigation by Smart Microswimmers via Reinforcement Learning," Smart active particles can acquire some limited knowledge of the fluid +environment from simple mechanical cues and exert a control on their preferred +steering direction. Their goal is to learn the best way to navigate by +exploiting the underlying flow whenever possible. As an example, we focus our +attention on smart gravitactic swimmers. These are active particles whose task +is to reach the highest altitude within some time horizon, given the +constraints enforced by fluid mechanics. 
By means of numerical experiments, we +show that swimmers indeed learn nearly optimal strategies just by experience. A +reinforcement learning algorithm allows particles to learn effective strategies +even in difficult situations when, in the absence of control, they would end up +being trapped by flow structures. These strategies are highly nontrivial and +cannot be easily guessed in advance. This Letter illustrates the potential of +reinforcement learning algorithms to model adaptive behavior in complex flows +and paves the way towards the engineering of smart microswimmers that solve +difficult navigation problems. +" +Synthesizing a Clock Signal with Reactions---Part II: Frequency Alteration Based on Gears," On a chassis of gear model, we have offered a quantitative description for +our method to synthesize a chemical clock signal with various duty cycles in +Part I. As Part II of the study, this paper devotes itself in proposing a +design methodology to handle frequency alteration issues for the chemical +clock, including both frequency division and frequency multiplication. Several +interesting examples are provided for a better explanation of our contribution. +All the simulation results verify and validate the correctness and efficiency +of our proposal. +" +A characterization of affinely regular polygons," In 1970, Coxeter gave a short and elegant geometric proof showing that if +$p_1, p_2, \ldots, p_n$ are vertices of an $n$-gon $P$ in cyclic order, then +$P$ is affinely regular if, and only if there is some $\lambda \geq 0$ such +that $p_{j+2}-p_{j-1} = \lambda (p_{j+1}-p_j)$ for $j=1,2,\ldots, n$. The aim +of this paper is to examine the properties of polygons whose vertices +$p_1,p_2,\ldots,p_n \in \mathbb{C}$ satisfy the property that +$p_{j+m_1}-p_{j+m_2} = w (p_{j+k}-p_j)$ for some $w \in \mathbb{C}$ and +$m_1,m_2,k \in \mathbb{Z}$. 
In particular, we show that in `most' cases this
+implies that the polygon is affinely regular, but in some special cases there
+are polygons which satisfy this property but are not affinely regular. The
+proofs are based on the use of linear algebraic and number theoretic tools. In
+addition, we apply our method to characterize polytopes with certain symmetry
+groups.
+"
+On detection and annihilation of spherical virus embedded in a fluid matrix at low and moderate Reynolds number," The effect of high and low Reynolds numbers on the low frequency
+vibrational modes of a spherical virus embedded in an aqueous medium is studied. We have
+used an analytical approach based on fluid dynamics and classical Lamb theory to
+calculate the vibrational modes of a virus with the material parameters of a lysozyme
+crystal in water. A clear size effect on the vibrational modes is observed.
+The estimated damping time, which is of the order of a picosecond, varies with
+the Reynolds number and shows a high value for a critical Reynolds number. The
+stationary eigenfrequency regions are observed for every quantum number l and n,
+suggesting the most probable Re ranges for acoustic treatment of viruses in
+order to detect or annihilate the virus using the corresponding virus-water
+configuration.
+"
+Robust two-level system control by a detuned and chirped laser pulse," We propose and demonstrate a robust control scheme based on an ultrafast nonadiabatic
+chirped laser pulse, designed for targeting coherent superpositions of
+two-level systems. Robustness against power fluctuations is proved by our
+numerical study and a proof-of-principle experiment performed with femtosecond
+laser interaction on cold atoms. The final driven dynamics exhibits a
+cusp on the Bloch sphere, corresponding to a zero curvature of fidelity. This
+solution is particularly simple and thus applicable to a wide range of
+potential applications.
+" +Analyzing Diffusion and Flow-driven Instability using Semidefinite Programming," Diffusion and flow-driven instability, or transport-driven instability, is +one of the central mechanisms to generate inhomogeneous gradient of +concentrations in spatially distributed chemical systems. However, verifying +the transport-driven instability of reaction-diffusion-advection systems +requires checking the Jacobian eigenvalues of infinitely many Fourier modes, +which is computationally intractable. To overcome this limitation, this paper +proposes mathematical optimization algorithms that determine the +stability/instability of reaction-diffusion-advection systems by finite steps +of algebraic calculations. Specifically, the stability/instability analysis of +Fourier modes is formulated as a sum-of-squares (SOS) optimization program, +which is a class of convex optimization whose solvers are widely available as +software packages. The optimization program is further extended for facile +computation of the destabilizing spatial modes. This extension allows for +predicting and designing the shape of concentration gradient without simulating +the governing equations. The streamlined analysis process of self-organized +pattern formation is demonstrated with a simple illustrative reaction model +with diffusion and advection. +" +Petrographic and geochemical evidence for multiphase formation of carbonates in the Martian orthopyroxenite Allan Hills 84001," Martian meteorites can provide valuable information about past environmental +conditions on Mars. Allan Hills 84001 formed more than 4 Gyr ago, and owing to +its age and long exposure to the Martian environment, this meteorite has +features that may record early processes. These features include a highly +fractured texture, gases trapped during one or more impact events or during +formation of the rock, and spherical Fe-Mg-Ca carbonates. 
Here we have +concentrated on providing new insights into the context of these carbonates +using a range of techniques to explore whether they record multiple +precipitation and shock events. The petrographic features and compositional +properties of these carbonates indicate that at least two pulses of Mg- and +Fe-rich solutions saturated the rock. Those two generations of carbonates can +be distinguished by a very sharp change in compositions, from being rich in Mg +and poor in Fe and Mn, to being poor in Mg and rich in Fe and Mn. Between these +two generations of carbonate is evidence for fracturing and local corrosion. +" +Distributed Random-Fixed Projected Algorithm for Constrained Optimization Over Digraphs," This paper is concerned with a constrained optimization problem over a +directed graph (digraph) of nodes, in which the cost function is a sum of local +objectives, and each node only knows its local objective and constraints. To +collaboratively solve the optimization, most of the existing works require the +interaction graph to be balanced or ""doubly-stochastic"", which is quite +restrictive and not necessary as shown in this paper. We focus on an epigraph +form of the original optimization to resolve the ""unbalanced"" problem, and +design a novel two-step recursive algorithm with a simple structure. Under +strongly connected digraphs, we prove that each node asymptotically converges +to some common optimal solution. Finally, simulations are performed to +illustrate the effectiveness of the proposed algorithms. +" +Detecting Cancer Metastases on Gigapixel Pathology Images," Each year, the treatment decisions for more than 230,000 breast cancer +patients in the U.S. hinge on whether the cancer has metastasized away from the +breast. Metastasis detection is currently performed by pathologists reviewing +large expanses of biological tissues. This process is labor intensive and +error-prone. 
We present a framework to automatically detect and localize tumors +as small as 100 x 100 pixels in gigapixel microscopy images sized 100,000 x +100,000 pixels. Our method leverages a convolutional neural network (CNN) +architecture and obtains state-of-the-art results on the Camelyon16 dataset in +the challenging lesion-level tumor detection task. At 8 false positives per +image, we detect 92.4% of the tumors, relative to 82.7% by the previous best +automated approach. For comparison, a human pathologist attempting exhaustive +search achieved 73.2% sensitivity. We achieve image-level AUC scores above 97% +on both the Camelyon16 test set and an independent set of 110 slides. In +addition, we discover that two slides in the Camelyon16 training set were +erroneously labeled normal. Our approach could considerably reduce false +negative rates in metastasis detection. +" +Edge insulating topological phases in a two-dimensional long-range superconductor," We study the zero-temperature phase diagram of a two dimensional square +lattice loaded by spinless fermions, with nearest neighbor hopping and +algebraically decaying pairing. We find that for sufficiently long-range +pairing, new phases, not continuously connected with any short-range phase, +occur, signaled by the violation of the area law for the Von Neumann entropy, +by semi-integer Chern numbers, and by edge modes with nonzero mass. The latter +feature results in the absence of single-fermion edge conductivity, present +instead in the short- range limit. The definition of a topology in the bulk and +the presence of a bulk-boundary correspondence is still suggested for the +long-range phases. Recent experimental proposals and advances open the +stimulating possibility to probe the described long-range effects in +next-future realistic set-ups. 
+
" +Using mobile service for supply chain management: a survey and challenges," Efficient supply chain management calls for robust analytical and optimization
+models to automate its processes. Therefore, information technology is an
+essential ingredient that integrates these tools into the supply chain. With the
+emergence of wireless technologies and the increasing reliability of mobile
+devices, mobile web services open a promising horizon in the face of economic
+challenges. They offer new personalized services to each actor in the supply
+chain on their mobile devices, anytime and anywhere. This paper presents a
+literature review of mobile web services implemented in an industrial context,
+based on the supply chain management approach. First, a broad definition of
+mobile web services and some proposed architectures are presented. Then the paper
+discusses generic related work on mobile web services, focusing on supply
+chain management. Finally, some challenges for m-service-oriented supply chain
+management are proposed.
+"
+Statistical distribution of roots of a polynomial modulo primes," Let $f(x)=x^n+a_{n-1}x^{n-1}+\dots+a_0$ be an irreducible polynomial with
+integer coefficients. For a prime $p$ for which $f(x)$ is fully splitting
+modulo $p$, we consider the $n$ roots $r_i$ of $f(x)\equiv 0\bmod p$ with $0 \le
+r_1\le\dots\le r_n2.18K using a local anemometer. This
+temperature range spans relative densities of superfluid from 96% down to 0%,
+allowing us to test numerical predictions of enhancement or depletion of
+intermittency at intermediate superfluid fractions. Using the so-called
+extended self-similarity method, scaling exponents of structure functions have
+been calculated. No evidence of temperature dependence is found for these
+scaling exponents in the upper part of the inertial cascade, where turbulence
+is well developed and fully resolved by the probe.
This result supports the +picture of a profound analogy between classical and quantum turbulence in their +inertial range, including the violation of self-similarities associated with +inertial-range intermittency. +" +Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees," Deep Reinforcement Learning (DRL) has achieved impressive success in many +applications. A key component of many DRL models is a neural network +representing a Q function, to estimate the expected cumulative reward following +a state-action pair. The Q function neural network contains a lot of implicit +knowledge about the RL problems, but often remains unexamined and +uninterpreted. To our knowledge, this work develops the first mimic learning +framework for Q functions in DRL. We introduce Linear Model U-trees (LMUTs) to +approximate neural network predictions. An LMUT is learned using a novel +on-line algorithm that is well-suited for an active play setting, where the +mimic learner observes an ongoing interaction between the neural net and the +environment. Empirical evaluation shows that an LMUT mimics a Q function +substantially better than five baseline methods. The transparent tree structure +of an LMUT facilitates understanding the network's learned knowledge by +analyzing feature influence, extracting rules, and highlighting the +super-pixels in image inputs. +" +Solution to dynamic economic dispatch with prohibited operating zones via MILP," Dynamic economic dispatch (DED) problem considering prohibited operating +zones (POZ), ramp rate constraints, transmission losses and spinning reserve +constraints is a complicated non-linear problem which is difficult to solve +efficiently. In this paper, a mixed integer linear programming (MILP) method is +proposed to solve such a DED problem. Firstly, a novel MILP formulation for DED +problem without considering the transmission losses, denoted by MILP-1, is +presented by using perspective cut reformulation technique. 
When the
+transmission losses are considered, the quadratic terms in the transmission
+losses are replaced by their first order Taylor expansions, and then an MILP
+formulation for DED considering the transmission losses, denoted by MILP-2, is
+obtained. Based on MILP-1 and MILP-2, an MILP-iteration algorithm (MILP-IA) is
+proposed to solve the complicated DED problem. The effectiveness of MILP-1
+and MILP-IA is assessed on several cases, and the simulation results show that
+both of them can obtain competitive solutions in a short time.
+"
+Efficient Bayesian Inference of Sigmoidal Gaussian Cox Processes," We present an approximate Bayesian inference approach for estimating the
+intensity of an inhomogeneous Poisson process, where the intensity function is
+modelled using a Gaussian process (GP) prior via a sigmoid link function.
+Augmenting the model using a latent marked Poisson process and Pólya--Gamma
+random variables, we obtain a representation of the likelihood which is
+conjugate to the GP prior. We approximate the posterior using a free--form mean
+field approximation together with the framework of sparse GPs. Furthermore, as an
+alternative approximation we suggest a sparse Laplace approximation of the
+posterior, for which an efficient expectation--maximisation algorithm is
+derived to find the posterior's mode. Results of both algorithms compare well
+with exact inference obtained by a Markov Chain Monte Carlo sampler and a
+standard variational Gauss approach, while being one order of magnitude faster.
With a view towards applications, we show how duality
+is preserved when reducing these problems over finite-dimensional bases. We
+finally perform a rigorous perturbation analysis of those linear programming
+problems, and highlight, as a numerical example, the influence of smile
+extrapolation on the bounds of exotic options.
+"
+Reciprocal Translation between SAR and Optical Remote Sensing Images with Cascaded-Residual Adversarial Networks," Despite the advantages of all-weather and all-day high-resolution imaging,
+synthetic aperture radar (SAR) images are much less viewed and used by the general
+public because human vision is not adapted to the microwave scattering phenomenon.
+However, expert interpreters can be trained by comparing side-by-side SAR and
+optical images to learn the mapping rules from SAR to optical. This paper
+attempts to develop machine intelligence that is trainable with large-volume
+co-registered SAR and optical images to translate SAR images to optical versions
+for assisted SAR image interpretation. Reciprocal SAR-Optical image translation
+is a challenging task because it is a raw data translation between two physically
+very different sensing modalities. This paper proposes a novel reciprocal
+adversarial network scheme where cascaded residual connections and a hybrid
+L1-GAN loss are employed. It is trained and tested on both spaceborne GF-3 and
+airborne UAVSAR images. Results are presented for datasets of different
+resolutions and polarizations and compared with other state-of-the-art methods.
+The FID is used to quantitatively evaluate the translation performance. The
+possibility of unsupervised learning with unpaired SAR and optical images is
+also explored. Results show that the proposed translation network works well
+under many scenarios and it could potentially be used for assisted SAR
+interpretation.
+
" +Molecular Computing for Markov Chains," In this paper, a methodology is presented for implementing arbitrarily
+constructed time-homogeneous Markov chains with biochemical systems. Not only
+discrete but also continuous-time Markov chains are allowed to be computed. By
+employing chemical reaction networks (CRNs) as a programmable language,
+molecular concentrations serve to denote both input and output values. One
+reaction network is elaborately designed for each chain. The evolution of the
+species' concentrations over time closely matches the transient solutions of the
+target continuous-time Markov chain, while equilibrium concentrations can
+indicate the steady state probabilities. Additionally, second-order Markov
+chains are considered for implementation, with bimolecular reactions rather
+than unary ones. An original scheme is put forward to compile unimolecular
+systems to DNA strand displacement reactions for the sake of future physical
+implementations. Deterministic, stochastic and DNA simulations are provided to
+demonstrate correctness, validity and feasibility.
+"
+Theory of ground states for classical Heisenberg spin systems III," We extend the theory of ground states of classical Heisenberg spin systems
+published previously by a closer investigation of the convex Gram set of $N$
+spins. This is relevant for the present purpose since the set of ground states
+of a given spin system corresponds to a face of the Gram set. The investigation of
+the Gram set is facilitated by the determination of its symmetry group. The
+case of the general spin triangle is completely solved and illustrated by a
+couple of examples.
+"
+Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks," Recurrent neural networks have gained widespread use in modeling sequence
+data across various domains.
While many successful recurrent architectures
+employ a notion of gating, the exact mechanism that enables such remarkable
+performance is not well understood. We develop a theory for signal propagation
+in recurrent networks after random initialization using a combination of mean
+field theory and random matrix theory. To simplify our discussion, we introduce
+a new RNN cell with a simple gating mechanism that we call the minimalRNN and
+compare it with vanilla RNNs. Our theory allows us to define a maximum
+timescale over which RNNs can remember an input. We show that this theory
+predicts trainability for both recurrent architectures. We show that gated
+recurrent networks feature a much broader, more robust, trainable region than
+vanilla RNNs, which corroborates recent experimental findings. We then
+develop a closed-form critical initialization scheme that achieves dynamical
+isometry in both vanilla RNNs and minimalRNNs, and show that this results in
+significant improvement in training dynamics. Finally, we demonstrate that
+the minimalRNN achieves comparable performance to its more complex
+counterparts, such as LSTMs or GRUs, on a language modeling task.
+"
+Foundation of relativistic astrophysics: Curvature of Riemannian Space versus Relativistic Quantum Field in Minkowski Space," The common basis for many high energy astrophysical phenomena is the theory
+of gravitation, for which in modern theoretical physics there are two main
+directions: Einstein's geometrical and Feynman's nonmetric field approaches to the
+description of the gravitational interaction. Though classical relativistic effects
+have the same values in both approaches, there are dramatically different
+effects predicted by GRT and FGT for relativistic astrophysics.
Crucial
+observational tests which allow one to probe the physics of the gravitational
+interaction are discussed, including the detection of gravitational waves by
+the advanced LIGO-Virgo antennas and Event Horizon Telescope observations of active
+galactic nuclei. Forthcoming relativistic astrophysics can open new ways of
+understanding gravitation and unifying the fundamental physical
+interactions.
+"
+Consistency of survival tree and forest models: splitting bias and correction," Random survival forests and survival trees are popular models in statistics
+and machine learning. However, there is a lack of general understanding
+regarding consistency, splitting rules and the influence of the censoring
+mechanism. In this paper, we investigate the statistical properties of existing
+methods from several interesting perspectives. First, we show that traditional
+splitting rules with censored outcomes rely on a biased estimation of the
+within-node failure distribution. To exactly quantify this bias, we develop a
+concentration bound of the within-node estimation based on non-i.i.d. samples
+and apply it to the entire forest. Second, we analyze the entanglement between
+the failure and censoring distributions caused by univariate splits, and show
+that without correcting the bias at an internal node, survival tree and forest
+models can still enjoy consistency under suitable conditions. In particular, we
+demonstrate this property in two cases: a finite-dimensional case where the
+splitting variables and cutting points are chosen randomly, and a
+high-dimensional case where the covariates are weakly correlated. Our results
+can also degenerate into an independent covariate setting, which is commonly
+used in the random forest literature for high-dimensional sparse models.
+However, it may be unavoidable that the convergence rate depends on the total
+number of variables in the failure and censoring distributions.
Third, we +propose a new splitting rule that compares bias-corrected cumulative hazard +functions at each internal node. We show that the rate of consistency of this +new model depends only on the number of failure variables, which improves on the +non-bias-corrected versions. We perform simulation studies to confirm that this +can substantially improve the prediction error. +" +Missing Data and Prediction," Missing data are a common problem for both the construction and +implementation of a prediction algorithm. Pattern mixture kernel submodels +(PMKS) - a series of submodels for every missing data pattern that are fit +using only data from that pattern - are a computationally efficient remedy for +both stages. Here we show that PMKS yield the most predictive algorithm among +all standard missing data strategies. Specifically, we show that the expected +loss of a forecasting algorithm is minimized when each pattern-specific loss is +minimized. Simulations and a re-analysis of the SUPPORT study confirm that +PMKS generally outperform zero-imputation, mean-imputation, complete-case +analysis, complete-case submodels, and even multiple imputation (MI). The +degree of improvement is highly dependent on the missingness mechanism and the +effect size of missing predictors. When the data are Missing at Random (MAR), MI +can yield comparable forecasting performance but generally requires a larger +computational cost. We see that predictions from the PMKS are equivalent to the +limiting predictions for a MI procedure that uses a mean model dependent on +missingness indicators (the MIMI model). Consequently, the MIMI model can be +used to assess the MAR assumption in practice. The focus of this paper is on +out-of-sample prediction behavior; implications for model inference are only +briefly explored.
+" +Multidimensional Urban Segregation - Toward A Neural Network Measure," We introduce a multidimensional, neural-network approach to reveal and +measure urban segregation phenomena, based on the Self-Organizing Map algorithm +(SOM). The multidimensionality of SOM allows one to apprehend a large number of +variables simultaneously, defined on census or other types of statistical +blocks, and to perform clustering along them. Levels of segregation are then +measured through correlations between distances on the neural network and +distances on the actual geographical map. Further, the stochasticity of SOM +enables one to quantify levels of heterogeneity across census blocks. We +illustrate this new method on data available for the city of Paris. +" +Reconstructing networks with unknown and heterogeneous errors," The vast majority of network datasets contain errors and omissions, although +this is rarely incorporated in traditional network analysis. Recently, an +increasing effort has been made to fill this methodological gap by developing +network reconstruction approaches based on Bayesian inference. These +approaches, however, rely on assumptions of uniform error rates and on direct +estimations of the existence of each edge via repeated measurements, something +that is currently unavailable for the majority of network data. Here we develop +a Bayesian reconstruction approach that lifts these limitations by not only +allowing for heterogeneous errors, but also for single edge measurements +without direct error estimates. Our approach works by coupling the inference +approach with structured generative network models, which enable the +correlations between edges to be used as reliable uncertainty estimates.
+Although our approach is general, we focus on the stochastic block model as the +basic generative process, from which efficient nonparametric inference can be +performed and which yields a principled method to infer hierarchical community +structure from noisy data. We demonstrate the efficacy of our approach with a +variety of empirical and artificial networks. +" +Distributed Active State Estimation with User-Specified Accuracy," In this paper, we address the problem of controlling a network of mobile +sensors so that a set of hidden states are estimated up to a user-specified +accuracy. The sensors take measurements and fuse them online using an +Information Consensus Filter (ICF). At the same time, the local estimates guide +the sensors to their next best configuration. This leads to an LMI-constrained +optimization problem that we solve by means of a new distributed random +approximate projections method. The new method is robust to the state +disagreement errors that exist among the robots as the ICF fuses the collected +measurements. Assuming that the noise corrupting the measurements is zero-mean +and Gaussian and that the robots are self-localized in the environment, the +integrated system converges to the next best positions from where new +observations will be taken. This process is repeated with the robots taking a +sequence of observations until the hidden states are estimated up to the +desired user-specified accuracy. We present simulations of sparse landmark +localization, where the robotic team achieves the desired estimation tolerances +while exhibiting interesting emergent behavior.
Assuming the abc conjecture of Masser and Oesterlé, we +completely characterize those triples (q, n, l) for which there are infinitely +many solutions b. In all cases predicted by the abc conjecture, we are able +(without any assumptions) to prove there are indeed infinitely many solutions. +" +Observing the sky at extremely high energies with the Cherenkov Telescope Array: Status of the GCT project," The Cherenkov Telescope Array is the main global project of ground-based +gamma-ray astronomy for the coming decades. Performance will be significantly +improved relative to present instruments, allowing a new insight into the +high-energy Universe [1]. The nominal CTA southern array will include a +sub-array of seventy 4 m telescopes spread over a few square kilometers to +study the sky at extremely high energies, with the opening of a new window in +the multi-TeV energy range. The Gamma-ray Cherenkov Telescope (GCT) is one of +the proposed telescope designs for that sub-array. The GCT prototype recorded +its first Cherenkov light on sky in 2015. After an assessment phase in 2016, +new observations have been performed successfully in 2017. The GCT +collaboration plans to install its first telescopes and cameras on the CTA site +in Chile in 2018-2019 and to contribute a number of telescopes to the +subsequent CTA production phase. +" +"Light bending, static dark energy and related uniqueness of Schwarzschild-de Sitter spacetime"," Since the Schwarzschild-de Sitter spacetime is static inside the cosmological +event horizon, if the dark energy state parameter is sufficiently close to +$-1$, apparently one could still expect an effectively static geometry, in the +attraction dominated region inside the maximum turn around radius, $R_{\rm TA, +max}$, of a cosmic structure. We take the first order metric derived recently +assuming a static and ideal dark energy fluid with equation of state +$P(r)=\alpha\rho(r)$ as a source in Ref. 
[1], which reproduced the expression +for $R_{\rm TA, max}$ found earlier in the cosmological McVittie spacetime. +Here we show that the equality originates from the equivalence of geodesic +motion in these two backgrounds, in the non-relativistic regime. We extend this +metric up to the third order and compute the bending of light using the +Rindler-Ishak method. For $ \alpha\neq -1$, a dark energy dependent term +appears in the bending equation, unlike the case of the cosmological constant, +$\alpha=-1$. Due to this new term in particular, existing data for the light +bending at galactic scales yields $(1+\alpha)\lesssim {\cal O}(10^{-14})$, +thereby practically ruling out any such static and inhomogeneous dark energy +fluid we started with. Implications of this result pertaining to the uniqueness of +the Schwarzschild-de Sitter spacetime in such an inhomogeneous dark energy +background are discussed. +" +Free multivariate w*-semicrossed products: reflexivity and the bicommutant property," We study w*-semicrossed products over actions of the free semigroup and the +free abelian semigroup on (possibly non-selfadjoint) w*-closed algebras. We +show that they are reflexive when the dynamics are implemented by uniformly +bounded families of invertible row operators. Combining with results of Helmer +we derive that w*-semicrossed products over factors (on a separable Hilbert +space) are reflexive. Furthermore we show that w*-semicrossed products of +automorphic actions on maximal abelian selfadjoint algebras are reflexive. In +all cases we prove that the w*-semicrossed products have the bicommutant +property if and only if so does the ambient algebra of the dynamics. +" +Computation and theory of Euler sums of generalized hyperharmonic numbers," Recently, Dil and Boyadzhiev \cite{AD2015} proved an explicit formula for the +sum of multiple harmonic numbers whose indices are the sequence $\left( +{{\left\{ 0 \right\}_r},1} \right)$.
In this paper we show that the sums of +multiple harmonic numbers whose indices are the sequence $\left( {{\left\{ 0 +\right\}_r,1};{\left\{ 1 \right\}_{k-1}}} \right)$ can be expressed in terms +of (multiple) zeta values, multiple harmonic numbers and Stirling numbers of +the first kind, and give an explicit formula. +" +A Lightweight Music Texture Transfer System," Deep learning research on transformation problems for images and text has +attracted great attention. However, present methods for music feature +transfer using neural networks are far from practical application. In this +paper, we initiate a novel system for transferring the texture of music, and +release it as an open source project. Its core algorithm is composed of a +converter which represents sounds as texture spectra, a corresponding +reconstructor and a feed-forward transfer network. We evaluate this system from +multiple perspectives, and experimental results reveal that it achieves +convincing results in both sound effects and computational performance. +" +Assessment of Bayesian Expected Power via Bayesian Bootstrap," The Bayesian expected power (BEP) has become increasingly popular in sample +size determination and assessment of the probability of success (POS) for a +future trial. The BEP takes into consideration the uncertainty around the +parameters assumed by a power analysis and is thus more robust compared to the +traditional power that assumes a single set of parameters. Current methods for +assessing BEP are often based in a parametric framework by imposing a model on +the pilot data to derive and sample from the posterior distributions of the +parameters. Implementation of the model-based approaches can be analytically +challenging and computationally costly especially for multivariate data sets; +it also runs the risk of generating misleading BEP if the model is +mis-specified.
We propose an approach based on the Bayesian bootstrap technique +(BBS) to simulate future trials in the presence of individual-level pilot data, +based on which the empirical BEP can be calculated. The BBS approach is +model-free with no assumptions about the distribution of the prior data and +circumvents the analytical and computational complexity associated with +obtaining the posterior distribution of the parameters. Information from +multiple pilot studies is also straightforward to combine. We also propose the +double bootstrap (BS2), a frequentist counterpart to the BBS, that shares +similar properties and achieves the same goal as the BBS for BEP assessment. +Simulation studies and case studies are presented to demonstrate the +implementation of the BBS and BS2 techniques and to compare the BEP results +with model-based approaches. +" +The EBLM Project IV. Spectroscopic orbits of over 100 eclipsing M dwarfs masquerading as transiting hot-Jupiters," We present 2271 radial velocity measurements taken on 118 single-line binary +stars, taken over eight years with the CORALIE spectrograph. The binaries +consist of F/G/K primaries and M-dwarf secondaries. They were initially +discovered photometrically by the WASP planet survey, as their shallow eclipses +mimic a hot-Jupiter transit. The observations we present permit a precise +characterisation of the binary orbital elements and mass function. With +modelling of the primary star this mass function is converted to a mass of the +secondary star. In the future, this spectroscopic work will be combined with +precise photometric eclipses to draw an empirical mass/radius relation for the +bottom of the mass sequence. This has applications in both stellar astrophysics +and the growing number of exoplanet surveys around M-dwarfs. 
In particular, we +have discovered 34 systems with a secondary mass below $0.2 M_\odot$, and so we +will ultimately double the known number of very low-mass stars with +well-characterised masses and radii. +We are able to detect eccentricities as small as 0.001 and orbital periods to +sub-second precision. Our sample allows us to revisit some earlier work on the tidal +evolution of close binaries, extending it to low mass ratios. We find some +binaries that are eccentric at orbital periods < 3 days, while our longest +circular orbit has a period of 10.4 days. +By collating the EBLM binaries with published WASP planets and brown dwarfs, +we derive a mass spectrum with twice the resolution of previous work. We +compare the WASP/EBLM sample of tightly-bound orbits with work in the +literature on more distant companions up to 10 AU. We note that the brown dwarf +desert appears wider, as it carves into the planetary domain for our +short-period orbits. This would mean that a significantly reduced abundance of +planets begins at $\sim 3M_{\rm Jup}$, well before the Deuterium-burning limit. +[abridged] +" +Causal Inference from Observational Studies with Clustered Interference," Inferring causal effects from an observational study is challenging because +participants are not randomized to treatment. Observational studies in +infectious disease research present the additional challenge that one +participant's treatment may affect another participant's outcome, i.e., there +may be interference. In this paper recent approaches to defining causal effects +in the presence of interference are considered, and new causal estimands +designed specifically for use with observational studies are proposed. +Previously defined estimands target counterfactual scenarios in which +individuals independently select treatment with equal probability.
However, in +settings where there is interference between individuals within clusters, it +may be unlikely that treatment selection is independent between individuals in +the same cluster. The proposed causal estimands instead describe counterfactual +scenarios in which the treatment selection correlation structure is the same as +in the observed data distribution, allowing for within-cluster dependence in +the individual treatment selections. These estimands may be more relevant for +policy-makers or public health officials who desire to quantify the effect of +increasing the proportion of treated individuals in a population. Inverse +probability-weighted estimators for these estimands are proposed. The +large-sample properties of the estimators are derived, and a simulation study +demonstrating the finite-sample performance of the estimators is presented. The +proposed methods are illustrated by analyzing data from a study of cholera +vaccination in over 100,000 individuals in Bangladesh. +" +Sequential Dynamic Decision Making with Deep Neural Nets on a Test-Time Budget," Deep neural network (DNN) based approaches hold significant potential for +reinforcement learning (RL) and have already shown remarkable gains over +state-of-art methods in a number of applications. The effectiveness of DNN +methods can be attributed to leveraging the abundance of supervised data to +learn value functions, Q-functions, and policy function approximations without +the need for feature engineering. Nevertheless, the deployment of DNN-based +predictors with very deep architectures can pose an issue due to computational +and other resource constraints at test-time in a number of applications. 
We +propose a novel approach for reducing the average latency by learning a +computationally efficient gating function that is capable of recognizing states +in a sequential decision process for which policy prescriptions of a shallow +network suffice and deeper layers of the DNN have little marginal utility. The +overall system is adaptive in that it dynamically switches control actions +based on state-estimates in order to reduce average latency without sacrificing +terminal performance. We experiment with a number of alternative loss-functions +to train gating functions and shallow policies and show that in a number of +applications a speed-up of up to almost 5X can be obtained with little loss in +performance. +" +Linear Representation of Symmetric Games," Using the semi-tensor product of matrices, the structures of several kinds of +symmetric games are investigated via the linear representation of the symmetric +group in the structure vector of games as its representation space. First of +all, the symmetry, described as the action of the symmetric group on payoff +functions, is converted into the product of permutation matrices with structure +vectors of payoff functions. Using the linear representation of the symmetric +group in structure vectors, the algebraic conditions for the ordinary, +weighted, renaming and name-irrelevant symmetries are obtained respectively as +the invariance under the corresponding linear representations. Secondly, using +the linear representations the relationship between symmetric games and +potential games is investigated. This part is mainly focused on Boolean games. +An alternative proof is given to show that ordinary, renaming and weighted +symmetric Boolean games are also potential ones under our framework. The +corresponding potential functions are also obtained. Finally, an example is +given to show that some other Boolean games could also be potential games.
+" +Snorkel: Rapid Training Data Creation with Weak Supervision," Labeling training data is increasingly the largest bottleneck in deploying +machine learning systems. We present Snorkel, a first-of-its-kind system that +enables users to train state-of-the-art models without hand labeling any +training data. Instead, users write labeling functions that express arbitrary +heuristics, which can have unknown accuracies and correlations. Snorkel +denoises their outputs without access to ground truth by incorporating the +first end-to-end implementation of our recently proposed machine learning +paradigm, data programming. We present a flexible interface layer for writing +labeling functions based on our experience over the past year collaborating +with companies, agencies, and research labs. In a user study, subject matter +experts build models 2.8x faster and increase predictive performance an average +45.5% versus seven hours of hand labeling. We study the modeling tradeoffs in +this new setting and propose an optimizer for automating tradeoff decisions +that gives up to 1.8x speedup per pipeline execution. In two collaborations, +with the U.S. Department of Veterans Affairs and the U.S. Food and Drug +Administration, and on four open-source text and image data sets representative +of other deployments, Snorkel provides 132% average improvements to predictive +performance over prior heuristic approaches and comes within an average 3.60% +of the predictive performance of large hand-curated training sets. +" +Efficient assembly based on B-spline tailored quadrature rules for the IgA-SGBEM," This paper deals with the discrete counterpart of 2D elliptic model problems +rewritten in terms of Boundary Integral Equations. The study is done within the +framework of Isogeometric Analysis based on B-splines. 
In such a context, the +problem of constructing appropriate, accurate and efficient quadrature rules +for the Symmetric Galerkin Boundary Element Method is here investigated. The +new integration schemes, together with row assembly and sum factorization, are +used to build a more efficient strategy to derive the final linear system of +equations. Key ingredients are weighted quadrature rules tailored for +B--splines, that are constructed to be exact in the whole test space, also with +respect to the singular kernel. Several simulations are presented and +discussed, showing accurate evaluation of the involved integrals and outlining +the superiority of the new approach in terms of computational cost and elapsed +time with respect to the standard element-by-element assembly. +" +Perceptual audio loss function for deep learning," PESQ and POLQA are standards for automated assessment of the voice +quality of speech as experienced by human beings. The predictions of +those objective measures should come as close as possible to subjective quality +scores as obtained in subjective listening tests. Wavenet is a deep neural +network originally developed as a deep generative model of raw audio +wave-forms. The Wavenet architecture is based on dilated causal convolutions, which +exhibit very large receptive fields. In this short paper we suggest using the +Wavenet architecture, in particular its large receptive field, in order to learn +the PESQ algorithm. By doing so we can use it as a differentiable loss function for +speech enhancement. +" +From Complex Event Processing to Simple Event Processing," Many problems in Computer Science can be framed as the computation of queries +over sequences, or ""streams"" of data units called events. The field of Complex +Event Processing (CEP) relates to the techniques and tools developed to +efficiently process these queries.
However, most CEP systems developed so far +have concentrated on relatively narrow types of queries, which consist of +sliding windows, aggregation functions, and simple sequential patterns computed +over events that have a fixed tuple structure. Many of them boast high +throughput but, in return, are difficult to set up and cumbersome to extend with +user-defined elements. +This paper describes a variety of use cases taken from real-world scenarios +that present features seldom considered in classical CEP problems. It also +provides a broad review of current solutions, which includes tools and +techniques going beyond typical surveys on CEP. From a critical analysis of +these solutions, design principles for a new type of event stream processing +system are exposed. The paper proposes a simple, generic and extensible +framework for the processing of event streams of diverse types; it describes in +detail a stream processing engine, called BeepBeep, that implements these +principles. BeepBeep's modular architecture, which borrows concepts from many +other systems, is complemented with an extensible query language, called eSQL. +The end result is an open, versatile, and reasonably efficient query engine +that can be used in situations that go beyond the capabilities of existing +systems. +" +Differentially Private Empirical Risk Minimization with Input Perturbation," We propose a novel framework for differentially private ERM: input +perturbation. Existing differentially private ERM implicitly assumed that the +data contributors submit their private data to a database expecting that the +database invokes a differentially private mechanism for publication of the +learned model. In input perturbation, each data contributor independently +randomizes her/his data by itself and submits the perturbed data to the +database.
We show that the input perturbation framework theoretically +guarantees that the model learned with the randomized data eventually satisfies +differential privacy with the prescribed privacy parameters. At the same time, +input perturbation ensures local differential privacy with respect to the +server. We also show that the excess risk bound of the model learned with +input perturbation is $O(1/n)$ under a certain condition, where $n$ is the +sample size. This is the same as the excess risk bound of the state-of-the-art. +" +Empirical Distribution of Scaled Eigenvalues for Product of Matrices from the Spherical Ensemble," Consider the product of $m$ independent $n\times n$ random matrices from the +spherical ensemble for $m\ge 1$. The empirical distribution based on the $n$ +eigenvalues of the product is called the empirical spectral distribution. Two +recent papers by Götze, Kösters and Tikhomirov (2015) and Zeng (2016) +obtain the limit of the empirical spectral distribution for the product when +$m$ is a fixed integer. In this paper, we investigate the limiting empirical +distribution of scaled eigenvalues for the product of $m$ independent matrices +from the spherical ensemble in the case when $m$ changes with $n$, that is, +$m=m_n$ is an arbitrary sequence of positive integers. +" +Pulsed high magnetic field measurement via a Rubidium vapor sensor," We present a new technique to measure pulsed magnetic fields based on the use +of Rubidium in gas phase as a metrological standard. We have therefore +developed an instrument based on a laser inducing transitions at about 780~nm (D2 +line) in a Rubidium gas contained in a mini-cell of 3~mm~x~3~mm cross section. +To be able to insert such a cell in a standard high-field pulsed magnet we have +realized a fibred probe kept at a fixed temperature.
Transition frequencies for +both the $\pi$ (light polarization parallel to the magnetic field) and $\sigma$ +(light polarization perpendicular to the magnetic field) configurations are +measured by a commercial wavemeter. One innovation of our sensor is that, in +addition to monitoring the light transmitted by the Rb cell, which is usual, we +also monitor the fluorescence emission of the gas sample from a very small +volume, with the advantage of reducing the impact of the field inhomogeneity on +the field measurement. Our sensor has been tested up to about 58~T. +" +A Massive Shell of Supernova-formed Dust in SNR G54.1+0.3," While theoretical dust condensation models predict that most refractory +elements produced in core-collapse supernovae (SNe) efficiently condense into +dust, a large quantity of dust has so far only been observed in SN 1987A. We +present the analysis of Spitzer Space Telescope, Herschel Space Observatory, +Stratospheric Observatory for Infrared Astronomy (SOFIA), and AKARI +observations of the infrared (IR) shell surrounding the pulsar wind nebula in +the supernova remnant G54.1+0.3. We attribute a distinctive spectral feature at +21 $\mu$m to a magnesium silicate grain species that has been invoked in +modeling the ejecta-condensed dust in Cas A, which exhibits the same spectral +signature. If this species is responsible for producing the observed spectral +feature and accounts for a significant fraction of the observed IR continuum, +we find that it would be the dominant constituent of the dust in G54.1+0.3, +with possible secondary contributions from other compositions, such as carbon, +silicate, or alumina grains. The smallest mass of SN-formed dust required by +our models is 1.1 $\pm$ 0.8 $\rm M_{\odot}$. We discuss how these results may +be affected by varying dust grain properties and self-consistent grain heating +models.
The spatial distribution of the dust mass and temperature in G54.1+0.3 +confirms the scenario in which the SN-formed dust has not yet been processed by +the SN reverse shock and is being heated by stars belonging to a cluster in +which the SN progenitor exploded. The dust mass and composition suggest a +progenitor mass of 16$-$27 $\rm M_{\odot}$ and imply a high dust condensation +efficiency, similar to that found for Cas A and SN 1987A. The study provides +another example of significant dust formation in a Type IIP SN and sheds light +on the properties of pristine SN-condensed dust. +" +A Purely Functional Computer Algebra System Embedded in Haskell," We demonstrate how methods in Functional Programming can be used to implement +a computer algebra system. As a proof-of-concept, we present the +computational-algebra package. It is a computer algebra system implemented as +an embedded domain-specific language in Haskell, a purely functional +programming language. Utilising methods in functional programming and prominent +features of Haskell, this library achieves safety, composability, and +correctness at the same time. To demonstrate the advantages of our approach, we +have implemented advanced Gröbner basis algorithms, such as Faugère's +$F_4$ and $F_5$, in a composable way. +" +Weight recursions for any rotation symmetric Boolean functions," Let $f_n(x_1, x_2, \ldots, x_n)$ denote the algebraic normal form (polynomial +form) of a rotation symmetric Boolean function of degree $d$ in $n \geq d$ +variables and let $wt(f_n)$ denote the Hamming weight of this function. Let +$(1, a_2, \ldots, a_d)_n$ denote the function $f_n$ of degree $d$ in $n$ +variables generated by the monomial $x_1x_{a_2} \cdots x_{a_d}.$ Such a +function $f_n$ is called {\em monomial rotation symmetric} (MRS). 
It was proved +in a $2012$ paper that for any MRS $f_n$ with $d=3,$ the sequence of weights +$\{w_k = wt(f_k):~k = 3, 4, \ldots\}$ satisfies a homogeneous linear recursion +with integer coefficients. In this paper it is proved that such recursions +exist for any rotation symmetric function $f_n;$ such a function is generated +by some sum of $t$ monomials of various degrees. The last section of the paper +gives a Mathematica program which explicitly computes the homogeneous linear +recursion for the weights, given any rotation symmetric $f_n.$ The reader who +is only interested in finding some recursions can use the program and not be +concerned with the details of the rather complicated proofs in this paper. +" +Generalized forbidden subposet problems," A subfamily $\{F_1,F_2,\dots,F_{|P|}\}\subseteq {\cal F}$ of sets is a copy +of a poset $P$ in ${\cal F}$ if there exists a bijection $\phi:P\rightarrow +\{F_1,F_2,\dots,F_{|P|}\}$ such that whenever $x \le_P x'$ holds, then so does +$\phi(x)\subseteq \phi(x')$. For a family ${\cal F}$ of sets, let $c(P,{\cal +F})$ denote the number of copies of $P$ in ${\cal F}$, and we say that ${\cal +F}$ is $P$-free if $c(P,{\cal F})=0$ holds. For any two posets $P,Q$ let us +denote by $La(n,P,Q)$ the maximum number of copies of $Q$ over all $P$-free +families ${\cal F} \subseteq 2^{[n]}$, i.e. $\max\{c(Q,{\cal F}): {\cal F} +\subseteq 2^{[n]}, c(P,{\cal F})=0 \}$. +This generalizes the well-studied parameter $La(n,P)=La(n,P,P_1)$ where $P_1$ +is the one element poset. The quantity $La(n,P)$ has been determined (precisely +or asymptotically) for many posets $P$, and in all known cases an +asymptotically best construction can be obtained by taking as many middle +levels as possible without creating a copy of $P$. +In this paper we consider the first instances of the problem of determining +$La(n,P,Q)$. We find its value when $P$ and $Q$ are small posets, like chains, +forks, the $N$ poset and diamonds. 
Already these special cases show that the +extremal families are completely different from those in the original $P$-free +cases: sometimes not middle or consecutive levels maximize $La(n,P,Q)$ and +sometimes no asymptotically extremal family is the union of levels. +Finally, we determine the maximum number of copies of complete multi-level +posets in $k$-Sperner families. The main tools for this are the profile +polytope method and two extremal set system problems that are of independent +interest: we maximize the number of $r$-tuples $A_1,A_2,\dots, A_r \in {\cal +A}$ over all antichains ${\cal A}\subseteq 2^{[n]}$ such that (i) +$\cap_{i=1}^rA_i=\emptyset$, (ii) $\cap_{i=1}^rA_i=\emptyset$ and +$\cup_{i=1}^rA_i=[n]$. +" +Radiative Heat Transfer Between Core-Shell Nanoparticles," Radiative heat transfer in systems with core-shell nanoparticles may exhibit +not only a combination of disparate physical properties of its components but +also further enhanced properties that arise from the synergistic properties of +the core and shell components. We study the thermal conductance between two +core-shell nanoparticles (CSNPs). The contribution of electric and magnetic +dipole moments to the thermal conductance depends sensitively on the core and +shell materials, and is adjustable via the core size and shell thickness. We +predict that the radiative heat transfer in a dimer of Au@SiO2 CSNPs (i.e., +silica-coated gold nanoparticles) could be enhanced by several orders of magnitude +compared to bare Au nanoparticles. Conversely, a reduction of several orders of +magnitude in the heat transfer is possible between SiO2@Au CSNPs (i.e., silica +as a core and gold as a shell) compared to uncoated SiO2 nanoparticles. +" +Robust Rigid Point Registration based on Convolution of Adaptive Gaussian Mixture Models," Matching 3D rigid point clouds in complex environments robustly and +accurately is still a core technique used in many applications.
This paper
+proposes a new architecture combining error estimation from sample covariances
+and dual global probability alignment based on the convolution of adaptive
+Gaussian Mixture Models (GMM) from point clouds. Firstly, a novel adaptive GMM
+is defined using probability distributions from the corresponding points. Then
+rigid point cloud alignment is performed by maximizing the global probability
+from the convolution of dual adaptive GMMs in the whole 2D or 3D space, which
+can be efficiently optimized and has a large zone of accurate convergence.
+Thousands of trials have been conducted on 200 models from public 2D and 3D
+datasets to demonstrate superior robustness and accuracy in complex
+environments with unpredictable noise, outliers, occlusion, initial rotation,
+shape and missing points.
+"
+Attitudes of Children with Autism towards Robots: An Exploratory Study," In this exploratory study we assessed how attitudes of children with autism
+spectrum disorder (ASD) towards robots, together with children's autism-related
+social impairments, are linked to indicators of children's preference for an
+interaction with a robot over an interaction with a person. We found that
+children with ASD have overall positive attitudes towards robots and that they
+often prefer interacting with a robot rather than with a person. Several of the
+children's attitudes were linked to children's longer gazes towards a robot
+compared to a person. Autism-related social impairments were linked to more
+repetitive and stereotyped behaviors and to a shorter gaze duration in the
+interaction with the robot compared to the person. These preliminary results
+contribute to a better understanding of factors that might help determine
+sub-groups of children with ASD for whom robots could be particularly useful.
+"
+Electromagnetic dipole moments of charged baryons with bent crystals at the LHC," We propose a unique program of measurements of electric and magnetic dipole
+moments of charm, beauty and strange charged baryons at the LHC, based on the
+phenomenon of spin precession of channeled particles in bent crystals. Studies
+of crystal channeling and spin precession of positively- and negatively-charged
+particles are presented, along with feasibility studies and expected
+sensitivities for the proposed experiment using a layout based on the LHCb
+detector.
+"
+Distributed Gauss-Newton Method for State Estimation Using Belief Propagation," We present a novel distributed Gauss-Newton method for the non-linear state
+estimation (SE) model based on a probabilistic inference method called belief
+propagation (BP). The main novelty of our work comes from applying BP
+sequentially over a sequence of linear approximations of the SE model, akin to
+what is done by the Gauss-Newton method. The resulting iterative Gauss-Newton
+belief propagation (GN-BP) algorithm can be interpreted as a distributed
+Gauss-Newton method with the same accuracy as the centralized SE, while
+introducing a number of advantages of the BP framework. The paper provides an
+extensive numerical study of the GN-BP algorithm, details its convergence
+behavior, and gives a number of useful insights for its implementation.
+"
+A two-dimensional index to quantify both scientific research impact and scope," Modern management of research is increasingly based on quantitative
+bibliometric indices. Nowadays, the h-index is a major measure of research
+output that has supplanted all other citation-based indices. In this context,
+indicators that complement the h-index by evaluating different facets of
+research achievement are compelling.
As an additional bibliometric source that
+can be easily extracted from available databases, we propose to use the number
+of distinct journals Nj in which an individual's papers were published. We show
+that Nj is independent of citation counts, and argue that it is a relevant
+indicator of research scope, since it quantifies readership extent and
+scientific multidisciplinarity. Combining the h-index and Nj, we define a
+two-dimensional index (H,M) that measures both the output (through H) and the
+outreach (through M) of an individual's research. In order to probe the
+relevance of this two-dimensional index, we have analysed the scientific
+production of a panel of physicists belonging to the same Department but with
+different research themes. The analysis of bibliometric data confirms that the
+two indices are uncorrelated and shows that while H reliably ranks the impact
+of researchers, M accurately sorts multidisciplinary or readership aspects. We
+conclude that the two indices together offer a more complete picture of
+research performance and can be applied to individuals, research groups or
+institutions.
+"
+"Dynamic programming algorithms, efficient solution of the LP-relaxation and approximation schemes for the Penalized Knapsack Problem"," We consider the 0-1 Penalized Knapsack Problem (PKP). Each item has a profit,
+a weight and a penalty and the goal is to maximize the sum of the profits minus
+the greatest penalty value of the items included in a solution. We propose an
+exact approach relying on a procedure which narrows the relevant range of
+penalties, on the identification of a core problem and on dynamic programming.
+The proposed approach turns out to be very effective in solving hard instances
+of PKP and compares favorably both to the commercial solver CPLEX 12.5 applied
+to the ILP formulation of the problem and to the best available exact
+algorithm in the literature.
Then we present a general inapproximability result and
+investigate several relevant special cases which permit fully polynomial time
+approximation schemes (FPTASs).
+"
+Non-reciprocal Light-harvesting Nanoantennae Made by Nature," From the point of view of classical electrodynamics, the performance of
+two-dimensional shape-simplified antennae is discussed based upon the shape of
+naturally designed systems to harvest light. We explain the function of the
+notch at the complex, the function of the PufX present at the notch, the
+function of the special pair, why the bacteriochlorophylls are dielectric
+instead of conducting, and a mechanism that prevents damage from excess
+sunlight. The non-heme iron at the reaction center, the toroidal shape of the
+light harvester, the functional role played by the long-lasting spectrometric
+signal observed, and the photon anti-bunching observed suggest
+non-reciprocity. Our model has the required structural information
+automatically built in. We further comment on how our prediction might be
+verified experimentally.
+"
+Analysis of a SIRI epidemic model with distributed delay and relapse," We investigate the global behaviour of a SIRI epidemic model with distributed
+delay and relapse. From the theory of functional differential equations with
+delay, we prove that the solution of the system is unique, bounded, and
+positive for all time. The basic reproduction number $R_{0}$ for the model is
+computed. By means of the direct Lyapunov method and the LaSalle invariance
+principle, we prove that the disease-free equilibrium is globally
+asymptotically stable when $R_{0} < 1$. Moreover, we show that there is a
+unique endemic equilibrium, which is globally asymptotically stable when
+$R_{0} > 1$.
+" +TensOrMachine: Probabilistic Boolean Tensor Decomposition," Boolean tensor decomposition approximates data of multi-way binary +relationships as product of interpretable low-rank binary factors, following +the rules of Boolean algebra. Here, we present its first probabilistic +treatment. We facilitate scalable sampling-based posterior inference by +exploitation of the combinatorial structure of the factor conditionals. Maximum +a posteriori decompositions feature higher accuracies than existing techniques +throughout a wide range of simulated conditions. Moreover, the probabilistic +approach facilitates the treatment of missing data and enables model selection +with much greater accuracy. We investigate three real-world data-sets. First, +temporal interaction networks in a hospital ward and behavioural data of +university students demonstrate the inference of instructive latent patterns. +Next, we decompose a tensor with more than 10 billion data points, indicating +relations of gene expression in cancer patients. Not only does this demonstrate +scalability, it also provides an entirely novel perspective on relational +properties of continuous data and, in the present example, on the molecular +heterogeneity of cancer. Our implementation is available on GitHub: +this https URL. +" +Word Embeddings: A Survey," This work lists and describes the main recent strategies for building +fixed-length, dense and distributed representations for words, based on the +distributional hypothesis. These representations are now commonly called word +embeddings and, in addition to encoding surprisingly good syntactic and +semantic information, have been proven useful as extra features in many +downstream NLP tasks. 
+" +The State of Sustainable Research Software: Results from the Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE5.1)," This article summarizes motivations, organization, and activities of the +Workshop on Sustainable Software for Science: Practice and Experiences +(WSSSPE5.1) held in Manchester, UK in September 2017. The WSSSPE series +promotes sustainable research software by positively impacting principles and +best practices, careers, learning, and credit. This article discusses the Code +of Conduct, idea papers, position papers, experience papers, demos, and +lightning talks presented during the workshop. The main part of the article +discusses the speed-blogging groups that formed during the meeting, along with +the outputs of those sessions. +" +Complex Analysis of Real Functions III: Extended Fourier Theory," In the context of the complex-analytic structure within the unit disk +centered at the origin of the complex plane, that was presented in a previous +paper, we show that the complete Fourier theory of integrable real functions is +contained within that structure, that is, within the structure of the space of +inner analytic functions on the open unit disk. We then extend the Fourier +theory beyond the realm of integrable real functions, to include for example +singular Schwartz distributions, and possibly other objects. +" +Topological density estimation," We introduce \emph{topological density estimation} (TDE), in which the +multimodal structure of a probability density function is topologically +inferred and subsequently used to perform bandwidth selection for kernel +density estimation. We show that TDE has performance and runtime advantages +over competing methods of kernel density estimation for highly multimodal +probability density functions. We also show that TDE yields useful auxiliary +information, that it can determine its own suitability for use, and we explain +its performance. 
+" +From Bayesian Sparsity to Gated Recurrent Nets," The iterations of many first-order algorithms, when applied to minimizing +common regularized regression functions, often resemble neural network layers +with pre-specified weights. This observation has prompted the development of +learning-based approaches that purport to replace these iterations with +enhanced surrogates forged as DNN models from available training data. For +example, important NP-hard sparse estimation problems have recently benefitted +from this genre of upgrade, with simple feedforward or recurrent networks +ousting proximal gradient-based iterations. Analogously, this paper +demonstrates that more powerful Bayesian algorithms for promoting sparsity, +which rely on complex multi-loop majorization-minimization techniques, mirror +the structure of more sophisticated long short-term memory (LSTM) networks, or +alternative gated feedback networks previously designed for sequence +prediction. As part of this development, we examine the parallels between +latent variable trajectories operating across multiple time-scales during +optimization, and the activations within deep network structures designed to +adaptively model such characteristic sequences. The resulting insights lead to +a novel sparse estimation system that, when granted training data, can estimate +optimal solutions efficiently in regimes where other algorithms fail, including +practical direction-of-arrival (DOA) and 3D geometry recovery problems. The +underlying principles we expose are also suggestive of a learning process for a +richer class of multi-loop algorithms in other domains. +" +Unified estimation framework for unnormalized models with statistical efficiency," Parameter estimation of unnormalized models is a challenging problem because +normalizing constants are not calculated explicitly and maximum likelihood +estimation is computationally infeasible. 
Although some consistent estimators
+have been proposed earlier, the problem of statistical efficiency remains.
+In this study, we propose a unified, statistically efficient estimation
+framework for unnormalized models and several novel efficient estimators with
+reasonable computational time regardless of whether the sample space is
+discrete or continuous. The loss functions of the proposed estimators are
+derived by combining the following two methods: (1) density-ratio matching
+using Bregman divergence, and (2) plugging in nonparametric estimators. We also
+analyze the properties of the proposed estimators when the unnormalized model
+is misspecified. Finally, the experimental results demonstrate the advantages
+of our method over existing approaches.
+"
+Spectrally tunable linear polarization rotation using stacked metallic metamaterials," We theoretically study the transmission properties of a stack of metallic
+metamaterials and show that it is able to achieve perfect transmission while
+selectively exhibiting broadband ($Q<10$) or extremely narrowband ($Q>10^5$)
+polarization rotation. We especially highlight how the arrangement of the
+stacked structure, as well as the metamaterial unit cell geometry, can highly
+influence the transmission in the spectral domain. For this purpose, we use an
+extended analytical Jones formalism that allows us to get a rigorous and
+analytical expression of the transmission. Such versatile structures could find
+potential applications in polarimetry or in the control of the light
+polarization for THz waves.
+"
+On the Computational Complexity of Variants of Combinatorial Voter Control in Elections," Voter control problems model situations in which an external agent tries to
+affect the result of an election by adding or deleting the fewest number of
+voters. The goal of the agent is to make a specific candidate either win
+(\emph{constructive} control) or lose (\emph{destructive} control) the
+election.
We study the constructive and destructive voter control problems when
+adding and deleting voters have a \emph{combinatorial flavor}: If we add
+(resp.\ delete) a voter~$v$, we also add (resp.\ delete) a bundle~$\kappa(v) $
+of voters that are associated with~$v$. While the bundle~$\kappa(v)$ may have
+more than one voter, a voter may also be associated with more than one bundle.
+We analyze the computational complexity of the four voter control problems for
+the Plurality rule. We obtain that, in general, making a candidate lose is
+computationally easier than making her win. In particular, if the bundling
+relation is symmetric (i.e.\ $\forall w\colon w \in \kappa(v) \Leftrightarrow v
+\in \kappa(w) $), and if each voter has at most two voters associated with him,
+then destructive control is polynomial-time solvable while the constructive
+variant remains $\NP$-hard. Even if the bundles are disjoint (i.e.\ $\forall
+w\colon w \in \kappa(v) \Leftrightarrow \kappa(v) = \kappa(w) $), the
+constructive problem variants remain intractable. Finally, the minimization
+variant of constructive control by adding voters does not admit an efficient
+approximation algorithm, unless P=NP.
+"
+Stochastic Submodular Maximization: The Case of Coverage Functions," Stochastic optimization of continuous objectives is at the heart of modern
+machine learning. However, many important problems are of a discrete nature and
+often involve submodular objectives. We seek to unleash the power of stochastic
+continuous optimization, namely stochastic gradient descent and its variants,
+on such discrete problems. We first introduce the problem of stochastic
+submodular optimization, where one needs to optimize a submodular objective
+which is given as an expectation.
Our model captures situations where the +discrete objective arises as an empirical risk (e.g., in the case of +exemplar-based clustering), or is given as an explicit stochastic model (e.g., +in the case of influence maximization in social networks). By exploiting that +common extensions act linearly on the class of submodular functions, we employ +projected stochastic gradient ascent and its variants in the continuous domain, +and perform rounding to obtain discrete solutions. We focus on the rich and +widely used family of weighted coverage functions. We show that our approach +yields solutions that are guaranteed to match the optimal approximation +guarantees, while reducing the computational cost by several orders of +magnitude, as we demonstrate empirically. +" +A non-singular theory of dislocations in anisotropic crystals," We develop a non-singular theory of three-dimensional dislocation loops in a +particular version of Mindlin's anisotropic gradient elasticity with up to six +length scale parameters. The theory is systematically developed as a +generalization of the classical anisotropic theory in the framework of +linearized incompatible elasticity. The non-singular version of all key +equations of anisotropic dislocation theory are derived as line integrals, +including the Burgers displacement equation with isolated solid angle, the +Peach-Koehler stress equation, the Mura-Willis equation for the elastic +distortion, and the Peach-Koehler force. The expression for the interaction +energy between two dislocation loops as a double line integral is obtained +directly, without the use of a stress function. It is shown that all the +elastic fields are non-singular, and that they converge to their classical +counterparts a few characteristic lengths away from the dislocation core. 
In
+practice, the non-singular fields can be obtained from the classical ones by
+replacing the classical (singular) anisotropic Green's tensor with the
+non-singular anisotropic Green's tensor derived by \cite{Lazar:2015ja}. The
+elastic solution is valid for arbitrary anisotropic media. In addition to the
+classical anisotropic elastic constants, the non-singular Green's tensor
+depends on a second order symmetric tensor of length scale parameters modeling
+a weak non-locality, whose structure depends on the specific class of crystal
+symmetry. The anisotropic Helmholtz operator defined by such a tensor admits a
+Green's function which is used as the spreading function for the Burgers
+vector density. As a consequence, the Burgers vector density spreads
+differently in different crystal structures.
+"
+A Modularized Efficient Framework for Non-Markov Time Series Estimation," We present a compartmentalized approach to finding the maximum a-posteriori
+(MAP) estimate of a latent time series that obeys a dynamic stochastic model
+and is observed through noisy measurements. We specifically consider modern
+signal processing problems with non-Markov signal dynamics (e.g. group
+sparsity) and/or non-Gaussian measurement models (e.g. point process
+observation models used in neuroscience). Through the use of auxiliary
+variables in the MAP estimation problem, we show that a consensus formulation
+of the alternating direction method of multipliers (ADMM) enables iteratively
+computing separate estimates based on the likelihood and prior and
+subsequently ""averaging"" them in an appropriate sense using a Kalman smoother.
+As such, this can be applied to a broad class of problem settings and only
+requires modular adjustments when interchanging various aspects of the
+statistical model. Under broad log-concavity assumptions, we show that the
+separate estimation problems are convex optimization problems and that the
+iterative algorithm converges to the MAP estimate.
As such, this framework can capture non-Markov latent time
+series models and non-Gaussian measurement models. We provide example
+applications involving (i) group-sparsity priors, within the context of
+electrophysiologic spectrotemporal estimation, and (ii) non-Gaussian
+measurement models, within the context of dynamic analyses of learning with
+neural spiking and behavioral observations.
+"
+A holistic look at requirements engineering practices in the gaming industry," In this work we present an account of the status of requirements engineering
+in the gaming industry. Recent papers in the area were surveyed.
+Characterizations of the gaming industry were deliberated upon by portraying
+its relations with the market industry. Some research directions in the area of
+requirements engineering in the gaming industry were also mentioned.
+"
+Hamilton paths with lasting separation," We determine the asymptotics of the largest cardinality of a set of Hamilton
+paths in the complete graph with vertex set [n] under the condition that for
+any two of the paths in the family there is a subpath of length k entirely
+contained in only one of them and edge-disjoint from the other one.
+"
+2D SLAM Quality Evaluation Methods," SLAM (Simultaneous Localization and Mapping) is one of the most challenging
+problems for mobile platforms, and there is a huge number of modern SLAM
+algorithms. The choice of the algorithm that might be used in every particular
+problem requires prior knowledge about the advantages and disadvantages of
+each algorithm. This paper presents an approach for the comparison of SLAM
+algorithms that allows finding the most accurate one. The emphasis of the
+research is on 2D SLAM algorithms, and the focus of the analysis is the 2D map
+that is built after the algorithm has run.
Three metrics for the evaluation of maps are presented in
+this paper.
+"
+Communication: Two-structure thermodynamics unifying all scenarios for water anomalies," The anomalous behaviors of water in the supercooled region (namely, the
+decrease of density and the sharp increase of the response functions at
+atmospheric pressure) are mainly associated with either the existence of
+liquid-liquid criticality or the re-entrance of the vapor-liquid spinodal into
+positive pressures. Despite their different origins, both scenarios introduce
+a divergence of the response functions. We argue in this communication that
+criticality is behind the water anomalies, that water has a normal
+vapor-liquid spinodal, and that the re-entrant feature was predicted because
+of a curved density surface arising as a consequence of the existence of the
+critical point. We prove this with a new two-structure equation of state whose
+criticality results from the non-ideal mixing of two states, with the
+background state modeled by exactly the same equation of state that was
+deployed to predict the vapor-liquid spinodal when the re-entrant scenario was
+formulated.
+"
+The scattering problem for the $abcd$ Boussinesq system in the energy space," The Boussinesq $abcd$ system is a 4-parameter set of equations posed in
+$\mathbb{R}_t \times \mathbb{R}_x$, originally derived by Bona, Chen and Saut
+as first order 2-wave approximations of the incompressible and irrotational,
+two dimensional water wave equations in the shallow water wave regime, in the
+spirit of the original Boussinesq derivation. Among many particular regimes,
+each depending on the values of the parameters $(a,b,c,d)$ present in the
+equations, the ""generic"" regime is characterized by the setting
+$b,d>0$ and $a,c<0$. The system is Hamiltonian if also $b=d$. The equations in
+this regime are globally well-posed in the energy space $H^1\times H^1$,
+provided one works with small solutions.
In this paper, we investigate decay
+and the scattering problem in this regime, which is characterized as having
+(quadratic) long-range nonlinearities, very weak linear decay $O(t^{-1/3})$
+because of the one dimensional setting, and existence of non scattering
+solutions (solitary waves). We prove, among other results, that for
+sufficiently dispersive $abcd$ systems (characterized only in terms of the
+parameters $a, b$ and $c$), all small solutions must decay to zero, locally
+strongly in the energy space, in a proper subset of the light cone $|x|\leq
+|t|$. We prove this result by constructing three suitable virial functionals
+in the spirit of works by Kowalczyk, Martel and the second author, and more
+recently by the last three authors (valid for the simpler scalar ""good
+Boussinesq"" model), leading to global in time decay and control of all local
+$H^1\times H^1$ terms. No parity nor extra decay assumptions are needed to
+prove decay, only small solutions in the energy space.
+"
+Directed clustering in weighted networks: a new perspective," In this paper, we consider the problem of assessing local clustering in
+complex networks. Various definitions for this measure have been proposed for
+the case of networks having weighted edges, but less attention has been paid
+to networks that are both weighted and directed. We provide a new local
+clustering coefficient for this kind of network, starting from those existing
+in the literature for the weighted and undirected case. Furthermore, we
+extract from our coefficient four specific components, in order to separately
+consider different link patterns of triangles. Empirical applications on
+several real networks from different frameworks and of different orders are
+provided. The performance of our coefficient is also compared with that of
+existing coefficients in the literature.
+"
+Deep Learning Can Reverse Photon Migration for Diffuse Optical Tomography," Can artificial intelligence (AI) learn complicated non-linear physics? Here
+we propose a novel deep learning approach that learns non-linear photon
+scattering physics and obtains an accurate 3D distribution of optical
+anomalies. In contrast to the traditional black-box deep learning approaches
+to inverse problems, our deep network learns to invert the Lippmann-Schwinger
+integral equation which describes the essential physics of photon migration of
+diffuse near-infrared (NIR) photons in turbid media. As an example of clinical
+relevance, we applied the method to our prototype diffuse optical tomography
+(DOT). We show that our deep neural network, trained with only simulation
+data, can accurately recover the location of anomalies within biomimetic
+phantoms and live animals without the use of an exogenous contrast agent.
+"
+A new extended Cardioid model: an application to wind data," The Cardioid distribution is a relevant model for circular data. However,
+this model is not suitable for scenarios where there is asymmetry or
+multimodality. In order to overcome this gap, an extended Cardioid model is
+proposed, which is called the Exponentiated Cardioid (EC) distribution.
+Besides, some of its properties are derived, such as trigonometric moments,
+kurtosis and skewness. A discussion about the modality and expressions for the
+quantiles through approximations of the studied model are also presented. To
+fit the EC model, two estimation methods are presented, based on maximum
+likelihood and quantile least squares procedures. The performance of the
+proposed estimators is evaluated in a Monte Carlo simulation study, adopting
+both bias and mean square error as comparison criteria. Finally, the proposed
+model is applied to a dataset in the wind direction context. Results indicate
+that the EC distribution may outperform the Cardioid and von Mises
+distributions.
+" +On the rank and the convergence rate towards the Sato-Tate measure," Let $A$ be an abelian variety defined over a number field and let $G$ denote +its Sato-Tate group. Under the assumption of certain standard conjectures on +$L$-functions attached to the irreducible representations of $G$, we study the +convergence rate of any virtual selfdual character of $G$. We find that this +convergence rate is dictated by several arithmetic invariants of $A$, such as +its rank or its Sato-Tate group $G$. The results are consonant with some +previous experimental observations, and we also provide additional numerical +evidence consistent with them. The techniques that we use were introduced by +Sarnak, in order to explain the bias in the sign of the Frobenius traces of an +elliptic curve without complex multiplication defined over $\mathbb{Q}$. We +show that the same methods can be adapted to study the convergence rate of the +characters of its Sato-Tate group, and that they can also be employed in the +more general case of abelian varieties over number fields. A key tool in our +analysis is the existence of limiting distributions for automorphic +$L$-functions, which is due to Akbary, Ng, and Shahabi. +" +Adaptive Sampling for Convex Regression," In this paper, we introduce the first principled adaptive-sampling procedure +for learning a convex function in the $L_\infty$ norm, a problem that arises +often in the behavioral and social sciences. We present a function-specific +measure of complexity and use it to prove that, for each convex function +$f_{\star}$, our algorithm nearly attains the information-theoretically +optimal, function-specific error rate. We also corroborate our theoretical +contributions with numerical experiments, finding that our method substantially +outperforms passive, uniform sampling for favorable synthetic and data-derived +functions in low-noise settings with large sampling budgets. 
Our results also
+suggest an idealized ""oracle strategy"", which we use to gauge the potential
+advantage of any adaptive-sampling strategy over passive sampling, for any
+given convex function.
+"
+Time Accuracy Analysis of Post-Mediation Packet-Switched Charging Data Records for Urban Mobility Applications," Telecommunication data is being used increasingly in urban mobility
+applications around the world. Despite its ubiquity and usefulness, technical
+difficulties arise when using Packet-Switched Charging Data Records (CDR),
+since they were not originally intended for this kind of application. Due to
+their particular nature, a trade-off must be considered between accessibility
+and time accuracy when using this data. On the one hand, to obtain highly
+accurate timestamps, huge amounts of network-level CDR must be extracted and
+stored. This task is very difficult and expensive since highly critical
+network node applications can be compromised in the data extraction and
+storage. On the other hand, post-mediation CDR can be easily accessed since no
+network node application is involved in its analysis. The price is the lower
+accuracy of the recorded timestamps, since several aggregation and filtering
+steps are performed in previous stages of the charging pipelines. In this
+work, a detailed description of the timestamp error problem using
+post-mediation CDR is presented, together with a methodology to analyze error
+time series collected in each network cell.
+"
+Multi-fidelity Bayesian Optimisation with Continuous Approximations," Bandit methods for black-box optimisation, such as Bayesian optimisation, are
+used in a variety of applications including hyper-parameter tuning and
+experiment design. Recently, \emph{multi-fidelity} methods have garnered
+considerable attention since function evaluations have become increasingly
+expensive in such applications.
Multi-fidelity methods use cheap approximations
+to the function of interest to speed up the overall optimisation process.
+However, most multi-fidelity methods assume only a finite number of
+approximations. In many practical applications however, a continuous spectrum
+of approximations might be available. For instance, when tuning an expensive
+neural network, one might choose to approximate the cross validation
+performance using less data $N$ and/or fewer training iterations $T$. Here,
+the approximations are best viewed as arising out of a continuous two
+dimensional space $(N,T)$. In this work, we develop a Bayesian optimisation
+method, BOCA, for this setting. We characterise its theoretical properties and
+show that it achieves better regret than strategies which ignore the
+approximations. BOCA outperforms several other baselines in synthetic and real
+experiments.
+"
+X-rays writing/reading of Charge Density Waves in the CuO2 plane of a simple cuprate superconductor," It is now well established that superconductivity in cuprates competes with
+charge modulations giving electronic phase separation at the nanoscale. More
+specifically, the superconducting electronic current takes root in the
+available free space left by electronic charge-ordered domains, called charge
+density wave (CDW) puddles. This means that the CDW domain arrangement plays a
+fundamental role in the mechanism of high temperature superconductivity in
+cuprates. Here we report on the possibility of controlling the population and
+spatial organization of the charge density wave puddles in a single crystal of
+La2CuO4+y through X-ray illumination and thermal treatments. We apply a
+pump-probe method based on X-ray illumination as pump and X-ray diffraction as
+probe, establishing a writing and reading procedure for CDW puddles. Our
+findings are expected to open new routes for the advanced design and
+manipulation of superconducting pathways in new electronics.
+" +Alexandrov's theorem revisited," We show that, among sets of finite perimeter, balls are the only +volume-constrained critical points of the perimeter functional. +" +Autonomous Multirobot Excavation for Lunar Applications," In this paper, a control approach called Artificial Neural Tissue (ANT) is +applied to multirobot excavation for lunar base preparation tasks including +clearing landing pads and burying of habitat modules. We show, for the first +time, a team of autonomous robots excavating a terrain to match a given 3D +blueprint. Constructing mounds around landing pads will provide physical +shielding from debris during launch/landing. Burying human habitat modules +under 0.5 m of lunar regolith is expected to provide both radiation shielding +and maintain temperatures of -25 $^{\circ}$C. This minimizes base life-support +complexity and reduces launch mass. ANT is compelling for a lunar mission +because it does not require a team of astronauts for excavation and it requires +minimal supervision. The robot teams are shown to autonomously interpret +blueprints, excavate and prepare sites for a lunar base. Because little +pre-programmed knowledge is provided, the controllers discover creative +techniques. ANT evolves techniques such as slot-dozing that would otherwise +require excavation experts. This is critical in making an excavation mission +feasible when it is prohibitively expensive to send astronauts. The controllers +evolve elaborate negotiation behaviors to work in close quarters. These and +other techniques, such as concurrent evolution of the controller and team size, +are shown to tackle the problem of antagonism, in which too many robots interfere, +reducing the overall efficiency or, worse, resulting in gridlock. While many +challenges remain with this technology, our work shows a compelling pathway for +field testing this approach. 
+" +Optimal Context Aware Transmission Strategy for non-Orthogonal D2D Communications," The increasing traffic demand in cellular networks has recently led to the +investigation of new strategies to save precious resources like spectrum and +energy. A possible solution employs direct device-to-device (D2D) +communications, which is particularly promising when the two terminals involved +in the communications are located in close proximity. The D2D communications +should coexist with other transmissions, so they must be carefully scheduled in +order to avoid harmful interference. In this paper, we analyze how +distributed context-awareness, obtained by observing a few local channel and +topology parameters, can be used to adaptively exploit D2D communications. We +develop a rigorous theoretical analysis to quantify the balance between the +gain offered by a D2D transmission and its impact on the other network +communications. Based on this analysis, we derive two theorems that define the +optimal strategy to be employed, in terms of throughput maximization, when a +single or multiple transmit power levels are available for the D2D +communications. We compare this strategy to the state-of-the-art in the same +network scenario, showing how context awareness can be exploited to achieve a +higher sum throughput and improved fairness. +" +Giant-spin nonlinear response theory of magnetic nanoparticle hyperthermia: a field dependence study," Understanding high-field amplitude electromagnetic heat loss phenomena is of +great importance, in particular in the biomedical field, since +heat-delivery treatment plans might rely on analytical models that are only +valid at low field amplitudes. Here, we develop a nonlinear response model +valid for single-domain nanoparticles of larger particle sizes and higher +field amplitudes in comparison to linear response theory. 
A nonlinear +magnetization expression and a generalized heat loss power equation are +obtained and compared with the exact solution of the stochastic +Landau-Lifshitz-Gilbert equation assuming the giant-spin hypothesis. The model +is valid within the hyperthermia therapeutic window and predicts a shift of +optimum particle size and distinct heat loss field amplitude exponents. +Experimental hyperthermia data with distinct ferrite-based nanoparticles, as +well as third harmonic magnetization data supports the nonlinear model, which +also has implications for magnetic particle imaging and magnetic thermometry. +" +Anisotropy crossover in the frustrated Hubbard model on four-chain cylinders," Motivated by dimensional crossover in layered organic ${\kappa}$ salts, we +determine the phase diagram of a system of four periodically coupled Hubbard +chains with frustration at half filling as a function of the interchain hopping +${t_{\perp}/t}$ and interaction strength ${U/t}$ at a fixed ratio of +frustration and interchain hopping ${t'/t_{\perp}=-0.5}$. We cover the range +from the one-dimensional limit of uncoupled chains (${t_{\perp}/t=0.0}$) to the +isotropic model (${t_{\perp}/t=1.0}$). For strong ${U/t}$, we find an +antiferromagnetic insulator; in the weak-to-moderate-interaction regime, the +phase diagram features quasi-one-dimensional antiferromagnetic behavior, an +incommensurate spin-density wave, and a metallic phase as ${t_{\perp}/t}$ is +increased. We characterize the phases through their magnetic ordering, +dielectric response, and dominant static correlations. Our analysis is based +primarily on a variant of the density-matrix renormalization-group algorithm +based on an efficient hybrid-real-momentum-space formulation, in which we can +treat relatively large lattices albeit of a limited width. 
This is complemented +by a variational cluster approximation study with a cluster geometry +corresponding to the cylindrical lattice, allowing us to directly compare the +two methods for this geometry. As an outlook, we make contact with work +studying dimensional crossover in the full two-dimensional system. +" +On the concavity of a sum of elementary symmetric polynomials," We introduce a new problem on the elementary symmetric polynomials +$\sigma_k$, stemming from the constraint equations of some modified gravity +theory. For which coefficients is a linear combination of $\sigma_k$ +$1/p$-concave, with $0 \leq k \leq p$? We establish connections between the +$1/p$-concavity and the real-rootedness of some polynomials built on the +coefficients. We conjecture that if the restriction of the linear combination +to the positive diagonal is a real-rooted polynomial, then the linear +combination is $1/p$-concave. Using the theory of hyperbolic polynomials, we +show that this would be implied by a short algebraic statement: if the +polynomials $P$ and $Q$ of degree $n$ are real-rooted, then $\sum_{k=0}^n +P^{(k)}Q^{(n-k)}$ is real-rooted as well. This is not proven yet. We conjecture +more generally that the global $1/p$-concavity is equivalent to +$1/p$-concavity on the positive diagonal. We prove all our conjectures for $p=2$. +The way is open for further developments. +" +MIT Autonomous Vehicle Technology Study: Large-Scale Deep Learning Based Analysis of Driver Behavior and Interaction with Automation," For the foreseeable future, human beings will likely remain an integral part +of the driving task, monitoring the AI system as it performs anywhere from just +over 0% to just under 100% of the driving. 
The governing objectives of the MIT +Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale +real-world driving data collection that includes high-definition video to fuel +the development of deep learning based internal and external perception +systems, (2) gain a holistic understanding of how human beings interact with +vehicle automation technology by integrating video data with vehicle state +data, driver characteristics, mental models, and self-reported experiences with +technology, and (3) identify how technology and other factors related to +automation adoption and use can be improved in ways that save lives. In +pursuing these objectives, we have instrumented 21 Tesla Model S and Model X +vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 +vehicles for both long-term (over a year per driver) and medium term (one month +per driver) naturalistic driving data collection. Furthermore, we are +continually developing new methods for analysis of the massive-scale dataset +collected from the instrumented vehicle fleet. The recorded data streams +include IMU, GPS, CAN messages, and high-definition video streams of the driver +face, the driver cabin, the forward roadway, and the instrument cluster (on +select vehicles). The study is on-going and growing. To date, we have 99 +participants, 11,846 days of participation, 405,807 miles, and 5.5 billion +video frames. This paper presents the design of the study, the data collection +hardware, the processing of the data, and the computer vision algorithms +currently being used to extract actionable knowledge from the data. +" +Propagation estimates in the one-commutator theory," In the abstract framework of Mourre theory, the propagation of states is +understood in terms of a conjugate operator $A$. A powerful estimate has long +been known for Hamiltonians having a good regularity with respect to $A$ thanks +to the limiting absorption principle (LAP). 
We study the case where $H$ has +less regularity with respect to $A$, specifically in a situation where the LAP +and the absence of singularly continuous spectrum have not yet been +established. We show that in this case the spectral measure of $H$ is a +Rajchman measure and we derive some propagation estimates. One estimate is an +application of minimal escape velocities, while the other estimate relies on an +improved version of the RAGE formula. Based on several examples, including +continuous and discrete Schrödinger operators, it appears that the latter +propagation estimate is a new result for multi-dimensional Hamiltonians. +" +Collocation Methods for Exploring Perturbations in Linear Stability Analysis," Eigenvalue analysis is a well-established tool for stability analysis of +dynamical systems. However, there are situations where eigenvalues miss some +important features of physical models. For example, in models of incompressible +fluid dynamics, there are examples where linear stability analysis predicts +stability but transient simulations exhibit significant growth of infinitesimal +perturbations. This behavior can be predicted by pseudo-spectral analysis. In +this study, we show that an approach similar to pseudo-spectral analysis can be +performed inexpensively using stochastic collocation methods and the results +can be used to provide quantitative information about instability. In addition, +we demonstrate that the results of the perturbation analysis provide insight +into the behavior of unsteady flow simulations. +" +Outlier Robust Online Learning," We consider the problem of learning from noisy data in practical settings +where the size of data is too large to store on a single machine. More +challenging, the data coming from the wild may contain malicious outliers. To +address the scalability and robustness issues, we present an online robust +learning (ORL) approach. 
ORL is simple to implement and has provable robustness +guarantees -- in stark contrast to existing online learning approaches that are +generally fragile to outliers. We specialize the ORL approach for two concrete +cases: online robust principal component analysis and online linear regression. +We demonstrate the efficiency and robustness advantages of ORL through +comprehensive simulations and predicting image tags on a large-scale data set. +We also discuss the extension of ORL to distributed learning and provide +experimental evaluations. +" +Sparsity Regularization and feature selection in large dimensional data," Feature selection has evolved to be an important step in several machine +learning paradigms. Especially in the domains of bio-informatics and text +classification, which involve high-dimensional data, feature selection can +help in drastically reducing the feature space. In cases where it is difficult +or infeasible to obtain a sufficient number of training examples, feature +selection helps overcome the curse of dimensionality, which in turn helps +improve the performance of the classification algorithm. The focus of our research +here is on five embedded feature selection methods which use either ridge +regression, Lasso regression, or a combination of the two in the +regularization part of the optimization function. We evaluate the five chosen +methods on five large dimensional datasets and compare them on the parameters +of sparsity and correlation in the datasets and their execution times. +" +The Diverse Cohort Selection Problem," How should a firm allocate its limited interviewing resources to select the +optimal cohort of new employees from a large set of job applicants? How should +that firm allocate cheap but noisy resume screenings and expensive but in-depth +in-person interviews? 
We view this problem through the lens of combinatorial +pure exploration (CPE) in the multi-armed bandit setting, where a central +learning agent performs costly exploration of a set of arms before selecting a +final subset with some combinatorial structure. We generalize a recent CPE +algorithm to the setting where arm pulls can have different costs and return +different levels of information, and prove theoretical upper bounds for a +general class of arm-pulling strategies in this new setting. We then apply our +general algorithm to a real-world problem with combinatorial structure: +incorporating diversity into university admissions. We take real data from +admissions at one of the largest US-based computer science graduate programs +and show that a simulation of our algorithm produces more diverse student +cohorts at low cost to individual student quality, spending a comparable budget +to the current admissions process at that university. +" +Recurrent fast radio bursts from collisions of neutron stars in the evolved stellar clusters," We propose a model describing the observed multiple fast radio bursts as due +to close encounters and collisions of neutron stars in the central clusters +of evolved galactic nuclei. The neutron star cluster subsystem may +originate in a dense galactic nucleus through the combined processes +of stellar and dynamical evolution. The neutron stars in the compact cluster +can form short-lived binaries with highly eccentric orbits, which finally +collide after several orbital revolutions. Fast radio bursts may be +produced during the close periastron approach and in the process of the final +binary merging. In a sufficiently dense star cluster, neutron star +collisions can be very frequent. Therefore, this model can in principle explain +the observed recurrent (multiple) fast radio bursts, analogous to the observed +ones from the source FRB 121102. 
A possible observational signature of +the proposed model would be the detection of gravitational wave bursts by +the laser interferometers LIGO/VIRGO or by the next generation of gravitational +wave detectors. +" +Emotion Recognition From Speech With Recurrent Neural Networks," In this paper the task of emotion recognition from speech is considered. +The proposed approach uses a deep recurrent neural network trained on a sequence of +acoustic features calculated over small speech intervals. At the same time, a +special probabilistic CTC loss function makes it possible to consider long +utterances containing both emotional and neutral parts. The effectiveness of +this approach is shown in two ways. First, a comparison with recent +advances in this field is carried out. Second, human performance on the same +task is measured. Both criteria show the high quality of the proposed method. +" +Intra- and intercycle interference of angle-resolved electron emission in laser assisted XUV atomic ionization," A theoretical study of ionization of the hydrogen atom due to an XUV pulse in +the presence of an IR laser is presented. Well-established theories are usually +used to describe the problem of the laser-assisted photoelectric effect. However, +the well-known soft-photon approximation first posed by Maquet et al. in +Journal of Modern Optics 54, 1847 (2007) and Kazansky's theory in Phys. Rev. A +82, 033420 (2010) completely fail to predict the electron emission +perpendicular to the polarization direction. Making use of a simple +semiclassical model, we study the angle-resolved energy distribution of +photoelectrons for the case that both fields are linearly polarized in the same +direction. 
We thoroughly analyze and characterize two different emission +regions in the angle-energy domain: (i) the parallel-like region, with a +contribution of two classical trajectories per optical cycle, and (ii) the +perpendicular-like region, with a contribution of four classical trajectories per +optical cycle. We show that our semiclassical model is able to assess the +interference patterns of the angle-resolved photoelectron spectrum in the two +different mentioned regions. Electron trajectories stemming from different +optical laser cycles give rise to angle-independent intercycle interference +known as sidebands. These sidebands are modulated by an angle-dependent +coarse-grained structure coming from the intracycle interference of the +electron trajectories born during the same optical cycle. We show the accuracy +of our semiclassical model as a function of the time delay between the IR and +the XUV pulses and also as a function of the laser intensity by comparing the +semiclassical predictions of the angle-resolved photoelectron spectrum with the +continuum-distorted wave strong field approximation and the ab initio solution +of the time dependent Schrödinger equation. +" +Inferring transport characteristics in a fractured rock aquifer by combining single-hole GPR reflection monitoring and tracer test data," Investigations of solute transport in fractured rock aquifers often rely on +tracer test data acquired at a limited number of observation points. Such data +do not, by themselves, allow detailed assessments of the spreading of the +injected tracer plume. To better understand the transport behavior in a +granitic aquifer, we combine tracer test data with single-hole +ground-penetrating radar (GPR) reflection monitoring data. Five successful +tracer tests were performed under various experimental conditions between two +boreholes 6 m apart. 
For each experiment, saline tracer was injected into a +previously identified packed-off transmissive fracture while repeatedly +acquiring single-hole GPR reflection profiles together with electrical +conductivity logs in the pumping borehole. By analyzing depth-migrated GPR +difference images together with tracer breakthrough curves and associated +simplified flow and transport modeling, we estimate (1) the number, the +connectivity, and the geometry of fractures that contribute to tracer +transport, (2) the velocity and the mass of tracer that was carried along each +flow path, and (3) the effective transport parameters of the identified flow +paths. We find a qualitative agreement when comparing the time evolution of GPR +reflectivity strengths at strategic locations in the formation with those +arising from simulated transport. The discrepancies are on the same order as +those between observed and simulated breakthrough curves at the outflow +locations. The rather subtle and repeatable GPR signals provide useful and +complementary information to tracer test data acquired at the outflow locations +and may help us to characterize transport phenomena in fractured rock aquifers. +" +Robust Audio Watermarking Algorithm Based on Moving Average and DCT," Noise is often brought to host audio by common signal processing operations, +and it usually changes the high-frequency component of an audio signal. So +embedding the watermark by adjusting low-frequency coefficients can improve the +robustness of a watermarking scheme. A moving average sequence is a low-frequency +feature of an audio signal. This work proposes a method that embeds the +watermark into the maximal coefficient in the discrete cosine transform domain of a +moving average sequence. Subjective and objective tests reveal that the +proposed watermarking scheme maintains high audio quality and, +simultaneously, is highly robust to common digital signal +processing operations, including additive noise, sampling rate change, bit +resolution transformation, MP3 compression, and random cropping, especially +low-pass filtering. +" +Resource-Efficient Common Randomness and Secret-Key Schemes," We study common randomness where two parties have access to i.i.d. samples +from a known random source, and wish to generate a shared random key using +limited (or no) communication with the largest possible probability of +agreement. This problem is at the core of secret key generation in +cryptography, with connections to communication under uncertainty and locality +sensitive hashing. We take the approach of treating correlated sources as a +critical resource, and ask whether common randomness can be generated +resource-efficiently. +We consider two notable sources in this setup arising from correlated bits +and correlated Gaussians. We design the first explicit schemes that use only a +polynomial number of samples (in the key length) so that the players can +generate shared keys that agree with constant probability using optimal +communication. The best previously known schemes were both non-constructive and +used an exponential number of samples. In the amortized setting, we +characterize the largest achievable ratio of key length to communication in +terms of the external and internal information costs, two well-studied +quantities in theoretical computer science. In the relaxed setting where the +two parties merely wish to improve the correlation between the generated keys +of length $k$, we show that there are no interactive protocols using $o(k)$ +bits of communication having agreement probability even as small as +$2^{-o(k)}$. 
For the related communication problem where the players wish to +compute a joint function $f$ of their inputs using i.i.d. samples from a known +source, we give a zero-communication protocol using $2^{O(c)}$ bits where $c$ +is the interactive randomized public-coin communication complexity of $f$. This +matches the lower bound shown previously while the best previously known upper +bound was doubly exponential in $c$. +" +Exploring Outliers in Crowdsourced Ranking for QoE," Outlier detection is a crucial part of robust evaluation for crowdsourceable +assessment of Quality of Experience (QoE) and has attracted much attention in +recent years. In this paper, we propose some simple and fast algorithms for +outlier detection and robust QoE evaluation based on the nonconvex optimization +principle. Several iterative procedures are designed with or without knowing +the number of outliers in samples. Theoretical analysis is given to show that +such procedures can reach statistically good estimates under mild conditions. +Finally, experimental results with simulated and real-world crowdsourcing +datasets show that the proposed algorithms could produce similar performance to +Huber-LASSO approach in robust ranking, yet with nearly 8 or 90 times speed-up, +without or with a prior knowledge on the sparsity size of outliers, +respectively. Therefore the proposed methodology provides us a set of helpful +tools for robust QoE evaluation with crowdsourcing data. +" +The distribution of $G$-Weyl CM fields and the Colmez conjecture," Assuming a weak form of the upper bound in Malle's conjecture, we prove that +the Colmez conjecture is true for $100\%$ of CM fields of any fixed degree, +when ordered by discriminant. This weak form of the upper bound in Malle's +conjecture is known in many cases, which allows us to produce infinitely many +density-one families of non-abelian CM fields which satisfy the Colmez +conjecture. 
+" +Improving Grey-Box Fuzzing by Modeling Program Behavior," Grey-box fuzzers such as American Fuzzy Lop (AFL) are popular tools for +finding bugs and potential vulnerabilities in programs. While these fuzzers +have been able to find vulnerabilities in many widely used programs, they are +not efficient; of the millions of inputs executed by AFL in a typical fuzzing +run, only a handful discover unseen behavior or trigger a crash. The remaining +inputs are redundant, exhibiting behavior that has already been observed. Here, +we present an approach to increase the efficiency of fuzzers like AFL by +applying machine learning to directly model how programs behave. We learn a +forward prediction model that maps program inputs to execution traces, training +on the thousands of inputs collected during standard fuzzing. This learned +model guides exploration by focusing on fuzzing inputs on which our model is +the most uncertain (measured via the entropy of the predicted execution trace +distribution). By focusing on executing inputs our learned model is unsure +about, and ignoring any input whose behavior our model is certain about, we +show that we can significantly limit wasteful execution. Through testing our +approach on a set of binaries released as part of the DARPA Cyber Grand +Challenge, we show that our approach is able to find a set of inputs that +result in more code coverage and discovered crashes than baseline fuzzers with +significantly fewer executions. +" +A Convex Reconstruction Model for X-ray Tomographic Imaging with Uncertain Flat-fields," Classical methods for X-ray computed tomography are based on the assumption +that the X-ray source intensity is known, but in practice, the intensity is +measured and hence uncertain. Under normal operating conditions, when the +exposure time is sufficiently high, this kind of uncertainty typically has a +negligible effect on the reconstruction quality. 
However, in time- or +dose-limited applications such as dynamic CT, this uncertainty may cause severe +and systematic artifacts known as ring artifacts. By carefully modeling the +measurement process and by taking uncertainties into account, we derive a new +convex model that leads to improved reconstructions despite poor-quality +measurements. We demonstrate the effectiveness of the methodology on +simulated and real data sets. +" +Mathematical Backdoors in Symmetric Encryption Systems - Proposal for a Backdoored AES-like Block Cipher," Recent years have shown that, more than ever, governments and intelligence +agencies try to control and bypass the cryptographic means used for the +protection of data. Backdooring encryption algorithms is considered the best +way to enforce cryptographic control. Until now, only implementation backdoors +(at the protocol/implementation/management level) have generally been considered. In +this paper we propose to address the most critical issue of backdoors: +mathematical backdoors, or by-design backdoors, which are inserted directly into the +mathematical design of the encryption algorithm. While the algorithm may be +totally public, proving that there is a backdoor, identifying it, and exploiting +it may be an intractable problem. We intend to explain that it is probably +possible to design and insert such backdoors. Considering a particular family +(among all the possible ones), we present BEA-1, a block cipher algorithm which +is similar to the AES and which contains a mathematical backdoor enabling an +operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block +size, 120-bit key, 11 rounds) is designed to resist linear and differential +cryptanalysis. A challenge will be proposed to the cryptography community soon. +Its aim is to assess whether our backdoor is easily detectable and exploitable +or not. +" +Kinetostatic analysis and solution classification of a class of planar tensegrity mechanisms," Tensegrity mechanisms are composed of rigid and tensile parts that are in +equilibrium. They are interesting alternative designs for some applications, +such as modelling musculoskeletal systems. Tensegrity mechanisms are more +difficult to analyze than classical mechanisms, as the static equilibrium +conditions that must be satisfied generally result in complex equations. A +class of planar one-degree-of-freedom tensegrity mechanisms with three linear +springs is analyzed in detail for the sake of systematic solution +classification. The kinetostatic equations are derived and solved under +several loading and geometric conditions. It is shown that these mechanisms +exhibit up to six equilibrium configurations, of which one or two are stable, +depending on the geometric and loading conditions. Discriminant varieties and +cylindrical algebraic decomposition combined with Groebner basis elimination are +used to classify solutions as a function of the geometric, loading, and actuator +input parameters. +" +A Framework to Support the Trust Process in News and Social Media," Current society is heavily influenced by the spread of online information, +containing all sorts of claims, commonly found in news stories, tweets, and +social media postings. Depending on the user, they may be considered ""true"" or +""false"", according to the agent's trust in the claim. In this paper, we discuss +the concepts of content trust and the trust process, and propose a framework to +describe the trust process, which can support various possible models of +content trust. +" +Optimal control of Rydberg lattice gases," We present optimal control protocols to prepare different many-body quantum +states of Rydberg atoms in optical lattices. 
Specifically, we show how to +prepare highly ordered many-body ground states, GHZ states as well as some +superposition of symmetric excitation number Fock states, that inherit the +translational symmetry from the Hamiltonian, within sufficiently short +excitation times minimizing detrimental decoherence effects. For the GHZ +states, we propose a two-step detection protocol to experimentally verify the +optimal preparation of the target state based only on standard measurement +techniques. Realistic experimental constraints and imperfections are taken into +account by our optimization procedure making it applicable to ongoing +experiments. +" +Sharp resolvent bounds and resonance-free regions," In this note, we consider semiclassical scattering on a manifold which is +Euclidean near infinity or asymptotically hyperbolic. We show that, if the +cut-off resolvent satisfies polynomial estimates in a strip of size $O(h |\log +h|^{-\alpha})$ below the real axis, for some $\alpha\geq 0$, then the cut-off +resolvent is actually bounded by $O(|\log h|^{\alpha+1} h^{-1})$ in this strip. +As an application, we improve slightly the estimates on the real axis given by +Bourgain and Dyatlov in the case of convex co-compact surfaces. +" +Automatic estimation of harmonic tension by distributed representation of chords," The buildup and release of a sense of tension is one of the most essential +aspects of the process of listening to music. A veridical computational model +of perceived musical tension would be an important ingredient for many music +informatics applications. The present paper presents a new approach to +modelling harmonic tension based on a distributed representation of chords. The +starting hypothesis is that harmonic tension as perceived by human listeners is +related, among other things, to the expectedness of harmonic units (chords) in +their local harmonic context. 
We train a word2vec-type neural network to learn +a vector space that captures contextual similarity and expectedness, and define +a quantitative measure of harmonic tension on top of this. To assess the +veridicality of the model, we compare its outputs on a number of well-defined +chord classes and cadential contexts to results from pertinent empirical +studies in music psychology. Statistical analysis shows that the model's +predictions conform very well with empirical evidence obtained from human +listeners. +" +Online Natural Gradient as a Kalman Filter," We cast Amari's natural gradient in statistical learning as a specific case +of Kalman filtering. Namely, applying an extended Kalman filter to estimate a +fixed unknown parameter of a probabilistic model from a series of observations, +is rigorously equivalent to estimating this parameter via an online stochastic +natural gradient descent on the log-likelihood of the observations. +In the i.i.d. case, this relation is a consequence of the ""information +filter"" phrasing of the extended Kalman filter. In the recurrent (state space, +non-i.i.d.) case, we prove that the joint Kalman filter over states and +parameters is a natural gradient on top of real-time recurrent learning (RTRL), +a classical algorithm to train recurrent models. +This exact algebraic correspondence provides relevant interpretations for +natural gradient hyperparameters such as learning rates or initialization and +regularization of the Fisher information matrix. +" +Cyclic period oscillation of the eclipsing dwarf nova DV UMa," DV UMa is an eclipsing dwarf nova with an orbital period of $\sim2.06$ h, +which lies just at the bottom edge of the period gap. To detect its orbital +period changes we present 12 new mid-eclipse times by using our CCD photometric +data and archival data. Combining with the published mid-eclipse times in +quiescence, spanning $\sim30$ yr, the latest version of the $O-C$ diagram was +obtained and analyzed. 
The best fit to those available eclipse timings shows
+that the orbital period of DV UMa is undergoing a cyclic oscillation with a
+period of $17.58(\pm0.52)$ yr and an amplitude of $71.1(\pm6.7)$ s. The
+periodic variation most likely arises from the light-travel-time effect via the
+presence of a circumbinary object because the required energy to drive the
+Applegate mechanism is too high in this system. The mass of the unseen
+companion was derived as $M_{3}\sin{i'}=0.025(\pm0.004)M_{\odot}$. If the third
+body is in the orbital plane (i.e. $i'=i=82.9^{\circ}$) of the eclipsing pair,
+it would correspond to a brown dwarf. This hypothetical brown dwarf is orbiting
+its host star at a separation of $\sim8.6$ AU in an eccentric orbit ($e=0.44$).
+"
Shallow-water models for a vibrating fluid," We consider a layer of an inviscid fluid with a free surface which is subject
+to vertical high-frequency vibrations. We derive three asymptotic systems of
+equations that describe slowly evolving (in comparison with the vibration
+frequency) free-surface waves. The first set of equations is obtained without
+assuming that the waves are long. These equations are as difficult to solve as
+the exact equations for irrotational water waves in a non-vibrating fluid. The
+other two models describe long waves. These models are obtained under two
+different assumptions about the amplitude of the vibration. Surprisingly, the
+governing equations have exactly the same form in both cases (up to
+interpretation of some constants). These equations reduce to the standard
+dispersionless shallow-water equations if the vibration is absent, and the
+vibration manifests itself via an additional term which makes the equations
+dispersive and, for small-amplitude waves, is similar to the term that would
+appear if surface tension were taken into account. 
We show that our dispersive
+shallow water equations have both solitary and periodic travelling wave
+solutions and discuss an analogy between these solutions and travelling
+capillary-gravity waves in a non-vibrating fluid.
+"
Electrically Tunable Optical Nonlinearities in Graphene-Covered SiN Waveguides Characterized by Four-Wave Mixing," We present a degenerate four-wave mixing experiment on a silicon nitride
+(SiN) waveguide covered with gated graphene. We observe strong dependencies on
+signal-pump detuning and Fermi energy, i.e. the optical nonlinearity is
+demonstrated to be electrically tunable. In the vicinity of the interband
+absorption edge ($2|E_F|\approx \hbar\omega$) a peak value of the waveguide
+nonlinear parameter of $\approx$ 6400 m$^{-1}$W$^{-1}$, corresponding to a
+graphene nonlinear sheet conductivity $|\sigma_s^{(3)}|\approx4.3\cdot
+10^{-19}$ A m$^2$V$^{-3}$, is measured.
+"
Binary superlattice design by controlling DNA-mediated interactions," Most binary superlattices created using DNA functionalization or other
+approaches rely on particle size differences to achieve compositional order and
+structural diversity. Here we study two-dimensional (2D) assembly of
+DNA-functionalized micron-sized particles (DFPs), and employ a strategy that
+leverages the tunable disparity in interparticle interactions, and thus
+enthalpic driving forces, to open new avenues for design of binary
+superlattices that do not rely on the ability to tune particle size (i.e.,
+entropic driving forces). Our strategy employs tailored blends of complementary
+strands of ssDNA to control interparticle interactions between micron-sized
+silica particles in a binary mixture to create compositionally diverse 2D
+lattices. We show that the particle arrangement can be further controlled by
+changing the stoichiometry of the binary mixture in certain cases. 
With this
+approach, we demonstrate the ability to program the particle assembly into
+square, pentagonal, and hexagonal lattices. In addition, different particle
+types can be compositionally ordered in square checkerboard and hexagonal -
+alternating string, honeycomb, and Kagome arrangements.
+"
"A New Use of Douglas-Rachford Splitting and ADMM for Identifying Infeasible, Unbounded, and Pathological Conic Programs"," In this paper, we present a method for identifying infeasible, unbounded, and
+pathological conic programs based on Douglas-Rachford splitting, or
+equivalently ADMM. When an optimization program is infeasible, unbounded, or
+pathological, the iterates of Douglas-Rachford splitting diverge. Somewhat
+surprisingly, such divergent iterates still provide useful information, which
+our method uses for identification. In addition, for strongly infeasible
+problems the method produces a separating hyperplane and informs the user on
+how to minimally modify the given problem to achieve strong feasibility. As a
+first-order method, the proposed algorithm relies on simple subroutines, and
+therefore is simple to implement and has low per-iteration cost.
+"
A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics," We study the Stochastic Gradient Langevin Dynamics (SGLD) algorithm for
+non-convex optimization. The algorithm performs stochastic gradient descent,
+where in each step it injects appropriately scaled Gaussian noise to the
+update. We analyze the algorithm's hitting time to an arbitrary subset of the
+parameter space. Two results follow from our general theory: First, we prove
+that for empirical risk minimization, if the empirical risk is point-wise close
+to the (smooth) population risk, then the algorithm achieves an approximate
+local minimum of the population risk in polynomial time, escaping suboptimal
+local minima that only exist in the empirical risk. 
Second, we show that SGLD
+improves on one of the best known learnability results for learning linear
+classifiers under the zero-one loss.
+"
Shallow Water Dynamics on Linear Shear Flows and Plane Beaches," Long waves in shallow water propagating over a background shear flow towards
+a sloping beach are investigated. The classical shallow-water equations
+are extended to incorporate both a background shear flow and a linear beach
+profile, resulting in a non-reducible hyperbolic system. Nevertheless, it is
+shown how several changes of variables based on the hodograph transform may be
+used to transform the system into a linear equation which may be solved exactly
+using the method of separation of variables. This method can be used to
+investigate the run-up of a long wave on a planar beach including the
+development of the shoreline.
+"
p-Multigrid matrix-free discontinuous Galerkin solution strategies for the under-resolved simulation of incompressible turbulent flows," In recent years, several research efforts have focused on the development of
+high-order discontinuous Galerkin (dG) methods for scale resolving simulations
+of turbulent flows. Nevertheless, in the context of incompressible flow
+computations, the computational expense of solving large scale equation systems
+characterized by indefinite Jacobian matrices has often prevented such methods
+from being applied to industrially-relevant computations. In this work we seek
+to improve the efficiency of Rosenbrock-type linearly-implicit Runge-Kutta
+methods by devising robust, scalable and memory-lean solution strategies. In
+particular, we combined p-multigrid preconditioners with matrix-free Krylov
+iterative solvers: the p-multigrid preconditioner relies on specifically
+crafted rescaled-inherited coarse operators and cheap block-diagonal smoother's
+preconditioners to obtain satisfactory convergence rates and a low memory
+occupation. Extensive numerical validation is performed. 
The rescaled-inherited
+p-multigrid algorithm for the BR2 dG discretization is first validated by
+solving Poisson problems. The Rosenbrock formulation is then applied to test
+cases of growing complexity: the laminar unsteady flow around a two-dimensional
+cylinder at Re=200 and around a sphere at Re=300, and finally the transitional
+T3L1 flow problem of the ERCOFTAC test case suite with different levels of
+free-stream turbulence. For the latter, good agreement with experimental data
+is documented; moreover, strong memory savings and execution time gains with
+respect to state-of-the-art preconditioners are reported.
+"
Internet - assisted risk assessment of infectious diseases in women sexual and reproductive health," We develop open source infection risk calculators for patients and healthcare
+professionals as apps for hospital acquired infections (during child-delivery)
+and sexually transmitted infections (like HIV). Advanced versions of ehealth in
+non-communicable diseases do not apply much to epidemiology. There is, however,
+no infection risk calculator on the Polish Internet so far, despite the
+existence of data that may be applied to create such a tool.
+The algorithms involve data from Information Systems (like HIS in hospitals)
+and surveys by applying mathematical modelling, Bayesian inference, logistic
+regressions, covariance analysis and social network analysis. Finally, the user
+may fill in or import data from an Information System to obtain a risk
+assessment and test different settings to learn the overall risk.
+The most promising risk calculator is developed for Healthcare-associated
+infections in modes for patient hospital sanitary inspection. The most extended
+version for hospital epidemiologists may include many layers of hospital
+interactions by agent-based modeling. 
A simplified version of the calculator is
+dedicated to patients and requires a personalized hospitalization history of
+pregnancy described by questions represented by quantitative and qualitative
+variables. Patients receive a risk assessment from an interactive web
+application with an additional description of modifiable risk factors.
+We also provide a solution for sexually transmitted infections like HIV. The
+results of calculations, with a meaningful description and percentage chances,
+are presented in real-time to interested users. Finally, the user fills in the
+form to obtain a risk assessment for the given settings.
+"
Neural Program Meta-Induction," Most recently proposed methods for Neural Program Induction work under the
+assumption of having a large set of input/output (I/O) examples for learning
+any underlying input-output mapping. This paper aims to address the problem of
+data and computation efficiency of program induction by leveraging information
+from related tasks. Specifically, we propose two approaches for cross-task
+knowledge transfer to improve program induction in limited-data scenarios. In
+our first proposal, portfolio adaptation, a set of induction models is
+pretrained on a set of related tasks, and the best model is adapted towards the
+new task using transfer learning. In our second approach, meta program
+induction, a $k$-shot learning approach is used to make a model generalize to
+new tasks without additional training. To test the efficacy of our methods, we
+constructed a new benchmark of programs written in the Karel programming
+language. Using an extensive experimental evaluation on the Karel benchmark, we
+demonstrate that our proposals dramatically outperform the baseline induction
+method that does not use knowledge transfer. We also analyze the relative
+performance of the two approaches and study conditions in which they perform
+best. 
In particular, meta induction outperforms all existing approaches under
+extreme data sparsity (when a very small number of examples are available),
+i.e., fewer than ten. As the number of available I/O examples increases (i.e. a
+thousand or more), portfolio adapted program induction becomes the best
+approach. For intermediate data sizes, we demonstrate that the combined method
+of adapted meta program induction has the strongest performance.
+"
Model Selection for Anomaly Detection," Anomaly detection based on one-class classification algorithms is broadly
+used in many applied domains like image processing (e.g. detection of whether a
+patient is ""cancerous"" or ""healthy"" from a mammography image), network
+intrusion detection, etc. The performance of an anomaly detection algorithm
+crucially depends on a kernel, used to measure similarity in a feature space.
+The standard approaches (e.g. cross-validation) for kernel selection, used in
+two-class classification problems, cannot be used directly due to the specific
+nature of the data (the absence of data from a second, abnormal, class). In
+this paper we generalize several kernel selection methods from the binary-class
+case to the case of one-class classification and perform an extensive
+comparison of these approaches using both synthetic and real-world data.
+"
Lines of descent in the deterministic mutation-selection model with pairwise interaction," We consider the mutation-selection differential equation with pairwise
+interaction and establish the corresponding ancestral process, which is a
+specific random tree and a variant of the ancestral selection graph. To make
+this object tractable, we prune the tree upon mutation, thus reducing it to its
+informative parts. The hierarchies inherent in the tree are encoded
+systematically via ternary trees with weighted leaves; this leads to the
+stratified ancestral selection graph. 
The latter is dual to the
+mutation-selection equation and provides a stochastic representation of its
+solution. It also allows one to reveal the genealogical structures inherent in
+the bifurcations of the equilibria of the differential equation. Furthermore,
+we establish constructions, again based on the stratified ancestral selection
+graph, that allow one to trace back the ancestral lines into the distant past
+and obtain explicit results about the ancestral population in the case of
+unidirectional mutation.
+"
"On the challenges of learning with inference networks on sparse, high-dimensional data"," We study parameter estimation in Nonlinear Factor Analysis (NFA) where the
+generative model is parameterized by a deep neural network. Recent work has
+focused on learning such models using inference (or recognition) networks; we
+identify a crucial problem when modeling large, sparse, high-dimensional
+datasets -- underfitting. We study the extent of underfitting, highlighting
+that its severity increases with the sparsity of the data. We propose methods
+to tackle it via iterative optimization inspired by stochastic variational
+inference \citep{hoffman2013stochastic} and improvements in the sparse data
+representation used for inference. The proposed techniques drastically improve
+the ability of these powerful models to fit sparse data, achieving
+state-of-the-art results on a benchmark text-count dataset and excellent
+results on the task of top-N recommendation.
+"
Circumcenter extension of Moebius maps to CAT(-1) spaces," Given a Moebius homeomorphism $f : \partial X \to \partial Y$ between
+boundaries of proper, geodesically complete CAT(-1) spaces $X,Y$, we describe
+an extension $\hat{f} : X \to Y$ of $f$, called the circumcenter map of $f$,
+which is constructed using circumcenters of expanding sets. 
The extension
+$\hat{f}$ is shown to coincide with the $(1, \log 2)$-quasi-isometric extension
+constructed in [biswas3], and is locally $1/2$-Hölder continuous. When $X,Y$
+are complete, simply connected manifolds with sectional curvatures $K$
+satisfying $-b^2 \leq K \leq -1$ for some $b \geq 1$, then the extension
+$\hat{f} : X \to Y$ is a $(1, (1 - \frac{1}{b})\log 2)$-quasi-isometry.
+The circumcenter extension of Moebius maps is natural with respect to
+composition with isometries.
+"
A Comparison Study of Two High Accuracy Numerical Methods for a Parabolic System in Air Pollution Modelling," We present two approaches for enhancing the accuracy of second order finite
+difference approximations of two-dimensional semilinear parabolic systems.
+These are the fourth order compact difference scheme and the fourth order
+scheme based on Richardson extrapolation. Our interest is concentrated on a
+system of ten parabolic partial differential equations in air pollution
+modeling. We analyze numerical experiments to compare the two approaches with
+respect to accuracy, computational complexity, non-negativity preservation,
+etc. A sixth-order approximation based on the fourth-order compact difference
+scheme combined with Richardson extrapolation is also discussed numerically.
+"
Transition from electromagnetically induced transparency to Autler-Townes splitting in cold cesium atoms," Electromagnetically induced transparency (EIT) and Autler-Townes splitting
+(ATS) are two similar yet distinct phenomena that modify the transmission of a
+weak probe field through an absorption medium in the presence of a coupling
+field, featured in a variety of three-level atomic systems. In many
+applications it is important to distinguish EIT from ATS. We present
+EIT and ATS spectra in a cold-atom three-level cascade system, involving the
+35$S_{1/2}$ Rydberg state of cesium. 
The EIT linewidth, $\gamma_{EIT}$, defined
+as the full width at half maximum (FWHM), and the ATS splitting,
+$\gamma_{ATS}$, defined as the peak-to-peak distance between AT peak pairs, are
+used to delineate the EIT and ATS regimes and to characterize the transition
+between the regimes. In the cold-atom medium, in the weak-coupler (EIT) regime
+$\gamma_{EIT} \approx A + B(\Omega_{c}^2 + \Omega_{p}^2)/\Gamma_{eg}$,
+where $\Omega_{c}$ and $\Omega_{p}$ are the coupler and probe Rabi frequencies,
+$\Gamma_{eg}$ is the spontaneous decay rate of the intermediate 6P$_{3/2}$
+level, and $A$ and $B$ are parameters that depend on the laser linewidth. We
+explore the transition into the strong-coupler (ATS) regime, which is
+characterized by the linear relation $\gamma_{ATS} \approx \Omega_{c}$. The
+experiments are in agreement with numerical solutions of the Master equation.
+"
SIMCom: Statistical Sniffing of Inter-Module Communications for Run-time Hardware Trojan Detection," Timely detection of Hardware Trojans (HT) has become a major challenge for
+secure integrated circuits. We present a run-time methodology for HT detection
+that employs a multi-parameter statistical traffic modeling of the
+communication channel in a given System-on-Chip (SoC). Towards this, it
+leverages the Hurst exponent, the standard deviation of the injection
+distribution and the hop distribution jointly to accurately identify HT-based
+online anomalies. At design time, our methodology employs a property
+specification language to define and embed assertions in the RTL, specifying
+the correct communication behavior of a given SoC. At runtime, it monitors the
+anomalies in the communication behavior by checking the execution patterns
+against these assertions. We evaluate our methodology for detecting HTs in
+MC8051 microcontrollers. 
The experimental results show that with the combined
+analysis of multiple statistical parameters, our methodology is able to detect
+all the benchmark Trojans (available on trust-hub) inserted in MC8051, which
+directly or indirectly affect the communication channels in the SoC.
+"
Sum of Square Proof for Brascamp-Lieb Type Inequality," The Brascamp-Lieb inequality is an important mathematical tool in analysis,
+geometry and information theory. There are various ways to prove the
+Brascamp-Lieb inequality, such as the heat flow method, Brownian motion and
+subadditivity of the entropy. While the Brascamp-Lieb inequality was originally
+stated in Euclidean space, it has also been discussed for discrete Abelian
+groups and for Markov semigroups.
+Many mathematical inequalities can be formulated as algebraic inequalities
+which assert that some given polynomial is nonnegative. In 1927, Artin proved
+that any non-negative polynomial can be represented as a sum of squares of
+rational functions, which can be further formulated as a polynomial certificate
+of the nonnegativity of the polynomial. This is a Sum of Square proof of the
+inequality. Take the degree of the polynomial certificate as the degree of the
+Sum of Square proof. The degree of a Sum of Square proof determines the
+complexity of generating such a proof by the Sum of Square algorithm, which is
+a powerful tool for optimization and computer-aided proof.
+In this paper, we give a Sum of Square proof for some special settings of the
+Brascamp-Lieb inequality and discuss some applications of the Brascamp-Lieb
+inequality on Abelian groups and the Euclidean sphere. If the original
+description of the inequality has constant degree and d is constant, the degree
+of the proof is also constant. Therefore, the low degree Sum of Square
+algorithm can fully capture the power of the low degree finite Brascamp-Lieb
+inequality. 
+
"
Robust Classification of Financial Risk," Algorithms are increasingly common components of high-impact decision-making,
+and a growing body of literature on adversarial examples in laboratory settings
+indicates that standard machine learning models are not robust. This suggests
+that real-world systems are also susceptible to manipulation or
+misclassification, which especially poses a challenge to machine learning
+models used in financial services. We use the loan grade classification problem
+to explore how machine learning models are sensitive to small changes in
+user-reported data, using adversarial attacks documented in the literature and
+an original, domain-specific attack. Our work shows that a robust optimization
+algorithm can build models for financial services that are resistant to
+misclassification on perturbations. To the best of our knowledge, this is the
+first study of adversarial attacks and defenses for deep learning in financial
+services.
+"
The deformed Hermitian-Yang-Mills equation in geometry and physics," We provide an introduction to the mathematics and physics of the deformed
+Hermitian-Yang-Mills equation, a fully nonlinear geometric PDE on Kahler
+manifolds which plays an important role in mirror symmetry. We discuss the
+physical origin of the equation, and some recent progress towards its solution.
+In dimension 3 we prove a new Chern number inequality and discuss the
+relationship with algebraic stability conditions.
+"
How deep learning works --The geometry of deep learning," Why and how deep learning works well on different tasks remains a
+mystery from a theoretical perspective. In this paper we draw a geometric
+picture of the deep learning system by finding its analogies with two existing
+geometric structures, the geometry of quantum computations and the geometry of
+diffeomorphic template matching. 
In this framework, we give the geometric
+structures of different deep learning systems including convolutional neural
+networks, residual networks, recursive neural networks, recurrent neural
+networks and the equilibrium propagation framework. We also analyze the
+relationship between the geometric structures and the performance of
+different networks at an algorithmic level, so that the geometric framework may
+guide the design of the structures and algorithms of deep learning systems.
+"
Goodness-of-Fit Tests for Random Partitions via Symmetric Polynomials," We consider goodness-of-fit tests with i.i.d. samples generated from a
+categorical distribution $(p_1,...,p_k)$. For a given $(q_1,...,q_k)$, we test
+the null hypothesis whether $p_j=q_{\pi(j)}$ for some label permutation $\pi$.
+The uncertainty of the label permutation implies that the null hypothesis is
+composite instead of being singular. In this paper, we construct a testing
+procedure using statistics that are defined as indefinite integrals of some
+symmetric polynomials. This method is aimed directly at the invariance of the
+problem, and avoids the need to match the unknown labels. The asymptotic
+distribution of the testing statistic is shown to be chi-squared, and its power
+is proved to be nearly optimal under a local alternative hypothesis. Various
+degenerate structures of the null hypothesis are carefully analyzed in the
+paper. A two-sample version of the test is also studied.
+"
Analysis of the non-Markovianity for electron transfer reactions in an oligothiophene-fullerene heterojunction," The non-Markovianity of the electron transfer in an oligothiophene-fullerene
+heterojunction described by a spin-boson model is analyzed using the time
+dependent decoherence canonical rates and the volume of accessible states in
+the Bloch sphere. 
The dynamical map of the reduced electronic system is
+computed by the hierarchical equations of motion methodology (HEOM), providing
+an exact dynamics. A transitory witness of non-Markovianity is linked to the
+bath dynamics analyzed from the HEOM auxiliary matrices. The signature of the
+collective bath mode detected from HEOM in each electronic state is compared
+with predictions of the effective mode extracted from the spectral density. We
+show that including this main reaction coordinate in a one-dimensional vibronic
+system coupled to a residual bath satisfactorily describes the electron
+transfer by a simple Markovian Redfield equation. Non-Markovianity is computed
+for three inter-fragment distances and compared with an a priori criterion
+based on the system and bath characteristic timescales.
+"
On Milne-Barbier-Unsöld relationships," This short review aims to clarify the origins of the so-called
+Eddington-Barbier relationships, which relate the emergent specific intensity
+and the flux to the photospheric source function at specific optical depths.
+Here we discuss the assumptions behind the original derivation of Barbier
+(1943). We also point to the fact that Milne had already formulated these two
+relations in 1921.
+"
Block-diagonal Hessian-free Optimization for Training Neural Networks," Second-order methods for neural network optimization have several advantages
+over methods based on first-order gradient descent, including better scaling to
+large mini-batch sizes and fewer updates needed for convergence. But they are
+rarely applied to deep learning in practice because of high computational cost
+and the need for model-dependent algorithmic variations. We introduce a variant
+of the Hessian-free method that leverages a block-diagonal approximation of the
+generalized Gauss-Newton matrix. 
Our method computes the curvature
+approximation matrix only for pairs of parameters from the same layer or block
+of the neural network and performs conjugate gradient updates independently for
+each block. Experiments on deep autoencoders, deep convolutional networks, and
+multilayer LSTMs demonstrate better convergence and generalization compared to
+the original Hessian-free approach and the Adam method.
+"
Steklov eigenvalues of submanifolds with prescribed boundary in Euclidean space," We obtain upper and lower bounds for Steklov eigenvalues of submanifolds with
+prescribed boundary in Euclidean space. A very general upper bound is proved,
+which depends only on the geometry of the fixed boundary and on the measure of
+the interior. Sharp lower bounds are given for hypersurfaces of revolution with
+connected boundary: we prove that each eigenvalue is uniquely minimized by the
+ball. We also observe that each surface of revolution with connected boundary
+is isospectral to the disk.
+"
Theoretical and Experimental Analysis of the Canadian Traveler Problem," Devising an optimal strategy for navigation in a partially observable
+environment is one of the key objectives in AI. One of the problems in this
+context is the Canadian Traveler Problem (CTP). CTP is a navigation problem
+where an agent is tasked to travel from source to target in a partially
+observable weighted graph, whose edges might be blocked with a certain
+probability, and such a blockage is observed only upon reaching one of the
+edge's end points. The goal is to find a strategy that minimizes the
+expected travel cost. The problem is known to be $\#$P-hard. In this work we
+study the CTP theoretically and empirically. First, we study the Dep-CTP, a CTP
+variant we introduce which assumes dependencies between the edges' statuses. We
+show that Dep-CTP is intractable, and further we analyze two of its subclasses
+on disjoint-path graphs. 
Second, we develop a general algorithm, Gen-PAO, that
+optimally solves the CTP. Gen-PAO is capable of solving two other types of CTP
+called Sensing-CTP and Expensive-Edges CTP. Since the CTP is intractable,
+Gen-PAO uses some pruning methods to reduce the search space for the optimal
+solution. We also define some variants of Gen-PAO, compare their performance
+and show some benefits of Gen-PAO over existing work.
+"
Ballpark Crowdsourcing: The Wisdom of Rough Group Comparisons," Crowdsourcing has become a popular method for collecting labeled training
+data. However, in many practical scenarios traditional labeling can be
+difficult for crowdworkers (for example, if the data is high-dimensional or
+unintuitive, or the labels are continuous).
+In this work, we develop a novel model for crowdsourcing that can complement
+standard practices by exploiting people's intuitions about groups and relations
+between them. We employ a recent machine learning setting, called Ballpark
+Learning, that can estimate individual labels given only coarse, aggregated
+signal over groups of data points. To address the important case of continuous
+labels, we extend the Ballpark setting (which focused on classification) to
+regression problems. We formulate the problem as a convex optimization problem
+and propose fast, simple methods with an innate robustness to outliers.
+We evaluate our methods on real-world datasets, demonstrating how useful
+constraints about groups can be harnessed from a crowd of non-experts. Our
+methods can rival supervised models trained on many true labels, and can obtain
+considerably better results from the crowd than a standard label-collection
+process (for a lower price). By collecting rough guesses on groups of instances
+and using machine learning to infer the individual labels, our lightweight
+framework is able to address core crowdsourcing challenges and train machine
+learning models in a cost-effective way. 
+" +"How a dissimilar-chain system is splitting. Quasi-static, subsonic and supersonic regimes"," We consider parallel splitting of a strip composed of two different chains. +As a waveguide, the dissimilar-chain structure radically differs from the +well-studied identical-chain system. It is characterized by three speeds of the +long waves, two for the separate chains, and one for the intact strip where the +chains are connected. Accordingly, there exist three ranges, the subsonic for +both chains, the intersonic and the supersonic. The speed in the latter range +is supersonic for the intact strip and at the same time, it is subsonic for the +separate higher-speed chain. This fact allows the splitting wave to propagate +in the strip supersonically. We derive steady-state analytical solutions and +find that the splitting can propagate steadily only in two of these speed +ranges, the subsonic and the supersonic, whereas the intersonic regime is +forbidden. In the case of considerable difference in the chain stiffness, the +lowest dynamic threshold corresponds to the supersonic regime. The peculiarity +of the supersonic mode is that the supersonic energy delivery channel, being +initially absent, is opening with the moving splitting point. Based on the +discrete and related continuous models we find which regime can be implemented +depending on the structure parameters and loading conditions. The analysis +allows us to represent the characteristics of such processes and to demonstrate +strengths and weaknesses of different formulations, quasi-static, dynamic, +discrete or continuous. Analytical solutions for steady-state regimes are +obtained and analyzed in detail. We find the force-speed relations and show the +difference between the static and dynamic thresholds. The parameters and energy +of waves radiated by the propagating splitting are determined. 
We calculate the
+strain distribution ahead of the transition point and check whether the
+steady-state solutions are admissible.
+"
+Room-temperature solid state quantum emitters in the telecom range," On-demand single photon emitters (SPEs) play a key role across a broad range
+of quantum technologies, including quantum computation, quantum simulation,
+quantum metrology and quantum communications. In quantum networks and quantum
+key distribution protocols, where photons are employed as flying qubits,
+telecom wavelength operation is preferred due to the reduced fibre loss.
+However, despite the tremendous efforts to develop various triggered SPE
+platforms, a robust source of triggered SPEs operating at room temperature and
+the telecom wavelength is still missing. Here we report a triggered, optically
+stable, room temperature solid state SPE operating at telecom wavelengths. The
+emitters exhibit high photon purity (~ 5% multiphoton events) and a record-high
+brightness of ~ 1.5 MHz. The emission is attributed to localized defects in a
+gallium nitride (GaN) crystal. The high performance SPEs embedded in a
+technologically mature semiconductor are promising for on-chip quantum
+simulators and practical quantum communication technologies.
+"
+Join-completions of ordered algebras," We present a systematic study of join-extensions and join-completions of
+ordered algebras, which naturally leads to a refined and simplified treatment
+of fundamental results and constructions in the theory of ordered structures
+ranging from properties of the Dedekind-MacNeille completion to the proof of
+the finite embeddability property for a number of varieties of ordered
+algebras.
+"
+SNR Wall for Cooperative Spectrum Sensing Using Generalized Energy Detector," Cognitive radio (CR) is a promising scheme to improve spectrum
+utilization. Spectrum sensing (SS) is one of the main tasks of CR. Cooperative
+spectrum sensing (CSS) is used in CR to improve detection capability. 
Due to
+its simplicity and low complexity, sensing based on energy detection, known as
+conventional energy detection (CED), is widely adopted. CED can be generalized
+by replacing the squaring operation on the amplitude of the received samples
+with an arbitrary positive power p, which is referred to as the generalized
+energy detector (GED). The performance of GED degrades when there exists noise
+uncertainty (NU). In this paper, we investigate the performance of CSS by
+considering NU when all the secondary users (SUs) employ GED. We
+derive the signal to noise ratio (SNR) wall for CSS for both hard and soft
+decision combining. All the derived expressions are validated using Monte Carlo
+(MC) simulations.
+"
+Automatic Breast Ultrasound Image Segmentation: A Survey," Breast cancer is one of the leading causes of cancer death among women
+worldwide. In clinical routine, automatic breast ultrasound (BUS) image
+segmentation is very challenging and essential for cancer diagnosis and
+treatment planning. Many BUS segmentation approaches have been studied in the
+last two decades, and have been proved to be effective on private datasets.
+Currently, the advancement of BUS image segmentation seems to meet its
+bottleneck. The improvement of the performance is increasingly challenging, and
+only a few new approaches were published in the last several years. It is
+time to look at the field by reviewing previous approaches comprehensively and
+to investigate the future directions. In this paper, we study the basic ideas,
+theories, pros and cons of the approaches, group them into categories, and
+extensively review each category in depth by discussing the principles,
+application issues, and advantages/disadvantages. 
+"
+Global stability and uniform persistence of the reaction-convection-diffusion cholera epidemic model," We study the global stability issue of the reaction-convection-diffusion
+cholera epidemic PDE model and show that the basic reproduction number serves
+as a threshold parameter that predicts whether cholera will persist or become
+globally extinct. Specifically, when the basic reproduction number is beneath
+one, we show that the disease-free equilibrium is globally attractive. On the
+other hand, when the basic reproduction number exceeds one, if the infectious
+hosts or the concentration of bacteria in the contaminated water are not
+initially identically zero, we prove the uniform persistence result and that
+there exists at least one positive steady state.
+"
+Learning Disordered Topological Phases by Statistical Recovery of Symmetry," In this letter, we apply an artificial neural network in a supervised manner
+to map out the quantum phase diagram of a disordered topological superconductor
+in class DIII. Given the disorder that keeps the discrete symmetries of the
+ensemble as a whole, translational symmetry, which is broken in the
+quasiparticle distribution individually, is recovered statistically by taking an
+ensemble average. By using this, we classify the phases by the artificial
+neural network that learned the quasiparticle distribution in the clean limit,
+and show that the result is totally consistent with the calculation by the
+transfer matrix method or the noncommutative geometry approach. If all three
+phases, namely the $\mathbb{Z}_2$, trivial, and the thermal metal phases appear
+in the clean limit, the machine can classify them with high confidence over the
+entire phase diagram. If only the former two phases are present, we find that
+the machine remains confused in a certain region, leading us to conclude the
+detection of an unknown phase, which is eventually identified as the thermal
+metal phase. 
In our method, only the first moment of the quasiparticle
+distribution is used for input, but application to a wider variety of systems
+is expected by the inclusion of higher moments.
+"
+Stability of iterated function systems on the circle," We prove that any Iterated Function System of circle homeomorphisms, at
+least one of which has a dense orbit, is asymptotically stable. The
+corresponding Perron-Frobenius operator is shown to satisfy the e-property,
+that is, for any continuous function its iterates are equicontinuous. The
+Strong Law of Large Numbers for trajectories starting from an arbitrary point
+for such function systems is also proved.
+"
+On labeling Android malware signatures using minhashing and further classification with Structural Equation Models," Multi-scanner Antivirus systems provide insightful information on the nature
+of a suspect application; however, there is often a lack of consensus and
+consistency between different Anti-Virus engines. In this article, we analyze
+more than 250 thousand malware signatures generated by 61 different Anti-Virus
+engines after analyzing 82 thousand different Android malware applications. We
+identify 41 different malware classes grouped into three major categories,
+namely Adware, Harmful Threats and Unknown or Generic signatures. We further
+investigate the relationships between these 41 classes using community detection
+algorithms from graph theory to identify similarities between them; and we
+finally propose a Structural Equation Model to identify which Anti-Virus engines
+are more powerful at detecting each macro-category. As an application, we show
+how such models can help in identifying whether Unknown malware applications
+are more likely to be of Harmful or Adware type.
+"
+Proceedings 3rd International Workshop on Symbolic and Numerical Methods for Reachability Analysis," Hybrid systems are complex dynamical systems that combine discrete and
+continuous components. 
Reachability questions, regarding whether a system can +run into a certain subset of its state space, stand at the core of verification +and synthesis problems for hybrid systems. This volume contains papers +describing new developments in this area, which were presented at the 3rd +International Workshop on Symbolic and Numerical Methods for Reachability +Analysis. +" +Building Prior Knowledge: A Markov Based Pedestrian Prediction Model Using Urban Environmental Data," Autonomous Vehicles navigating in urban areas have a need to understand and +predict future pedestrian behavior for safer navigation. This high level of +situational awareness requires observing pedestrian behavior and extrapolating +their positions to know future positions. While some work has been done in this +field using Hidden Markov Models (HMMs), one of the few observed drawbacks of +the method is the need for informed priors for learning behavior. In this work, +an extension to the Growing Hidden Markov Model (GHMM) method is proposed to +solve some of these drawbacks. This is achieved by building on existing work +using potential cost maps and the principle of Natural Vision. As a +consequence, the proposed model is able to predict pedestrian positions more +precisely over a longer horizon compared to the state of the art. The method is +tested over ""legal"" and ""illegal"" behavior of pedestrians, having trained the +model with sparse observations and partial trajectories. The method, with no +training data, is compared against a trained state of the art model. It is +observed that the proposed method is robust even in new, previously unseen +areas. +" +Group Sparsity Residual Constraint for Image Denoising," Group-based sparse representation has shown great potential in image +denoising. However, most existing methods only consider the nonlocal +self-similarity (NSS) prior of noisy input image. 
That is, the similar patches
+are collected only from the degraded input, which makes the quality of image
+denoising largely depend on the input itself. As a result, such methods often
+suffer from a common drawback that the denoising performance may degrade
+quickly with increasing noise levels. In this paper, we propose a new prior
+model, called group sparsity residual constraint (GSRC). Unlike the
+conventional group-based sparse representation denoising methods, two kinds of
+priors, namely, the NSS priors of the noisy and pre-filtered images, are used in
+GSRC. In particular, we integrate these two NSS priors through the mechanism of
+sparsity residual, and thus, the task of image denoising is converted to the
+problem of reducing the group sparsity residual. To this end, we first obtain a
+good estimation of the group sparse coefficients of the original image by
+pre-filtering, and then the group sparse coefficients of the noisy image are
+used to approximate this estimation. To improve the accuracy of the nonlocal
+similar patch selection, an adaptive patch search scheme is designed.
+Furthermore, to fuse these two NSS priors better, an effective iterative
+shrinkage algorithm is developed to solve the proposed GSRC model. Experimental
+results demonstrate that the proposed GSRC modeling outperforms many
+state-of-the-art denoising methods in terms of the objective and the perceptual
+metrics.
+"
+Scaling Text with the Class Affinity Model," Probabilistic methods for classifying text form a rich tradition in machine
+learning and natural language processing. For many important problems, however,
+class prediction is uninteresting because the class is known, and instead the
+focus shifts to estimating latent quantities related to the text, such as
+affect or ideology. We focus on one such problem of interest, estimating the
+ideological positions of 55 Irish legislators in the 1991 Dáil confidence
+vote. 
To solve the Dáil scaling problem and others like it, we develop a text +modeling framework that allows actors to take latent positions on a ""gray"" +spectrum between ""black"" and ""white"" polar opposites. We are able to validate +results from this model by measuring the influences exhibited by individual +words, and we are able to quantify the uncertainty in the scaling estimates by +using a sentence-level block bootstrap. Applying our method to the Dáil +debate, we are able to scale the legislators between extreme pro-government and +pro-opposition in a way that reveals nuances in their speeches not captured by +their votes or party affiliations. +" +Resolving transition metal chemical space: feature selection for machine learning and structure-property relationships," Machine learning (ML) of quantum mechanical properties shows promise for +accelerating chemical discovery. For transition metal chemistry where accurate +calculations are computationally costly and available training data sets are +small, the molecular representation becomes a critical ingredient in ML model +predictive accuracy. We introduce a series of revised autocorrelation functions +(RACs) that encode relationships between the heuristic atomic properties (e.g., +size, connectivity, and electronegativity) on a molecular graph. We alter the +starting point, scope, and nature of the quantities evaluated in standard ACs +to make these RACs amenable to inorganic chemistry. On an organic molecule set, +we first demonstrate superior standard AC performance to other +presently-available topological descriptors for ML model training, with mean +unsigned errors (MUEs) for atomization energies on set-aside test molecules as +low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs +on set-aside test molecules in spin-state splitting in comparison to 15-20x +higher errors from feature sets that encode whole-molecule structural +information. 
Systematic feature selection methods including univariate +filtering, recursive feature elimination, and direct optimization (e.g., random +forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5x +smaller than RAC-155 produce sub- to 1-kcal/mol spin-splitting MUEs, with good +transferability to metal-ligand bond length prediction (0.004-5 {\AA} MUE) and +redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature +selection results across property sets reveals the relative importance of +local, electronic descriptors (e.g., electronegativity, atomic number) in +spin-splitting and distal, steric effects in redox potential and bond lengths. +" +"Bipartite Graphs, On-Shell Diagrams, and Bipartite Field Theories: a Computational Package in Mathematica"," We present bipartiteSUSY, a Mathematica package designed to perform +calculations for physical theories based on bipartite graphs. In particular, +the package can employ the recently developed arsenal of techniques surrounding +on-shell diagrams in N=4 SYM scattering amplitudes, including those for +non-planar diagrams, with particular attention to computational speed. It also +contains a host of tools for computations in N=1 Bipartite Field Theories, +which utilize the same bipartite graphs. Through the use of an interactive +graphical tool, it is possible to draw the desired diagrams on the screen and +compute commonly sought-after features. The package should be easily accessible +to users with little or no previous experience in dealing with bipartite graphs +and their combinatorial descriptions. 
+"
+Elucidating the role of Sn-substitution and Pb-$\Box$ in regulating stability and carrier concentration in CH$_3$NH$_3$Pb$_{1-X-Y}$Sn$_X$$\Box_Y$I$_3$," We address the role of Sn-substitution and Pb-vacancy (Pb-$\Box$) in
+regulating stability and carrier concentration of
+CH$_3$NH$_3$Pb$_{1-X-Y}$Sn$_X$$\Box_Y$I$_3$ perovskite using density functional
+theory, where the performance of the exchange-correlation functional is
+carefully analyzed, and validated w.r.t. available experimental results. We
+find the most stable configuration does not prefer any Pb at 50\% concentration
+of Sn. However, the Pb-$\Box$s become unfavourable above 250K due to the
+reduced linearity of Sn-I bonds. For an n-type host, Sn substitution is
+preferable to Pb-$\Box$ formation, while for a p-type host the trend is exactly
+the opposite. The charge states of both Sn and Pb-$\Box$ are found to be dependent
+on the Sn concentration, which in turn alters the perovskite from n-type to
+p-type with increasing $X$ ($>$0.5).
+"
+Semi-Analytic Method for SINS Attitude and Parameters Online Estimation," In this note, the estimation of attitude and inertial sensor drift biases for
+a strapdown inertial navigation system is investigated. A semi-analytic method is
+proposed, which contains two interlaced solution procedures. Specifically, the
+attitude encoding the body frame changes and the gyroscope drift biases are
+estimated through attitude estimation, while the attitude encoding the constant
+value at the very start and the accelerometer drift biases are determined through
+online optimization.
+"
+"Comment on ""Nucleon spin-averaged forward virtual Compton tensor at large Q^2"""," In recent work, Hill and Paz apply the operator product expansion to forward
+doubly virtual Compton scattering. 
The resulting large-$Q^2$ form of the
+amplitude $W_1(0,Q^2)$ is compatible with the one we obtain by extrapolation of
+low-$Q^2$ results from a chiral effective field theory, providing support for
+our approach. That paper also presents a result for the two-photon contribution
+to the Lamb shift in muonic hydrogen that has a much larger uncertainty than in
+previous work. We show that this is an overestimate arising from the inclusion of
+the proton pole term in the subtracted dispersion relation for $W_1$.
+"
+Graph Theoretic Properties of the Darkweb," We collect and analyze the darkweb (a.k.a. the ""onionweb"") hyperlink graph.
+We find properties highly dissimilar to the well-studied world wide web
+hyperlink graph; for example, our analysis finds that >87% of darkweb sites
+never link to another site. We compare our results to prior work on the
+world-wide-web and speculate about reasons for their differences. We conclude
+that in the term ""darkweb"", the word ""web"" is a connectivity misnomer. Instead,
+it is more accurate to view the darkweb as a set of largely isolated dark
+silos.
+"
+A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks," We study the optimal convergence rates for distributed convex optimization
+problems over networks, where the objective is to minimize the sum
+$\sum_{i=1}^{m}f_i(z)$ of local functions of the nodes in the network. We
+provide optimal complexity bounds for four different cases, namely: the case
+when each function $f_i$ is strongly convex and smooth, the cases when it is
+either strongly convex or smooth and the case when it is convex but neither
+strongly convex nor smooth. Our approach is based on the dual of an
+appropriately formulated primal problem, which includes the underlying static
+graph that models the communication restrictions. 
Our results show distributed +algorithms that achieve the same optimal rates as their centralized +counterparts (up to constant and logarithmic factors), with an additional cost +related to the spectral gap of the interaction matrix that captures the local +communications of the nodes in the network. +" +Maximizing the Conditional Expected Reward for Reaching the Goal," The paper addresses the problem of computing maximal conditional expected +accumulated rewards until reaching a target state (briefly called maximal +conditional expectations) in finite-state Markov decision processes where the +condition is given as a reachability constraint. Conditional expectations of +this type can, e.g., stand for the maximal expected termination time of +probabilistic programs with non-determinism, under the condition that the +program eventually terminates, or for the worst-case expected penalty to be +paid, assuming that at least three deadlines are missed. The main results of +the paper are (i) a polynomial-time algorithm to check the finiteness of +maximal conditional expectations, (ii) PSPACE-completeness for the threshold +problem in acyclic Markov decision processes where the task is to check whether +the maximal conditional expectation exceeds a given threshold, (iii) a +pseudo-polynomial-time algorithm for the threshold problem in the general +(cyclic) case, and (iv) an exponential-time algorithm for computing the maximal +conditional expectation and an optimal scheduler. +" +Gender differences in lying in sender-receiver games: A meta-analysis," Whether there are gender differences in lying has been largely debated in the +past decade. Previous studies found mixed results. To shed light on this topic, +here I report a meta-analysis of 8,728 distinct observations, collected in 65 +Sender-Receiver game treatments, by 14 research groups. 
Following previous work
+and theoretical considerations, I distinguish three types of lies: black lies,
+that benefit the liar at a cost for another person; altruistic white lies,
+that benefit another person at a cost for the liar; Pareto white lies, that
+benefit both the liar and another person. The results show that gender
+differences in lying significantly depend on the consequences of lying.
+Specifically: (i)
+males are significantly more likely than females to tell black lies (N=4,161);
+(ii) males are significantly more likely than females to tell altruistic white
+lies (N=2,940); (iii) results are inconclusive in the case of Pareto white lies
+(N=1,627).
+"
+Möbius disjointness along ergodic sequences for uniquely ergodic actions," We show that there are an irrational rotation $Tx=x+\alpha$ on the circle
+$\mathbb{T}$ and a continuous $\varphi\colon\mathbb{T}\to\mathbb{R}$ such that
+for each (continuous) uniquely ergodic flow
+$\mathcal{S}=(S_t)_{t\in\mathbb{R}}$ acting on a compact metric space $Y$, the
+automorphism $T_{\varphi,\mathcal{S}}$ acting on $(X\times Y,\mu\otimes\nu)$ by
+the formula $T_{\varphi,\mathcal{S}}(x,y)=(Tx,S_{\varphi(x)}(y))$, where $\mu$
+stands for Lebesgue measure on $\mathbb{T}$ and $\nu$ denotes the unique
+$\mathcal{S}$-invariant measure, has the property of asymptotically orthogonal
+powers. This gives a class of relatively weakly mixing extensions of irrational
+rotations for which Sarnak's conjecture on Möbius disjointness holds for all
+uniquely ergodic models of $T_{\varphi,\mathcal{S}}$. Moreover, we obtain a
+class of ""random"" ergodic sequences $(c_n)\subset\mathbb{Z}$ such that if
+$\boldsymbol{\mu}$ denotes the Möbius function, then $$
+\lim_{N\to\infty}\frac1N\sum_{n\leq N}g(S_{c_n}y)\boldsymbol{\mu}(n)=0 $$ for
+all (continuous) uniquely ergodic flows $\mathcal{S}$, all $g\in C(Y)$ and
+$y\in Y$. 
+"
+Towards Self-organized Large-Scale Shape Formation: A Cognitive Agent-Based Computing Approach," Swarm robotic systems are currently being used to address many real-world
+problems. One interesting application of swarm robotics is the self-organized
+formation of structures and shapes. Some of the key challenges in swarm
+robotic systems include the swarm size constraint, random motion, coordination
+among robots, localization, and adaptability in a decentralized environment.
+Rubenstein et al. presented a system (""Programmable self-assembly in a
+thousand-robot swarm"", Science, 2014) for a thousand-robot swarm able to form
+only solid shapes, with the robots in aggregated form, by applying a collective
+behavior algorithm. Although agent-based approaches to self-organized formation
+have been presented in various studies, these studies lack an
+agent-based modeling (ABM) approach and face constraints in terms of
+structure complexity and heterogeneity in large swarms with dynamic
+localization. The cognitive agent-based computing (CABC) approach is capable of
+modeling such self-organization-based multi-agent systems (MAS). In this
+paper, we develop a simulation model using ABM under the CABC approach for
+self-organized shape formation in swarm robots. We propose a shape formation
+algorithm for validating our model and perform simulation-based experiments for
+six different shapes including hole-based shapes. We also demonstrate the
+formal specification for our model. The simulation results show the robustness
+of the proposed approach, with the robots exhibiting emergent behavior for
+self-organized shape formation. The performance of the proposed approach is
+evaluated by the robots' convergence rate.
+"
+A Learning Framework for Robust Bin Picking by Customized Grippers," Customized grippers have specifically designed fingers to increase the
+contact area with the workpieces and improve the grasp robustness. 
However,
+grasp planning for customized grippers is challenging due to object
+variations, surface contacts and the structural constraints of the grippers. In
+this paper, we propose a learning framework to plan robust grasps for
+customized grippers in real time. The learning framework contains a low-level
+optimization-based planner to search for optimal grasps locally under object
+shape variations, and a high-level learning-based explorer to learn the grasp
+exploration based on previous grasp experience. The optimization-based planner
+uses iterative surface fitting (ISF) to simultaneously search for the optimal
+gripper transformation and finger displacement by minimizing the surface
+fitting error. The high-level learning-based explorer trains a region-based
+convolutional neural network (R-CNN) to propose good optimization regions,
+which avoids ISF getting stuck in bad local optima and improves the collision
+avoidance performance. The proposed learning framework with RCNN-ISF is able to
+consider the structural constraints of the gripper, learn the grasp exploration
+strategy from previous experience, and plan optimal grasps in cluttered
+environments in real time. The effectiveness of the algorithm is verified by
+experiments.
+"
+Optimal encoding in stochastic latent-variable Models," We examine the problem of optimal sparse encoding of sensory stimuli by
+latent variables in stochastic models. Analyzing restricted Boltzmann machines
+with a communications theory approach, we search for the minimal model size
+that correctly conveys the correlations in stimulus patterns in an
+information-theoretic sense. We show that the Fisher information matrix (FIM)
+reveals the optimal model size. In larger models the FIM reveals that
+irrelevant parameters are associated with individual latent variables,
+displaying a surprising amount of order. For well-fit models, we observe the
+emergence of statistical criticality as diverging generalized susceptibility of
+the model. 
In this case, an encoding strategy is adopted where highly
+informative, but rare stimuli selectively suppress variability in the encoding
+units. The information content of the encoded stimuli acts as an unobserved
+variable leading to criticality. Together, these results can explain the
+stimulus-dependent variability suppression observed in sensory systems, and
+suggest a simple, correlation-based measure to reduce the size of artificial
+neural networks.
+"
+Completely Uncoupled Algorithms for Network Utility Maximization," In this paper, we present two completely uncoupled algorithms for utility
+maximization. In the first part, we present an algorithm that can be applied
+for general non-concave utilities. We show that this algorithm induces a
+perturbed (by $\epsilon$) Markov chain, whose stochastically stable states are
+the set of actions that maximize the sum utility. In the second part, we
+present an approximate sub-gradient algorithm for concave utilities which is
+considerably faster and requires less memory. We study the performance of the
+sub-gradient algorithm for decreasing and fixed step sizes. We show that, for
+decreasing step sizes, the Cesaro averages of the utilities converge to a
+neighbourhood of the optimal sum utility. For constant step size, we show that
+the time average utility converges to a neighbourhood of the optimal sum
+utility. Our main contribution is the expansion of the achievable rate region,
+which has not been considered in the prior literature on completely uncoupled
+algorithms for utility maximization. This expansion aids in allocating a fair
+share of resources to the nodes which is important in applications like channel
+selection, user association and power control. 
+"
+"Sobolev-Slobodeckij Spaces on Compact Manifolds, Revisited"," In this article we present a coherent rigorous overview of the main
+properties of Sobolev-Slobodeckij spaces of sections of vector bundles on
+compact manifolds; results of this type are scattered through the literature
+and can be difficult to find. A special emphasis has been put on spaces with
+noninteger smoothness order, and special attention has been paid to the
+peculiar fact that for a general nonsmooth domain U in Rn, 07%). Beyond boosting
+efficiency, triplet loss brings retrieval and interpretability to
+classification models.
+"
+Suboptimum Low Complexity Joint Multi-target Detection and Localization for Noncoherent MIMO Radar with Widely Separated Antennas," In this paper, the problems of simultaneously detecting and localizing
+multiple targets are considered for noncoherent multiple-input multiple-output
+(MIMO) radar with widely separated antennas. By assuming prior knowledge of
+the number of targets, an optimal solution to this problem is presented first. It is
+essentially a maximum-likelihood (ML) estimator searching parameters of
+interest in a high dimensional space. However, the complexity of this method
+increases exponentially with the number G of targets. Besides, without the prior
+information of the number of targets, a multi-hypothesis testing strategy to
+determine the number of targets is required, which further complicates this
+method. Therefore, we split the joint maximization into G disjoint optimization
+problems by clearing the interference from previously declared targets. In this
+way, we derive two fast and robust suboptimal solutions which allow trading
+performance for a much lower implementation complexity which is almost
+independent of the number of targets. In addition, the multi-hypothesis testing
+is no longer required when the number of targets is unknown. 
Simulation results show
+the proposed algorithms can correctly detect and accurately localize multiple
+targets even when targets share common range bins in some paths.
+"
+Graph Convolutional Encoders for Syntax-aware Neural Machine Translation," We present a simple and effective approach to incorporating syntactic
+structure into neural attention-based encoder-decoder models for machine
+translation. We rely on graph-convolutional networks (GCNs), a recent class of
+neural networks developed for modeling graph-structured data. Our GCNs use
+predicted syntactic dependency trees of source sentences to produce
+representations of words (i.e. hidden states of the encoder) that are sensitive
+to their syntactic neighborhoods. GCNs take word representations as input and
+produce word representations as output, so they can easily be incorporated as
+layers into standard encoders (e.g., on top of bidirectional RNNs or
+convolutional neural networks). We evaluate their effectiveness with
+English-German and English-Czech translation experiments for different types of
+encoders and observe substantial improvements over their syntax-agnostic
+versions in all the considered setups.
+"
+Realization of Biquadratic Impedance as Five-Element Bridge Networks," This report includes the original manuscript and the supplementary material
+of ""Realization of Biquadratic Impedance as Five-Element Bridge Networks"".
+"
+Towards understanding feedback from supermassive black holes using convolutional neural networks," Supermassive black holes at the centers of clusters of galaxies strongly interact
+with their host environment via AGN feedback. Key tracers of such activity are
+X-ray cavities -- regions of lower X-ray brightness within the cluster. We
+present an automatic method for detecting and characterizing X-ray cavities in
+noisy, low-resolution X-ray images. 
We simulate clusters of galaxies, insert +cavities into them, and produce realistic low-quality images comparable to +observations at high redshifts. We then train a custom-built convolutional +neural network to generate pixel-wise analysis of presence of cavities in a +cluster. A ResNet architecture is then used to decode radii of cavities from +the pixel-wise predictions. We surpass the accuracy, stability, and speed of +current visual inspection based methods on simulated data. +" +Integration of 5G Technologies in LEO Mega-Constellations," 3GPP is finalising the first release of the 5G New Radio physical layer. To +cope with the demanding 5G requirements on global connectivity and large +throughput, Satellite Communications might be a valuable resource to extend and +complement terrestrial networks. In this context, we introduce an integrated +architecture for 5G-based LEO mega-constellations and assess the impact of +large Doppler shifts and delays on both the 5G waveform and the PHY/MAC layer +procedures. +" +First search for gravitational waves from known pulsars with Advanced LIGO," We present the result of searches for gravitational waves from 200 pulsars +using data from the first observing run of the Advanced LIGO detectors. We find +no significant evidence for a gravitational-wave signal from any of these +pulsars, but we are able to set the most constraining upper limits yet on their +gravitational-wave amplitudes and ellipticities. For eight of these pulsars, +our upper limits give bounds that are improvements over the indirect spin-down +limit values. For another 32, we are within a factor of 10 of the spin-down +limit, and it is likely that some of these will be reachable in future runs of +the advanced detector. Taken as a whole, these new results improve on previous +limits by more than a factor of two. 
+" +Observation of Various and Spontaneous Magnetic Skyrmionic Bubbles at Room-Temperature in a Frustrated Kagome Magnet with Uniaxial Magnetic Anisotropy," Various and spontaneous magnetic skyrmionic bubbles are experimentally +observed for the first time, at room temperature in a frustrated kagome magnet +Fe3Sn2 with unixial magnetic anisotropy. The magnetization dynamics were +investigated using in-situ Lorentz transmission electron microscopy, revealing +that the transformation between different magnetic bubbles and domains are via +the motion of Bloch lines driven by applied external magnetic field. The +results demonstrate that Fe3Sn2 facilitates a unique magnetic control of +topological spin textures at room temperature, making it a promising candidate +for further skyrmion-based spintronic devices. +" +Quantum Correlations in Nonlocal BosonSampling," Determination of the quantum nature of correlations between two spatially +separated systems plays a crucial role in quantum information science. Of +particular interest is the questions of if and how these correlations enable +quantum information protocols to be more powerful. Here, we report on a +distributed quantum computation protocol in which the input and output quantum +states are considered to be classically correlated in quantum informatics. +Nevertheless, we show that the correlations between the outcomes of the +measurements on the output state cannot be efficiently simulated using +classical algorithms. Crucially, at the same time, local measurement outcomes +can be efficiently simulated on classical computers. We show that the only +known classicality criterion violated by the input and output states in our +protocol is the one used in quantum optics, namely, phase-space +nonclassicality. As a result, we argue that the global phase-space +nonclassicality inherent within the output state of our protocol represents +true quantum correlations. 
+" +A precise deuterium abundance: Re-measurement of the z=3.572 absorption system towards the quasar PKS1937-101," The primordial deuterium abundance probes fundamental physics during the Big +Bang Nucleosynthesis and can be used to infer cosmological parameters. +Observationally, the abundance can be measured using absorbing clouds along the +lines of sight to distant quasars. Observations of the quasar PKS1937--101 +contain two absorbers for which the deuterium abundance has previously been +determined. Here we focus on the higher redshift one at $z_{abs} = 3.572$. We +present new observations with significantly increased signal-to-noise ratio +which enable a far more precise and robust measurement of the deuterium to +hydrogen column density ratio, resulting in D/H = $2.62\pm0.05\times10^{-5}$. +This particular measurement is of interest because it is among the most precise +assessments to date and it has been derived from the second lowest +column-density absorber [N(HI) $=17.9\mathrm{cm}^{-2}$] that has so-far been +utilised for deuterium abundance measurements. The majority of existing +high-precision measurements were obtained from considerably higher column +density systems [i.e. N(HI) $>19.4\mathrm{cm}^{-2}$]. This bodes well for +future observations as low column density systems are more common. +" +Effect of Marangoni stress on the bulk rheology of a dilute emulsion of surfactant-laden deformable droplets in linear flows," In the present study we analytically investigate the deformation and bulk +rheology of a dilute emulsion of surfactant-laden droplets suspended in a +linear flow. We use an asymptotic approach to predict the effect of surfactant +distribution on the deformation of a single droplet as well as the effective +shear and extensional viscosity for the dilute emulsion. 
The non-uniform
+distribution of surfactants due to the bulk flow results in the generation of a
+Marangoni stress which affects both the deformation and the bulk rheology of
+the suspension. The present analysis is done for the limiting case
+when the surfactant transport is dominated by the surface diffusion relative to
+surface convection. As an example, we have used two commonly encountered bulk
+flows, namely, uniaxial extensional flow and simple shear flow. With the
+assumption of negligible inertial forces present in either of the phases, we
+are able to show that both the surfactant concentration on the droplet surface
+and the ratio of viscosity of the droplet phase with respect to the suspending
+fluid have a significant effect on the droplet deformation as well as the bulk
+rheology. It is seen that an increase in the non-uniformity in surfactant
+distribution on the droplet surface results in a higher droplet deformation and
+a higher effective viscosity for either of the linear flows considered. For the
+case of simple shear flow, surfactant distribution is found to have no effect
+on the inclination angle; however, a higher viscosity ratio predicts the
+droplet to be more aligned towards the direction of flow.
+"
+Families of spherical surfaces and harmonic maps," We study singularities of constant positive Gaussian curvature surfaces and
+determine the way they bifurcate in generic 1-parameter families of such
+surfaces. We construct the bifurcations explicitly using loop group methods.
+Constant Gaussian curvature surfaces correspond to harmonic maps, and we
+examine the relationship between the two types of maps and their singularities.
+Finally, we determine which finitely A-determined map-germs from the plane to
+the plane can be represented by harmonic maps. 
+" +Analyzing Privacy Breaches in the Aircraft Communications Addressing and Reporting System (ACARS)," The manner in which Aircraft Communications, Addressing and Reporting System +(ACARS) is being used has significantly changed over time. Whilst originally +used by commercial airliners to track their flights and provide automated +timekeeping on crew, today it serves as a multi-purpose air-ground data link +for many aviation stakeholders including private jet owners, state actors and +military. Since ACARS messages are still mostly sent in the clear over a +wireless channel, any sensitive information sent with ACARS can potentially +lead to a privacy breach for users. Naturally, different stakeholders consider +different types of data sensitive. In this paper we propose a privacy framework +matching aviation stakeholders to a range of sensitive information types and +assess the impact for each. Based on more than one million ACARS messages, +collected over several months, we then demonstrate that current ACARS usage +systematically breaches privacy for all stakeholder groups. We further support +our findings with a number of cases of significant privacy issues for each +group and analyze the impact of such leaks. While it is well-known that ACARS +messages are susceptible to eavesdropping attacks, this work is the first to +quantify the extent and impact of privacy leakage in the real world for the +relevant aviation stakeholders. +" +Self-supporting structure design in additive manufacturing through explicit topology optimization," One of the challenging issues in additive manufacturing (AM) oriented +topology optimization is how to design structures that are self-supportive in a +manufacture process without introducing additional supporting materials. 
In the +present contribution, it is intended to resolve this problem under an explicit +topology optimization framework where optimal structural topology can be found +by optimizing a set of explicit geometry parameters. Two solution approaches +established based on the Moving Morphable Components (MMC) and Moving Morphable +Voids (MMV) frameworks, respectively, are proposed and some theoretical issues +associated with AM oriented topology optimization are also analyzed. Numerical +examples provided demonstrate the effectiveness of the proposed methods. +" +Calibration and performance studies of the balloon-borne hard X-ray polarimeter PoGO+," Polarimetric observations of celestial sources in the hard X-ray band stand +to provide new information on emission mechanisms and source geometries. PoGO+ +is a Compton scattering polarimeter (20-150 keV) optimised for the observation +of the Crab (pulsar and wind nebula) and Cygnus X-1 (black hole binary), from a +stratospheric balloon-borne platform launched from the Esrange Space Centre in +summer 2016. Prior to flight, the response of the polarimeter has been studied +with polarised and unpolarised X-rays allowing a Geant4-based simulation model +to be validated. The expected modulation factor for Crab observations is found +to be $M_{\mathrm{Crab}}=(41.75\pm0.85)\%$, resulting in an expected Minimum +Detectable Polarisation (MDP) of $7.3\%$ for a 7 day flight. This will allow a +measurement of the Crab polarisation parameters with at least $5\sigma$ +statistical significance assuming a polarisation fraction $\sim20\%$ $-$ a +significant improvement over the PoGOLite Pathfinder mission which flew in 2013 +and from which the PoGO+ design is developed. +" +The size of the last merger and time reversal in $Λ$-coalescents," We consider the number of blocks involved in the last merger of a +$\Lambda$-coalescent started with $n$ blocks. 
We give conditions under which,
+as $n \to \infty$, the sequence of these random variables a) is tight, b)
+converges in distribution to a finite random variable or c) converges to
+infinity in probability. Our conditions are optimal for $\Lambda$-coalescents
+that have a dust component. For general $\Lambda$, we relate the three cases to
+the existence, uniqueness and non-existence of quasi-invariant measures for the
+dynamics of the block-counting process, and in case b) investigate the
+time-reversal of the block-counting process back from the time of the last
+merger.
+"
+Security Consideration For Deep Learning-Based Image Forensics," Recently, the image forensics community has paid attention to research on
+the design of effective algorithms based on deep learning technology, and it
+has been shown that combining the domain knowledge of image forensics with deep
+learning achieves more robust and better performance than traditional schemes.
+Instead of improving these methods, in this paper we consider the safety of
+deep learning-based methods in the field of image forensics. To the best of our
+knowledge, this is the first work focusing on this topic. Specifically, we
+experimentally find that deep learning-based methods fail when slight noise is
+added to the images (adversarial images). Furthermore, two kinds of strategies
+are proposed to enforce the security of deep learning-based methods. First, an
+extra penalty term, namely the 2-norm of the gradient of the loss with respect
+to the input images, is added to the loss function; a novel training method is
+then adopted to train the model by fusing the normal
+and adversarial images. 
Experimental results show that the proposed algorithm
+can achieve good performance even in the case of adversarial images and provide
+a safety consideration for deep learning-based image forensics.
+"
+Dependent Microstructure Noise and Integrated Volatility Estimation from High-Frequency Data," In this paper, we develop econometric tools to analyze the integrated
+volatility of the efficient price and the dynamic properties of microstructure
+noise in high-frequency data under general dependent noise. We first develop
+consistent estimators of the variance and autocovariances of noise using a
+variant of realized volatility. Next, we employ these estimators to adapt the
+pre-averaging method and derive a consistent estimator of the integrated
+volatility, which converges stably to a mixed Gaussian distribution at the
+optimal rate $n^{1/4}$. To refine the finite sample performance, we propose a
+two-step approach that corrects the finite sample bias, which turns out to be
+crucial in applications. Our extensive simulation studies demonstrate the
+excellent performance of our two-step estimators. In an empirical study, we
+characterize the dependence structures of microstructure noise in several
+popular sampling schemes and provide intuitive economic interpretations; we
+also illustrate the importance of accounting for both the serial dependence in
+noise and the finite sample bias when estimating integrated volatility.
+"
+Dynamically evolved community size and stability of random Lotka-Volterra ecosystems," We use dynamical generating functionals to study the stability and size of
+communities evolving in Lotka-Volterra systems with random interaction
+coefficients. The size of the ecosystem is not set from the beginning.
+Instead, we start from a set of possible species, which may undergo extinction. 
+How many species survive depends on the properties of the interaction matrix;
+the size of the resulting food web at stationarity is a property of the system
+itself in our model, and not a control parameter as in most studies based on
+random matrix theory. We find that prey-predator relations enhance stability,
+and that variability of species interactions promotes instability. Complexity
+of inter-species couplings leads to reduced sizes of ecological communities.
+Dynamically evolved community size and stability are hence positively
+correlated.
+"
+Rate Constants for Fine-Structure Excitations in O-H Collisions with Error Bars Obtained by Machine Learning," We present an approach using a combination of coupled channel scattering
+calculations with a machine-learning technique based on Gaussian Process
+regression to determine the sensitivity of the rate constants for non-adiabatic
+transitions in inelastic atomic collisions to variations of the underlying
+adiabatic interaction potentials. Using this approach, we improve the previous
+computations of the rate constants for the fine-structure transitions in
+collisions of O(3Pj) with atomic H. We compute the error bars of the rate
+constants corresponding to 20% variations of the ab initio potentials and show
+that this method can be used to determine which of the individual adiabatic
+potentials are more or less important for the outcome of different
+fine-structure changing collisions.
+"
+DKN: Deep Knowledge-Aware Network for News Recommendation," Online news recommender systems aim to address the information explosion of
+news and make personalized recommendations for users. In general, news language
+is highly condensed, full of knowledge entities and common sense. However,
+existing methods are unaware of such external knowledge and cannot fully
+discover latent knowledge-level connections among news. 
The recommended results +for a user are consequently limited to simple patterns and cannot be extended +reasonably. Moreover, news recommendation also faces the challenges of high +time-sensitivity of news and dynamic diversity of users' interests. To solve +the above problems, in this paper, we propose a deep knowledge-aware network +(DKN) that incorporates knowledge graph representation into news +recommendation. DKN is a content-based deep recommendation framework for +click-through rate prediction. The key component of DKN is a multi-channel and +word-entity-aligned knowledge-aware convolutional neural network (KCNN) that +fuses semantic-level and knowledge-level representations of news. KCNN treats +words and entities as multiple channels, and explicitly keeps their alignment +relationship during convolution. In addition, to address users' diverse +interests, we also design an attention module in DKN to dynamically aggregate a +user's history with respect to current candidate news. Through extensive +experiments on a real online news platform, we demonstrate that DKN achieves +substantial gains over state-of-the-art deep recommendation models. We also +validate the efficacy of the usage of knowledge in DKN. +" +Counting Roots of Polynomials over $\mathbb{Z}/p^2\mathbb{Z}$," Until recently, the only known method of finding the roots of polynomials +over prime power rings, other than fields, was brute force. One reason for this +is the lack of a division algorithm, obstructing the use of greatest common +divisors. Fix a prime $p \in \mathbb{Z}$ and $f \in ( \mathbb{Z}/p^n \mathbb{Z} +) [x]$ any nonzero polynomial of degree $d$ whose coefficients are not all +divisible by $p$. For the case $n=2$, we prove a new efficient algorithm to +count the roots of $f$ in $\mathbb{Z}/p^2\mathbb{Z}$ within time polynomial in +$(d+\operatorname{size}(f)+\log{p})$, and record a concise formula for the +number of roots, formulated by Cheng, Gao, Rojas, and Wan. 
+" +High-harmonic generation in amorphous solids," High-order harmonic generation (HHG) in isolated atoms and molecules has been +widely utilized in extreme ultraviolet (XUV) photonics and attosecond pulse +metrology. Recently, HHG has also been observed in solids, which could lead to +important applications such as all-optical methods to image valance charge +density and reconstruction of electronic band structures, as well as compact +XUV light sources. Previous HHG studies are confined on crystalline solids; +therefore decoupling the respective roles of long-range periodicity and high +density has been challenging. Here, we report the first observation of HHG from +amorphous fused silica. We decouple the role of long-range periodicity by +comparing with crystal quartz, which contains same atomic constituents but +exhibits long-range periodicity. Our results advance current understanding of +strong-field processes leading to high harmonic generation in solids with +implications in robust and compact coherent XUV light sources. +" +True Asymptotic Natural Gradient Optimization," We introduce a simple algorithm, True Asymptotic Natural Gradient +Optimization (TANGO), that converges to a true natural gradient descent in the +limit of small learning rates, without explicit Fisher matrix estimation. +For quadratic models the algorithm is also an instance of averaged stochastic +gradient, where the parameter is a moving average of a ""fast"", constant-rate +gradient descent. TANGO appears as a particular de-linearization of averaged +SGD, and is sometimes quite different on non-quadratic models. This further +connects averaged SGD and natural gradient, both of which are arguably optimal +asymptotically. +In large dimension, small learning rates will be required to approximate the +natural gradient well. Still, this shows it is possible to get arbitrarily +close to exact natural gradient descent with a lightweight algorithm. 
+" +Automated bird sound recognition in realistic settings," We evaluated the effectiveness of an automated bird sound identification +system in a situation that emulates a realistic, typical application. We +trained classification algorithms on a crowd-sourced collection of bird audio +recording data and restricted our training methods to be completely free of +manual intervention. The approach is hence directly applicable to the analysis +of multiple species collections, with labelling provided by crowd-sourced +collection. We evaluated the performance of the bird sound recognition system +on a realistic number of candidate classes, corresponding to real conditions. +We investigated the use of two canonical classification methods, chosen due to +their widespread use and ease of interpretation, namely a k Nearest Neighbour +(kNN) classifier with histogram-based features and a Support Vector Machine +(SVM) with time-summarisation features. We further investigated the use of a +certainty measure, derived from the output probabilities of the classifiers, to +enhance the interpretability and reliability of the class decisions. Our +results demonstrate that both identification methods achieved similar +performance, but we argue that the use of the kNN classifier offers somewhat +more flexibility. Furthermore, we show that employing an outcome certainty +measure provides a valuable and consistent indicator of the reliability of +classification results. Our use of generic training data and our investigation +of probabilistic classification methodologies that can flexibly address the +variable number of candidate species/classes that are expected to be +encountered in the field, directly contribute to the development of a practical +bird sound identification system with potentially global application. Further, +we show that certainty measures associated with identification outcomes can +significantly contribute to the practical usability of the overall system. 
+" +Light-enhanced electron-phonon coupling from nonlinear electron-phonon coupling," We investigate an exact nonequilibrium solution of a two-site electron-phonon +model, where an infrared-active phonon that is nonlinearly coupled to the +electrons is driven by a laser field. The time-resolved electronic spectrum +shows coherence-incoherence spectral weight transfer, a clear signature of +light-enhanced electron-phonon coupling. The present study is motivated by +recent evidence for enhanced electron-phonon coupling in pump-probe TeraHertz +and angle-resolved photoemission spectroscopy in bilayer graphene when driven +near resonance with an infrared-active phonon mode [E.~Pomarico et al., +Phys.~Rev.~B 95, 024304 (2017)], and by a theoretical study suggesting that +transient electronic attraction arises from nonlinear electron-phonon coupling +[D.~M.~Kennes et al., Nature Physics (2017), 10.1038/nphys4024]. We show that a +linear scaling of light-enhanced electron-phonon coupling with the pump field +intensity emerges, in accordance with a time-nonlocal self-energy based on a +mean-field decoupling using quasi-classical phonon coherent states. Finally we +demonstrate that this leads to enhanced double occupancies in accordance with +an effective electron-electron attraction. Our results suggest that materials +with strong phonon nonlinearities provide an ideal playground to achieve +light-enhanced electron-phonon coupling and possibly light-induced +superconductivity. +" +New Results On Routing Via Matchings On Graphs," In this paper we present some new complexity results on the routing time of a +graph under the \textit{routing via matching} model. This is a parallel routing +model which was introduced by Alon et al\cite{alon1994routing}. The model can +be viewed as a communication scheme on a distributed network. The nodes in the +network can communicate via matchings (a step), where a node exchanges data +(pebbles) with its matched partner. 
Let $G$ be a connected graph with vertices
+labeled $\{1,...,n\}$, and let the destination vertices of the pebbles be
+given by a permutation $\pi$. The problem is to find a minimum step routing
+scheme for the input permutation $\pi$. This is denoted as the routing time
+$rt(G,\pi)$ of $G$ given $\pi$. In this paper we characterize the complexity of
+some known problems under the routing via matching model and discuss their
+relationship to graph connectivity and clique number. We also introduce some
+new problems in this domain, which may be of independent interest.
+"
+The solution of many electron problems arising when radiation pulses interact with closed or open shell multielectron atomic and molecular states," The paper briefly reviews elements from our work on the solution of various
+field induced, time independent or time dependent many electron problems, which
+has been developed and carried out within state and property specific
+frameworks. The discussion focuses on only a few items, and is presented as a
+commentary with explanations. The formal details and the numerical results can
+be found in the cited publications.
+"
+Size Matters: Cardinality-Constrained Clustering and Outlier Detection via Conic Optimization," Plain vanilla K-means clustering has proven to be successful in practice, yet
+it suffers from outlier sensitivity and may produce highly unbalanced clusters.
+To mitigate both shortcomings, we formulate a joint outlier detection and
+clustering problem, which assigns a prescribed number of datapoints to an
+auxiliary outlier cluster and performs cardinality-constrained K-means
+clustering on the residual dataset, treating the cluster cardinalities as a
+given input. We cast this problem as a mixed-integer linear program (MILP) that
+admits tractable semidefinite and linear programming relaxations. We propose
+deterministic rounding schemes that transform the relaxed solutions to feasible
+solutions for the MILP. 
We also prove that these solutions are optimal in the +MILP if a cluster separation condition holds. +" +Eigenvector Under Random Perturbation: A Nonasymptotic Rayleigh-Schrödinger Theory," Rayleigh-Schrödinger perturbation theory is a well-known theory in +quantum mechanics and it offers useful characterization of eigenvectors of a +perturbed matrix. Suppose $A$ and perturbation $E$ are both Hermitian matrices, +$A^t = A + tE$, $\{\lambda_j\}_{j=1}^n$ are eigenvalues of $A$ in descending +order, and $u_1, u^t_1$ are leading eigenvectors of $A$ and $A^t$. +Rayleigh-Schrödinger theory shows asymptotically, $\langle u^t_1, u_j +\rangle \propto t / (\lambda_1 - \lambda_j)$ where $ t = o(1)$. However, the +asymptotic theory does not apply to larger $t$; in particular, it fails when $ +t \| E \|_2 > \lambda_1 - \lambda_2$. In this paper, we present a nonasymptotic +theory with $E$ being a random matrix. We prove that, when $t = 1$ and $E$ has +independent and centered subgaussian entries above its diagonal, with high +probability, \begin{equation*} | \langle u^1_1, u_j \rangle | = O(\sqrt{\log n} +/ (\lambda_1 - \lambda_j)), \end{equation*} for all $j>1$ simultaneously, under +a condition on eigenvalues of $A$ that involves all gaps $\lambda_1 - +\lambda_j$. This bound is valid, even in cases where $\| E \|_2 \gg \lambda_1 - +\lambda_2$. The result is optimal, except for a log term. It also leads to an +improvement of Davis-Kahan theorem. +" +Effective Resistance Preserving Directed Graph Symmetrization," This work presents a new method for symmetrization of directed graphs that +constructs an undirected graph with equivalent pairwise effective resistances +as a given directed graph. Consequently a graph metric, square root of +effective resistance, is preserved between the directed graph and its +symmetrized version. 
It is shown that the preservation of this metric allows +for interpretation of algebraic and spectral properties of the symmetrized +graph in the context of the directed graph, due to the relationship between +effective resistance and the Laplacian spectrum. Additionally, Lyapunov theory +is used to demonstrate that the Laplacian matrix of a directed graph can be +decomposed into the product of a projection matrix, a skew symmetric matrix, +and the Laplacian matrix of the symmetrized graph. The application of effective +resistance preserving graph symmetrization is discussed in the context of +spectral graph partitioning and Kron reduction of directed graphs. +" +Dual-label Deep LSTM Dereverberation For Speaker Verification," In this paper, we present a reverberation removal approach for speaker +verification, utilizing dual-label deep neural networks (DNNs). The networks +perform feature mapping between the spectral features of reverberant and clean +speech. Long short term memory recurrent neural networks (LSTMs) are trained to +map corrupted Mel filterbank (MFB) features to two sets of labels: i) the clean +MFB features, and ii) either estimated pitch tracks or the fast Fourier +transform (FFT) spectrogram of clean speech. The performance of reverberation +removal is evaluated by equal error rates (EERs) of speaker verification +experiments. +" +Star Routing: Between Vehicle Routing and Vertex Cover," We consider an optimization problem posed by an actual newspaper company, +which consists of computing a minimum length route for a delivery truck, such +that the driver only stops at street crossings, each time delivering copies to +all customers adjacent to the crossing. This can be modeled as an abstract +problem that takes an unweighted simple graph $G = (V, E)$ and a subset of +edges $X$ and asks for a shortest cycle, not necessarily simple, such that +every edge of $X$ has an endpoint in the cycle. 
+We show that the decision version of the problem is strongly NP-complete, +even if $G$ is a grid graph. Regarding approximate solutions, we show that the +general case of the problem is APX-hard, and thus no PTAS is possible unless P +$=$ NP. Despite the hardness of approximation, we show that given any +$\alpha$-approximation algorithm for metric TSP, we can build a +$3\alpha$-approximation algorithm for our optimization problem, yielding a +concrete $9/2$-approximation algorithm. +The grid case is of particular importance, because it models a city map or +some part of it. A usual scenario is having some neighborhood full of +customers, which translates as an instance of the abstract problem where almost +every edge of $G$ is in $X$. We model this property as $|E - X| = o(|E|)$, and +for these instances we give a $(3/2 + \varepsilon)$-approximation algorithm, +for any $\varepsilon > 0$, provided that the grid is sufficiently big. +" +Additive manufacturing of magnetic shielding and ultra-high vacuum flange for cold atom sensors," Recent advances in the understanding and control of quantum technologies, +such as those based on cold atoms, have resulted in devices with extraordinary +metrological sensitivities. To realise this potential outside of a lab +environment the size, weight and power consumption need to be reduced. Here we +demonstrate the use of laser powder bed fusion, an additive manufacturing +technique, as a production technique for the components that make up quantum +sensors. As a demonstration we have constructed two key components using +additive manufacturing, namely magnetic shielding and vacuum chambers. The +initial prototypes for magnetic shields show shielding factors within a factor +of 3 of conventional approaches. The vacuum demonstrator device shows that +3D-printed titanium structures are suitable for use as vacuum chambers, with +the test system reaching base pressures of $5 \pm 0.5 \times 10^{-10}$ mbar. 
+These demonstrations show considerable promise for the use of additive
+manufacturing for cold atom based quantum technologies, in future enabling
+improved integrated structures, allowing for the reduction in size, weight and
+assembly complexity.
+"
+EEG-GAN: Generative adversarial networks for electroencephalographic (EEG) brain signals," Generative adversarial networks (GANs) have recently been highly successful
+in generative applications involving images and are starting to be applied to
+time series data. Here we describe EEG-GAN as a framework to generate
+electroencephalographic (EEG) brain signals. We introduce a modification to the
+improved training of Wasserstein GANs to stabilize training and investigate a
+range of architectural choices critical for time series generation (most
+notably up- and down-sampling). For evaluation we consider and compare
+different metrics such as Inception score, Frechet inception distance and
+sliced Wasserstein distance, together showing that our EEG-GAN framework
+generated naturalistic EEG examples. It thus opens up a range of new generative
+application scenarios in the neuroscientific and neurological context, such as
+data augmentation in brain-computer interfacing tasks, EEG super-sampling, or
+restoration of corrupted data segments. The possibility of generating signals
+of a certain class and/or with specific properties may also open a new avenue
+for research into the underlying structure of brain signals.
+"
+Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification," Improving speech system performance in noisy environments remains a
+challenging task, and speech enhancement (SE) is one of the effective
+techniques to solve the problem. 
Motivated by the promising results of
+generative adversarial networks (GANs) in a variety of image processing tasks,
+we explore the potential of conditional GANs (cGANs) for SE, and in particular,
+we make use of the image processing framework proposed by Isola et al. [1] to
+learn a mapping from the spectrogram of noisy speech to an enhanced
+counterpart. The SE cGAN consists of two networks, trained in an adversarial
+manner: a generator that tries to enhance the input noisy spectrogram, and a
+discriminator that tries to distinguish between enhanced spectrograms provided
+by the generator and clean ones from the database using the noisy spectrogram
+as a condition. We evaluate the performance of the cGAN method in terms of
+perceptual evaluation of speech quality (PESQ), short-time objective
+intelligibility (STOI), and equal error rate (EER) of speaker verification (an
+example application). Experimental results show that the cGAN method overall
+outperforms the classical short-time spectral amplitude minimum mean square
+error (STSA-MMSE) SE algorithm, and is comparable to a deep neural
+network-based SE approach (DNN-SE).
+"
+The (Un)reliability of saliency methods," Saliency methods aim to explain the predictions of deep neural networks.
+These methods lack reliability when the explanation is sensitive to factors
+that do not contribute to the model prediction. We use a simple and common
+pre-processing step ---adding a constant shift to the input data--- to show
+that a transformation with no effect on the model can cause numerous methods to
+incorrectly attribute. In order to guarantee reliability, we posit that methods
+should fulfill input invariance, the requirement that a saliency method mirror
+the sensitivity of the model with respect to transformations of the input. We
+show, through several examples, that saliency methods that do not satisfy input
+invariance result in misleading attribution.
+"
+Exploiting Sparsity in the Coefficient Matching Conditions in Sum-of-Squares Programming using ADMM," This paper introduces an efficient first-order method based on the
+alternating direction method of multipliers (ADMM) to solve semidefinite
+programs (SDPs) arising from sum-of-squares (SOS) programming. We exploit the
+sparsity of the \emph{coefficient matching conditions} when SOS programs are
+formulated in the usual monomial basis to reduce the computational cost of the
+ADMM algorithm. Each iteration of our algorithm requires one projection onto
+the positive semidefinite cone and the solution of multiple quadratic programs
+with closed-form solutions free of any matrix inversion. Our techniques are
+implemented in the open-source MATLAB solver SOSADMM. Numerical experiments on
+SOS problems arising from unconstrained polynomial minimization and from
+Lyapunov stability analysis for polynomial systems show speed-ups compared to
+the interior-point solver SeDuMi and the first-order solver CDCS.
+"
+Time Stretch Inspired Computational Imaging," We show that dispersive propagation of light followed by phase detection has
+properties that can be exploited for extracting features from the waveforms.
+This discovery is spearheading the development of a new class of
+physics-inspired algorithms for feature extraction from digital images with
+unique properties and superior dynamic range compared to conventional
+algorithms. In certain cases, these algorithms have the potential to be an
+energy efficient and scalable substitute for synthetically fashioned
+computational techniques in practice today.
+"
+The ladder physics in the Spin Fermion model," A link is established between the spin-fermion (SF) model of the cuprates and
+the approach based on the analogy between the physics of doped Mott insulators
+in two dimensions and the physics of fermionic ladders.
This enables one to use
+nonperturbative results derived for fermionic ladders to move beyond the
+large-N approximation in the SF model. It is shown that the paramagnon exchange
+postulated in the SF model has exactly the right form to facilitate the
+emergence of the fully gapped d-Mott state in the region of the Brillouin zone
+at the hot spots of the Fermi surface. Hence the SF model provides an adequate
+description of the pseudogap.
+"
+A Bayesian framework for molecular strain identification from mixed diagnostic samples," We provide a mathematical formulation and develop a computational framework
+for identifying multiple strains of microorganisms from mixed samples of DNA.
+Our method is applicable in public health domains where efficient
+identification of pathogens is paramount, e.g., for the monitoring of disease
+outbreaks. We formulate strain identification as an inverse problem that aims
+at simultaneously estimating a binary matrix (encoding presence or absence of
+mutations in each strain) and a real-valued vector (representing the mixture of
+strains) such that their product is approximately equal to the measured data
+vector. The problem at hand has a similar structure to blind deconvolution,
+except for the presence of binary constraints, which we enforce in our
+approach. Following a Bayesian approach, we derive a posterior density. We
+present two computational methods for solving the non-convex maximum a
+posteriori estimation problem. The first one is a local optimization method
+that is made efficient and scalable by decoupling the problem into smaller
+independent subproblems, whereas the second one yields a global minimizer by
+converting the problem into a convex mixed-integer quadratic programming
+problem. The decoupling approach also provides an efficient way to integrate
+over the posterior.
This provides useful information about the ambiguity of the
+underdetermined problem and, thus, the uncertainty associated with numerical
+solutions. We evaluate the potential and limitations of our framework in silico
+using synthetic and experimental data with available ground truths.
+"
+Toward transient finite element simulation of thermal deformation of machine tools in real-time," Finite element models without simplifying assumptions can accurately describe
+the spatial and temporal distribution of heat in machine tools as well as the
+resulting deformation. In principle, this allows one to correct for
+displacements of the Tool Centre Point and enables high precision
+manufacturing. However, the computational cost of FEM models and the
+restriction to generic algorithms in commercial tools like ANSYS prevent their
+operational use since simulations have to run faster than real-time. For the
+case where heat diffusion is slow compared to machine movement, we introduce a
+tailored implicit-explicit multi-rate time stepping method of higher order
+based on spectral deferred corrections. Using the open-source FEM library DUNE,
+we show that fully coupled simulations of the temperature field are possible
+in real-time for a machine consisting of a stock sliding up and down on rails
+attached to a stand.
+"
+Enhancing speed and scalability of the ParFlow simulation code," Regional hydrology studies are often supported by high resolution simulations
+of subsurface flow that require expensive and extensive computations. Efficient
+usage of the latest high performance parallel computing systems becomes a
+necessity. The simulation software ParFlow has been demonstrated to meet this
+requirement and shown to have excellent solver scalability for up to 16,384
+processes. In the present work we show that the code requires further
+enhancements in order to fully take advantage of current petascale machines.
We
+identify ParFlow's way of parallelizing the computational mesh as a
+central bottleneck. We propose to reorganize this subsystem using fast mesh
+partition algorithms provided by the parallel adaptive mesh refinement library
+p4est. We realize this in a minimally invasive manner by modifying selected
+parts of the code to reinterpret the existing mesh data structures. We evaluate
+the scaling performance of the modified version of ParFlow, demonstrating good
+weak and strong scaling up to 458k cores of the Juqueen supercomputer, and test
+an example application at large scale.
+"
+On the rank of universal quadratic forms over real quadratic fields," We study the minimal number of variables required by a totally positive
+definite diagonal universal quadratic form over a real quadratic field $\mathbb
+Q(\sqrt D)$ and obtain lower and upper bounds for it in terms of certain sums
+of coefficients of the associated continued fraction. We also estimate such
+sums in terms of $D$ and establish a link between continued fraction expansions
+and special values of $L$-functions in the spirit of Kronecker's limit formula.
+"
+"Deep Learning for Medical Image Processing: Overview, Challenges and Future"," The healthcare sector is totally different from other industries. It is a
+high-priority sector, and people expect the highest level of care and services
+regardless of cost. It has not met social expectations even though it consumes
+a huge percentage of the budget. Mostly, the interpretation of medical data is
+done by medical experts. Image interpretation by human experts is quite
+limited due to its subjectivity, the complexity of the images, extensive
+variations across different interpreters, and fatigue. After its success in
+other real-world applications, deep learning is also providing exciting
+solutions with good accuracy for medical imaging and is seen as a key method
+for future applications in the health sector.
In this chapter, we discuss
+state-of-the-art deep learning architectures and their optimization used for
+medical image segmentation and classification. In the last section, we discuss
+the challenges of deep learning based methods for medical imaging and open
+research issues.
+"
+A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning," Automatic decision-making approaches, such as reinforcement learning (RL),
+have been applied to (partially) solve the resource allocation problem
+adaptively in the cloud computing system. However, a complete cloud resource
+allocation framework exhibits high dimensions in state and action spaces, which
+prohibit the usefulness of traditional RL techniques. In addition, high power
+consumption has become one of the critical concerns in design and control of
+cloud computing systems, which degrades system reliability and increases
+cooling cost. An effective dynamic power management (DPM) policy should
+minimize power consumption while maintaining performance degradation within an
+acceptable level. Thus, a joint virtual machine (VM) resource allocation and
+power management framework is critical to the overall cloud computing system.
+Moreover, a novel solution framework is necessary to address the even higher
+dimensions in state and action spaces. In this paper, we propose a novel
+hierarchical framework for solving the overall resource allocation and power
+management problem in cloud computing systems. The proposed hierarchical
+framework comprises a global tier for VM resource allocation to the servers and
+a local tier for distributed power management of local servers. The emerging
+deep reinforcement learning (DRL) technique, which can deal with complicated
+control problems with large state space, is adopted to solve the global tier
+problem.
Furthermore, an autoencoder and a novel weight sharing structure are
+adopted to handle the high-dimensional state space and accelerate the
+convergence speed. On the other hand, the local tier of distributed server
+power management comprises an LSTM based workload predictor and a model-free
+RL based power manager, operating in a distributed manner.
+"
+Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization," This paper concerns the problem of recovering an unknown but structured
+signal $x \in R^n$ from $m$ quadratic measurements of the form
+$y_r=|\langle a_r, x\rangle|^2$ for $r=1,2,...,m$. We focus on the
+under-determined setting where the number of measurements is significantly
+smaller than the dimension of the signal ($m<