# Papers

The generalized correlation approach, which has been used successfully in statistical radiophysics to describe non-Gaussian random processes, is proposed for describing stochastic financial processes. The approach is applied to a non-Gaussian random walk with independent, identically distributed increments in the general case, and high-order correlations are investigated. The cumulants of an asymmetrically truncated Lévy distribution are found. The behavior of an asymmetrically truncated Lévy flight, as a particular case of such a random walk, is considered. It is shown that, in the Lévy regime, high-order correlations between values of the asymmetrically truncated Lévy flight exist. The source of these high-order correlations is the non-Gaussianity of the increments: increment skewness generates a threefold correlation, and increment kurtosis generates a fourfold correlation.
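As a quick numerical illustration of the cumulant mechanism behind these correlations (an independent sketch, not the paper's code), the following uses the additivity of cumulants for i.i.d. sums: skewed increments make the third cumulant of the walk grow linearly with the number of steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walks = 50, 200_000

# Skewed i.i.d. increments: a centered Exp(1) variable has mean 0,
# variance 1, skewness 2, hence third cumulant k3 = 2.
increments = rng.exponential(1.0, size=(n_walks, n_steps)) - 1.0
walks = increments.cumsum(axis=1)

# Cumulants of i.i.d. sums are additive, so the third cumulant of the
# walk after n steps should be n * k3 = 2n.
k3_empirical = np.mean(walks[:, -1] ** 3)
k3_expected = 2.0 * n_steps
print(k3_empirical, k3_expected)
```

For Gaussian (zero-skewness) increments the same statistic would vanish, which is the sense in which non-Gaussianity of the increments is the source of the higher-order correlations.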
We investigate a simple variation of the Generalized Harmonic method for evolving the Einstein equations. Flat-space wave equations for the metric perturbations are separated from the Ricci tensor, with the remainder of the Ricci tensor becoming a source term for these wave equations. We demonstrate that this splitting method allows accurate simulation of compact objects with gravitational field strengths up to those of neutron stars. The method could thus provide a straightforward path for adding general relativistic effects to astrophysics simulations, such as core collapse, accretion disks, and extreme-mass-ratio systems.
The number of citations is a widely used metric for evaluating the scientific credit of papers, scientists, and journals. However, a paper with few citations can be more influential than a heavily cited one if those citations come from prestigious scientists. In this paper, we argue that by whom a paper is cited matters more than merely how often it is cited. Accordingly, we propose an interactive model of author-paper bipartite networks, together with an iterative algorithm that produces better rankings for scientists and their publications. The main advantage of this method is twofold: (i) it is parameter-free; (ii) it accounts for the relationship between the prestige of scientists and the quality of their publications. We conducted experiments on real publication data in econophysics and used the method to evaluate the influence of related scientific journals. A comparison between the rankings produced by our method and simple citation counts suggests that our method is effective in distinguishing prestige from popularity.
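The abstract does not spell out the update rule, so the following toy sketch only illustrates the general idea of a parameter-free iterative ranking on an author-paper bipartite network; the data, names, and the specific update equations are illustrative assumptions, not the paper's published algorithm.

```python
# Hypothetical toy data: papers, their authors, and who cites what.
authors_of = {          # paper -> authors
    "p1": ["alice"], "p2": ["bob"], "p3": ["alice", "carol"],
}
citations = {           # paper -> papers it cites
    "p1": ["p2", "p3"], "p2": ["p3"], "p3": [],
}

papers = sorted(authors_of)
authors = sorted({a for au in authors_of.values() for a in au})

quality = {p: 1.0 / len(papers) for p in papers}
prestige = {a: 1.0 / len(authors) for a in authors}

for _ in range(100):
    # A paper's quality accumulates the prestige of the authors
    # of the papers that cite it.
    new_q = {p: 0.0 for p in papers}
    for citing, cited_list in citations.items():
        for cited in cited_list:
            for a in authors_of[citing]:
                new_q[cited] += prestige[a]
    # An author's prestige accumulates the quality of their own papers.
    new_r = {a: sum(quality[p] for p in papers if a in authors_of[p])
             for a in authors}
    # Normalize both score vectors; no free parameters are introduced.
    zq = sum(new_q.values()) or 1.0
    zr = sum(new_r.values()) or 1.0
    quality = {p: v / zq for p, v in new_q.items()}
    prestige = {a: v / zr for a, v in new_r.items()}

print(quality, prestige)
```

In this toy network "p3" is cited by both other papers, so it ends up ranked above "p2", which is ranked above the uncited "p1", illustrating how prestige and quality reinforce each other through the iteration.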
We propose a phase model to study cascade failure in power grids composed of generators and loads. If the power demand is below a critical value, the model system of power grids maintains the standard frequency by feedback control. On the other hand, if the power demand exceeds the critical value, an electric failure occurs via step out (loss of synchronization) or voltage collapse. The two failures are incorporated as two removal rules of generator nodes and load nodes. We perform direct numerical simulation of the phase model on a scale-free network and compare the results with a mean-field approximation.
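A minimal sketch of a phase (swing-equation) model of this kind, with illustrative parameters rather than the paper's: generators inject power, loads draw it, and below the critical demand the network settles into a synchronous state with vanishing frequency deviations.

```python
import numpy as np

# Four-node ring: two generators (P = +1) and two loads (P = -1).
# All parameter values here are illustrative assumptions.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring adjacency
P = np.array([1.0, -1.0, 1.0, -1.0])        # net injected power
K, D, M, dt = 5.0, 1.0, 1.0, 0.01           # coupling, damping, inertia, step

theta = np.zeros(4)   # phases
omega = np.zeros(4)   # frequency deviations
for _ in range(20_000):
    # Swing equation: M * d(omega)/dt = P - D*omega + K * sum_j A_ij sin(theta_j - theta_i)
    coupling = K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    domega = (P - D * omega + coupling) / M
    theta += dt * omega
    omega += dt * domega

# Below the critical demand the grid synchronizes: omega -> 0 everywhere.
print(np.max(np.abs(omega)))
```

Raising the load magnitudes past the point where the sine coupling can balance the injected power destroys this fixed point, which is the step-out (loss of synchronization) failure mode that triggers node removal in the cascade model.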
Quantitative analysis of empirical data from online social networks reveals group dynamics in which emotions are involved (Šuvakov et al.). A full understanding of the underlying mechanisms, however, remains a challenging task. In this paper, we use agent-based computer simulations to study the dynamics of emotional communication in online social networks. The rules that govern how the agents interact are motivated by, and the realistic network structure and several important parameters are inferred from, an empirical dataset of the MySpace social network. Each agent's emotional state is characterized by two variables, arousal (reactivity to stimuli) and valence (attractiveness or aversiveness), in terms of which common emotions can be defined. An agent's action is triggered by increased arousal. High-resolution dynamics are implemented in which each message carrying an agent's emotion along a network link is tracked, and its effect on the recipient agent ages continuously in time. Our results demonstrate that (i) aggregated group behavior can arise from the individual emotional actions of agents; (ii) collective states characterized by temporal correlations and dominant positive emotions emerge, as in the empirical system; and (iii) the nature of the driving signal, i.e., the rate at which users step into the online world, has profound effects on the build-up of the coherent behavior observed for users in online social networks. Furthermore, our simulations suggest that spreading patterns differ between emotions with entirely different emotional content, e.g., "enthusiastic" and "ashamed". **All data used in this study are fully anonymized.**
Records of time-stamped social interactions between pairs of individuals (e.g., face-to-face conversations, e-mail exchanges, and phone calls) constitute a so-called temporal network. A remarkable difference between temporal networks and conventional static networks is that time-stamped events, rather than links, are the unit elements generating the collective behavior of nodes. We propose an importance measure for single interaction events. Generalizing the concept of the advance of an event proposed by Kossinets, Kleinberg, and Watts [Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2008), p. 435], we regard an event as central when it carries new information about others to the two nodes involved in it. We find that the proposed measure properly quantifies the importance of events in connecting nodes along time-ordered paths. Because of the strong heterogeneity in event importance present in real data, a small fraction of highly important events is necessary and sufficient to sustain the connectivity of temporal networks. In contrast to the robustness behavior of scale-free networks under link removal, however, this property results mainly from bursty activity patterns rather than from heterogeneous degree distributions.
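The following is a simplified, self-contained reading of such an event-importance measure, not the paper's exact definition: an event scores in proportion to how much new information about other nodes it delivers to its two participants along time-ordered paths.

```python
def event_importance(events, nodes):
    """events: list of (time, u, v) tuples, assumed sorted by time."""
    known = {n: {n} for n in nodes}   # sources each node has heard from
    scores = []
    for t, u, v in events:
        new_u = known[v] - known[u]   # information new to u
        new_v = known[u] - known[v]   # information new to v
        scores.append((t, u, v, len(new_u) + len(new_v)))
        merged = known[u] | known[v]  # after the event both share everything
        known[u], known[v] = merged, set(merged)
    return scores

events = [(1, "a", "b"), (2, "b", "c"), (3, "a", "b")]
print(event_importance(events, {"a", "b", "c"}))
# The repeated a-b contact at t=3 scores lower: most of its
# information has already been delivered by earlier events.
```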
We recently measured the average distance of users in the Facebook graph, spurring comments in the scientific community as well as in the general press ("Four Degrees of Separation"). A number of interesting criticisms have been made about the meaningfulness, methods and consequences of the experiment we performed. In this paper we want to discuss some methodological aspects that we deem important to underline in the form of answers to the questions we have read in newspapers, magazines, blogs, or heard from colleagues. We indulge in some reflections on the actual meaning of "average distance" and make a number of side observations showing that, yes, 3.74 "degrees of separation" are really few.

The aim of the paper is to derive, for a negative correlation function with a time parameter, the asymptotic distribution of the numerical generalized least-squares estimator of the unknown constant mean of a random field, and to compare it with the exact classical generalized least-squares estimator of that mean.
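For reference, given observations $X=(X_1,\dots,X_n)^{\top}$ of the field and the corresponding correlation matrix $R$, the classical generalized least-squares estimator of the constant mean $\theta$ is

```latex
\hat{\theta}_{\mathrm{GLS}}
  = \bigl(\mathbf{1}^{\top} R^{-1} \mathbf{1}\bigr)^{-1}
    \mathbf{1}^{\top} R^{-1} X ,
\qquad \mathbf{1} = (1,\dots,1)^{\top} .
```

The numerical estimator studied in the paper is then an approximation to this expression.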
We derive explicit recursive formulas for Target Close (TC) and Implementation Shortfall (IS) in the Almgren-Chriss framework. We explain how to compute the optimal starting and stopping times for IS and TC, respectively, given a minimum trading size. We also show how to add a minimum participation rate constraint (Percentage of Volume, PVol) for both TC and IS. We also study an alternative set of risk measures for the optimisation of algorithmic trading curves. We assume a self-similar process (e.g. L\'evy process, fractional Brownian motion or fractal process) and define a new risk measure, the $p$-variation, which reduces to the variance if the process is a Brownian motion. We deduce the explicit formula for the TC and IS algorithms under a self-similar process. We show that there is an equivalence between self-similar models and a family of risk measures called $p$-variations: assuming a self-similar process and calibrating empirically the parameter $p$ for the $p$-variation yields the same result as assuming a Brownian motion and using the $p$-variation as risk measure instead of the variance. We also show that $p$ can be seen as a measure of the aggressiveness: $p$ increases if and only if the TC algorithm starts later and executes faster. From the explicit expression of the TC algorithm one can compute the sensitivities of the curve with respect to the parameters up to any order. As an example, we compute the first order sensitivity with respect to both a local and a global surge of volatility. Finally, we show how the parameter $p$ of the $p$-variation can be implied from the optimal starting time of TC, and that under this framework $p$ can be viewed as a measure of the joint impact of market impact (i.e. liquidity) and volatility.
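For orientation, the classical Almgren-Chriss optimal liquidation curve under the variance risk measure (the special case corresponding to $p = 2$ in the $p$-variation family, in the continuous-time limit) can be computed directly. Parameter values below are illustrative assumptions, not calibrated.

```python
import numpy as np

X0, T = 100_000.0, 1.0               # shares to sell, horizon (in days)
sigma, eta, lam = 0.02, 1e-6, 1e-5   # volatility, temporary impact, risk aversion

# Closed-form Almgren-Chriss holdings trajectory:
#   x(t) = X0 * sinh(kappa * (T - t)) / sinh(kappa * T),
#   kappa = sqrt(lam * sigma^2 / eta).
kappa = np.sqrt(lam * sigma**2 / eta)
t = np.linspace(0.0, T, 101)
x = X0 * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)

print(x[0], x[-1])
```

A larger risk aversion gives a larger `kappa` and hence a more front-loaded (aggressive) execution, mirroring the role the paper attributes to $p$ as an aggressiveness parameter.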
We propose a framework to study optimal trading policies in a one-tick pro-rata limit order book, as typically arises in short-term interest rate futures contracts. The high-frequency trader chooses to trade via market orders or limit orders, represented respectively by impulse controls and regular controls. We model and discuss the consequences of the two main features of this particular microstructure: first, the limit orders sent by the high-frequency trader are only partially executed, so she has no control over the executed quantity. For this purpose, cumulative executed volumes are modelled by compound Poisson processes. Second, the high-frequency trader faces overtrading risk, i.e., the risk of sudden variations in her inventory. The consequences of this risk are investigated in the context of optimal liquidation. The optimal trading problem is studied by stochastic control and dynamic programming methods, which lead to a characterization of the value function in terms of an integro-quasi-variational inequality. We then provide the associated numerical resolution procedure and prove the convergence of this computational scheme. Next, we examine several situations where we can, on the one hand, simplify the numerical procedure by reducing the number of state variables and, on the other hand, focus on specific cases of practical interest. We examine both a market-making problem and a best-execution problem in the case where the mid-price process is a martingale. We also detail a high-frequency trading strategy in the case where (predictive) directional information on the mid-price is available. Each of the resulting strategies is illustrated by numerical tests.
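As a small illustration of the first microstructure feature, cumulative executed volume from resting limit orders can be simulated as a compound Poisson process: fills arrive at a Poisson rate with random sizes the trader does not control. Rates and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
lam, mean_fill, horizon = 2.0, 50.0, 10.0   # fills/sec, avg fill size, seconds

# Compound Poisson: Poisson number of fills over the horizon,
# uniform arrival times, i.i.d. (here exponential) fill sizes.
n_fills = rng.poisson(lam * horizon)
times = np.sort(rng.uniform(0.0, horizon, size=n_fills))
sizes = rng.exponential(mean_fill, size=n_fills)
cum_volume = np.cumsum(sizes)

print(n_fills, cum_volume[-1] if n_fills else 0.0)
```

The randomness of both the fill count and the fill sizes is exactly what leaves the trader with no control over the executed quantity, and large jumps in `cum_volume` are the source of the overtrading (inventory) risk discussed above.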