Digital currencies, such as Bitcoin, offer convenience and security to criminals operating in the black marketplace. Some Bitcoin marketplaces, such as Silk Road, even claim anonymity. This claim contradicts the findings in this work, where long-term transactional behavior is used to identify and verify account holders. Transaction timestamps and network properties observed over time contribute to this finding. The timestamp of each transaction is the result of many factors: the desire to purchase an item, daily schedule and activities, as well as hardware and network latency. Dynamic network properties of the transaction, such as coin flow and the number of edge outputs and inputs, further reveal account identity. In this paper, we propose a novel methodology for identifying and verifying Bitcoin users based on the observation of Bitcoin transactions over time. The behavior we attempt to quantify roughly occurs in the social band of Newell's time scale. A subset of the Blockchain 230686 is taken, selecting users that initiated between 100 and 1000 unique transactions per month for at least 6 different months. This dataset shows evidence of being nonrandom and nonlinear, so a dynamical systems approach is taken. Classification and authentication accuracies are obtained under various representations of the monthly Bitcoin samples: outgoing transactions, as well as both outgoing and incoming transactions, are considered, along with the timing and dynamic network properties of transaction sequences. The most appropriate representations of monthly Bitcoin samples are proposed. Results show an inherent lack of anonymity, revealed by exploiting patterns in long-term transactional behavior.
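The identification pipeline described in the abstract can be sketched roughly as follows. The concrete feature set (inter-transaction gap statistics, average edge inputs/outputs, and coin flow) and the nearest-centroid matcher are illustrative assumptions for this sketch, not the paper's exact method; all function and variable names are hypothetical.

```python
import numpy as np

def monthly_features(txs):
    """Summarize one month of a user's transactions.
    Each tx is a tuple (timestamp_s, n_inputs, n_outputs, coin_flow)."""
    txs = sorted(txs)
    ts = np.array([t[0] for t in txs], dtype=float)
    gaps = np.diff(ts) if len(ts) > 1 else np.array([0.0])
    return np.array([
        gaps.mean(), gaps.std(),           # timing behavior
        np.mean([t[1] for t in txs]),      # average edge inputs
        np.mean([t[2] for t in txs]),      # average edge outputs
        np.mean([t[3] for t in txs]),      # average coin flow
    ])

def enroll(history):
    """history: {user_id: [month_sample, ...]} -> per-user feature centroid."""
    return {u: np.mean([monthly_features(m) for m in ms], axis=0)
            for u, ms in history.items()}

def identify(sample, centroids):
    """Return the enrolled user whose centroid is nearest to the sample."""
    f = monthly_features(sample)
    return min(centroids, key=lambda u: np.linalg.norm(centroids[u] - f))
```

Authentication (verification) would follow the same idea, but threshold the distance to a single claimed user's centroid instead of ranking all users.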
KEYWORDS: Visualization, Visual process modeling, Data modeling, Control systems, Remote sensing, Clouds, Data processing, Robotics, Cognitive modeling, Sensors
We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment,
including people, and uses the model to process perceptual information and to plan its movements. This paper describes
the structure of this architecture.
The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library
for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The
RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This
Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the
degree of detail required for the task.
As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference
component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and
notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction
unexpectedly.
Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's
actions. We report experimental results in indoor environments.
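The Match-Mediated Difference idea, comparing a real camera frame with the corresponding virtual-camera render and flagging significant divergence, can be sketched as below. Grayscale frames and the two threshold parameters are assumptions of this sketch, not values from the actual system.

```python
import numpy as np

def match_mediated_difference(real, virtual, pixel_thresh=30, area_thresh=50):
    """Compare a real camera frame with the matching virtual-camera render
    (both HxW grayscale uint8 arrays). Returns (is_significant, mask),
    where mask marks pixels whose intensities diverge beyond pixel_thresh."""
    diff = np.abs(real.astype(int) - virtual.astype(int))
    mask = diff > pixel_thresh
    # A new object, or one that moved unexpectedly, shows up as a large
    # diverging region; only then would Soar/RS be notified.
    return mask.sum() > area_thresh, mask
```

In practice the comparison would run per region or per tracked object rather than over raw pixels, but the structure, render, diff, threshold, notify, is the same.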
One of the objectives of Cognitive Robotics is to construct robot systems that can be directed to achieve real-world goals by high-level directions rather than complex, low-level robot programming. Such a system must have the ability to represent, problem-solve, and learn about its environment, as well as communicate with other agents. In previous work, we proposed ADAPT, a Cognitive Architecture that views perception as top-down, goal-oriented, and part of the problem-solving process. Our approach is linked to a Soar-based problem-solving and learning framework. In this paper, we present an architecture for the perceptive and world-modelling components of ADAPT and report on experimental results using this architecture to predict complex object behaviour.
A novel aspect of our approach is a 'mirror system' that keeps the modelled background and foreground objects synchronized with observations and task-based expectations. This builds on our prior work on comparing real and synthetic images. We show results for a moving object that collides with and rebounds from its environment, demonstrating that this perception-based problem-solving approach has the potential to predict complex object motions.
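The collide-and-rebound prediction can be illustrated with a toy one-dimensional forward simulation run faster than real time, a deliberately simplified stand-in for stepping a physics engine such as PhysX ahead of the world. The function and all its parameters are hypothetical.

```python
def predict_path(x, v, dt=0.01, steps=500, lo=0.0, hi=1.0, restitution=1.0):
    """Roll a 1-D point mass forward in simulated time, rebounding off
    walls at lo and hi. Because each step is just arithmetic, many
    simulated seconds can be explored per real second, letting a planner
    preview where the object will be before it gets there."""
    path = []
    for _ in range(steps):
        x += v * dt
        if x < lo:            # hit left wall: reflect position, flip velocity
            x, v = lo + (lo - x), -v * restitution
        elif x > hi:          # hit right wall: reflect position, flip velocity
            x, v = hi - (x - hi), -v * restitution
        path.append(x)
    return path
```

A planner could call this repeatedly with different candidate actions (different initial velocities) and pick the action whose predicted path best matches the goal.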