About Ripjar
Ripjar is a UK-based software company that uses data and machine learning technologies to help companies and governments prevent financial crime and terrorism. For example, our software has helped many financial institutions and corporations comply with the recently added sanctions on Russian entities.
Ripjar originally spun out from GCHQ, now has 130 staff based in Cheltenham and remotely, and is beginning to expand globally. We have two successful, inter-related products: Labyrinth Screening and Labyrinth Intelligence. Labyrinth Screening allows companies to monitor their customers or suppliers for entities that they are not allowed to do business with, or do not want to (for ethical or environmental reasons). Labyrinth Intelligence empowers organisations to perform deep investigations into varied datasets to find interesting patterns and relationships.
Data infuses everything Ripjar does. We work with a wide variety of datasets at all scales, including an always-growing archive of 8 billion news articles in (nearly!) every language in the world going back over 30 years, sanctions and watchlist data provided by governments, and data on 250 million organisations and their ownership from global corporate registries.
About the Role
Ripjar has several engineering teams that are responsible for the processing infrastructure and many of the analytics that collect, organise, enrich and distribute this data. Central to almost all of Ripjar’s systems is the Data Collection Hub, which captures data from various sources, processes and analyses it, and then forwards it on to multiple end-user applications. The system is developed and maintained by three teams of software engineers, data engineers and data scientists.
We are looking for an individual with at least two years' industrial or commercial experience in data processing systems to come in and add to this team. Ripjar values engineers who are thoughtful, thorough problem solvers, able to learn new technologies, ideas and paradigms quickly.
Technology Stack
The specific technical skills you possess aren’t as important to us as the ability to understand complex systems and get to the heart of problems. We do, however, expect you to be fluent in at least one programming language, to have at least two years' experience working with moderately complex software systems in production, and to have a curiosity about and interest in learning more.
In this role, you will be using Python (specifically PySpark) and Node.js for processing data, backed by various Hadoop stack technologies such as HDFS and HBase. MongoDB and Elasticsearch are used for indexing smaller datasets. Airflow and NiFi are used to coordinate the processing of data, while Jenkins, Jira, Confluence and GitHub are used as support tools. We use Ansible to manage configuration and deployments. Most developers use MacBooks for development, and our servers all run the CentOS flavour of Linux.
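To give a flavour of the day-to-day work, here is a minimal PySpark sketch of the kind of batch job this stack supports. It is illustrative only: the HDFS paths, dataset and "language" column are hypothetical examples, not Ripjar's actual schema.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: count collected news articles per language for one day.
spark = SparkSession.builder.appName("article-language-counts").getOrCreate()

# Read a day's worth of collected articles from HDFS (path is illustrative).
articles = spark.read.json("hdfs:///data/articles/2024-01-01/")

# Aggregate article counts per detected language, most common first.
counts = (
    articles.filter(F.col("language").isNotNull())
    .groupBy("language")
    .count()
    .orderBy(F.desc("count"))
)

# Write the results back to HDFS for downstream consumers.
counts.write.mode("overwrite").parquet("hdfs:///data/stats/article_language_counts/")

spark.stop()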
If you have any experience with this tech stack, that’s useful, as is a numerate degree such as Computer Science, but neither is required for the role.
Responsibilities:
Requirements:
Salary and Benefits