Please note: This is an on-site position in Copenhagen, Denmark. Applications for remote work will not be considered.

Are you a skilled young soul with an affinity (and a passion, duh!) for machine learning, data science, jobs and coffee? Let’s chat.

At Relink, we match people with jobs and companies with new hires.

This is similar to a recommendation engine, but you cannot just slap some collaborative filtering on it and call it a day. Using historical public and private data, we apply unsupervised learning and graph modeling to find latent properties of jobs and people, allowing our matching capabilities to go beyond just the CV.

We are looking for a Data Science Intern to work with our team of data scientists and engineers on endeavors that will change how companies hire and how people get hired.

As part of Team Science at Relink your work is a crucial part of our product development cycle. We try things out, figure out if they work or why they do not. Not everything we do succeeds, but when we do succeed, we take these projects all the way to production – with the help of our data and backend engineers. An honest feedback loop is at the core of what we do and who we are.

It is important that you understand the practicalities of the real world and that you can work with, or learn to work with, the newest technologies and methods in the machine learning and deep learning space. At the same time, you should know how your work applies in production. As a Data Science Intern, you will be working with Natural Language Processing (NLP) and Machine Learning (ML), so a solid software development discipline is needed.

For this position, you need to be based in Copenhagen, work on-site, and have a valid work permit for Denmark. As much as we would like to fly you over here, we do not offer relocation for this position.

We want you to:

  • Think and develop at scale to handle datasets of hundreds of millions of profiles.

  • Optimise not by shaving off percentage points, but by rethinking the problem.

  • Work closely with our data engineers and developers to ensure elegant, modular integrations in production.

  • Follow current NLP and AI/ML research.

  • Assist with experimental design.

  • Measure everything.

Experience with the following technologies is a major plus:

  • NLP, preferably with knowledge beyond just NLTK and other standard tools, e.g., deep learning techniques and frameworks.

  • Scala and/or functional programming

  • Python and/or R

  • Spark and Spark Streaming

  • TDD and BDD

  • Containerization (Docker, etc.)

  • Databricks