Why should you be interested in this role? Biofourmis is pioneering an entirely new category of digital health by developing clinically validated software-based therapeutics that provide better outcomes for patients and smarter engagement and tracking tools for clinicians. By combining machine learning with clinical science, we are creating a truly unique movement in the health space. Our team works in a cross-functional agile setup of Android developers, backend developers, designers, product managers, researchers, and scrum masters.

Biofourmis, headquartered in Boston, develops and delivers clinically validated software-based therapeutics to provide cost-effective solutions for payers, accelerated research and drug development for biopharmaceutical companies, advanced tools for clinicians to deliver personalized care, and, ultimately, better outcomes for patients. Our robust digital therapeutics products and pipeline cover multiple therapeutic areas, including heart failure, acute coronary syndrome, COPD, and chronic pain. A successful Series B round, strategic acquisitions, key multi-year commercial contracts, FDA approvals, a new U.S. headquarters, and industry recognition were among our achievements in 2019.

Summary: Our Kafka developer combines technical skills, communication skills, and business knowledge, and should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills (Apache Kafka, Big Data technologies, Spark/PySpark) and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

Requirements:
- 2+ years of hands-on experience with Kafka
- Expertise in Kafka brokers and connectors (must)
- Good knowledge of distributed pipelines and microservice design
- Development and support of stream-processing functions
- Preferably has worked with the free and open-source version of Kafka, i.e. all components using Apache distributions
- Maintaining and setting up clusters that are highly available and resilient
- Exposure to running Kafka as a container or in a Kubernetes workflow
- Working experience in Java/Python is mandatory
- Deploying, maintaining, and providing production support for Kafka clusters
- Good knowledge of Linux-based operating systems
- Experience with AWS/Azure/GCP is a plus

Responsibilities:
- Participate in the development, enhancement, and maintenance of web applications, both as an individual contributor and as a team member
- Lead the identification, isolation, resolution, and communication of problems within the production environment related to Kafka and data pipelines
- Lead other backend developers in applying technical skills related to Apache Kafka, Big Data technologies, and Spark/PySpark
- Perform independent functional and technical analysis for major projects supporting several corporate initiatives
- Communicate and work with IT partners and the user community at all levels, from senior management to developers to business SMEs, for project definition
- Work on multiple platforms and multiple projects concurrently
- Perform code and unit testing for complex-scope modules and projects
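The Kafka expertise above centers on one core abstraction: a topic is a set of partitions, each an append-only log, with messages routed by key and each consumer tracking its own offset. As a rough conceptual sketch only (plain Python, not the real Kafka client API; all names are invented for illustration):

```python
import hashlib

class MiniLog:
    """Toy model of a Kafka topic: keyed messages are hashed to a fixed
    set of partitions, and each consumer tracks its own read offset."""
    def __init__(self, partitions=3):
        self.partitions = [[] for _ in range(partitions)]

    def produce(self, key, value):
        # Same key -> same partition, which preserves per-key ordering.
        p = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[p].append((key, value))
        return p

    def consume(self, partition, offset):
        # A consumer polls a partition starting at its committed offset.
        msgs = self.partitions[partition][offset:]
        return msgs, offset + len(msgs)

topic = MiniLog()
p = topic.produce("device-42", {"hr": 71})
topic.produce("device-42", {"hr": 73})
msgs, next_offset = topic.consume(p, 0)
```

The same idea is why Kafka guarantees ordering only within a partition, not across a topic.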
Skill 1: Azure Data Engineering
- Strong Azure and data background with 5 to 12 years of overall experience, along with industry knowledge
- Strong experience in the Azure ecosystem: Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse Analytics, and ADLS Gen2
- Strong hands-on knowledge of Databricks (with Python, PySpark, Spark SQL, or Scala as the language)
- End-to-end implementation experience in data analytics solutions (data ingestion, processing, provisioning, and visualization) for large-scale, complex environments
- Strong SQL data-warehouse knowledge
- Hands-on experience developing enterprise solutions: designing and building frameworks, enterprise patterns, database design and development
- End-to-end cloud solutions on Azure (ADLS, Azure Data Factory, Azure Analysis Services, Azure Synapse Analytics)
- Batch solutions and distributed computing using ETL/ELT (Spark SQL, Spark DataFrames, ADF)
- DWBI (MSBI, Oracle, SQL Server), data modelling, performance tuning
- Mentor and lead data engineering teams to design, develop, test, and deploy high-performance data analytics solutions

Roles and Responsibilities
- Power BI and Azure Analysis Services are good to have
About the Company, Conviva: Conviva is the leader in streaming media intelligence, powered by its real-time platform. More than 250 industry leaders and brands – including CBS, CCTV, Cirque Du Soleil, DAZN, Disney+, HBO, Hulu, Sky, Sling TV, TED, Univision, and Warner Media – rely on Conviva to maximize their consumer engagement, deliver the quality experiences viewers expect, and drive revenue growth. With a global footprint of more than 500 million unique viewers watching 150 billion streams per year across 3 billion applications streaming on devices, Conviva offers streaming providers unmatched scale for continuous video measurement, intelligence, and benchmarking across every stream, every screen, every second. Conviva is privately held and headquartered in Silicon Valley, California, with offices around the world. For more information, please visit us at www.conviva.com.

What you get to do:
Be a thought leader. As one of the most senior technical minds in the India centre, influence our technical evolution by pushing the boundaries of what is possible, testing forward-looking ideas and demonstrating their value.
Be a technical leader. Demonstrate pragmatic skill in translating requirements into technical design.
Be an influencer. Understand challenges and collaborate with executives and stakeholders in a geographically distributed environment to influence them.
Be a technical mentor. Build respect within the team. Mentor senior engineers technically and contribute to the growth of talent in the India centre.
Be a customer advocate. Be empathetic to the customer and the domain, resolving ambiguity efficiently with the customer in mind.
Be a transformation agent. Passionately champion engineering best practices and share them across teams.
Be hands-on.
Participate regularly in code and design reviews, drive technical prototypes, and actively contribute to resolving difficult production issues.

What you bring to the role:
Thrive in a start-up environment and have a platform mindset.
Excellent communicator: demonstrated ability to succinctly describe complex technical designs and technology choices to both executives and developers.
Expert in Scala coding; broader JVM-based stack experience is a bonus.
Expert in big data technologies like Druid, Spark, Hadoop, Flink (or Akka), and Kafka.
Passionate about one or more engineering best practices that influence design, code quality, or developer efficiency.
Familiar with building distributed applications using web services and RESTful APIs.
Familiar with building SaaS platforms on either in-house data centres or public cloud providers.
Required: 5-10 years of experience in full-stack software development, including 3+ years working with data processing. Strong computational skills and the ability to code fluently in Java. Proficient in Angular, HTML5, and CSS3. Good experience with the AWS stack (S3, Redshift, Lambda, Kinesis). Strong knowledge of SQL and database architecture. Expertise in the Amazon Redshift database is desired. Self-starter and collaborator, with the ability to independently acquire the knowledge required to make the project succeed. Growth mindset: a desire to learn from others and make yourself better every day. Proficiency in design and code reviews. Preferred: Basic experience working with Amazon EMR and Spark. Prior experience implementing a large-scale data lake, preferably with Amazon or Google data processing technologies. AWS Certified Architect/Developer. Continued education and research into UI development trends and current design strategy and technology. Good understanding of code versioning tools such as Git and Bitbucket.
Key Responsibilities:
- Leverage the batch computation frameworks and our workflow management platform (Airflow) to assist in building out different data pipelines
- Lower the latency and bridge the gap between our production systems and our data warehouse by rethinking and optimizing our core data pipeline jobs
- Work with clients to create and optimize critical batch processing jobs in Spark
- Develop production-grade code using Scala/Spark and Python/Spark on Azure Databricks

Skills and Experience:
- Strong engineering background and an interest in data
- Good understanding of data analysis using SQL queries
- Strong command of Python or Scala as a programming language on Azure Databricks
- Experience developing and maintaining distributed systems built with Azure Databricks or native Apache Spark
- Experience building libraries and tooling that provide abstractions to users for accessing data
- Experience writing and debugging ETL jobs using a distributed data framework (Spark, Hadoop MapReduce, etc.) on Azure Databricks
- Experience optimizing the end-to-end performance of distributed systems
- Ability to recommend and implement ways to improve data reliability, efficiency, and quality
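The Spark pipeline work described above relies on lazy, chained transformations: nothing executes until an action is triggered. No cluster is needed to illustrate the idea; here is a minimal plain-Python sketch of that evaluation model (the `Pipeline` class and sample records are invented for illustration, not a Databricks or Spark API):

```python
class Pipeline:
    """Tiny stand-in for a Spark-style lazy transformation chain:
    map/filter only record operations; work happens at collect()."""
    def __init__(self, data):
        self._data = data
        self._ops = []

    def map(self, fn):
        self._ops.append(("map", fn))
        return self

    def filter(self, pred):
        self._ops.append(("filter", pred))
        return self

    def collect(self):
        # The "action": replay the recorded operations lazily over the data.
        out = iter(self._data)
        for kind, fn in self._ops:
            out = map(fn, out) if kind == "map" else filter(fn, out)
        return list(out)

rows = [{"user": "a", "ms": 1200}, {"user": "b", "ms": -5}, {"user": "c", "ms": 300}]
result = (Pipeline(rows)
          .filter(lambda r: r["ms"] >= 0)             # drop bad records
          .map(lambda r: {**r, "s": r["ms"] / 1000})  # derive seconds
          .collect())
```

Optimizing a real job is largely about reshaping this chain (pushing filters early, avoiding shuffles) before the action fires.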
Position 1: Azure Data Engineer
Experience: 3 to 5 years
Must-have tech stack: Azure Platform + Spark + Scala
Nice to have: Power BI, Azure certification
Budget: 14 LPA
Notice period: within 30 days only

Position 2: Lead Azure Data Engineer
Experience: 6 to 8.5 years (can go up to 9.5 if it falls within the budget)
Must-have tech stack: Azure Platform + Spark + Scala + Power BI
Other experience: Senior Data Engineer (1+ years) / Lead Engineer / Architect
Nice to have: Azure certification
Budget: 20 LPA
Notice period: within 30 days only
REQUIREMENT: Previous experience of working in large-scale data engineering. 4+ years of experience in data engineering and/or backend technologies; cloud experience (any) is mandatory. Previous experience architecting and designing backends for large-scale data processing. Familiarity and experience with different technologies related to data engineering – different database technologies, Hadoop, Spark, Storm, Hive, etc. Hands-on, with the ability to contribute a key portion of the data engineering backend. Self-inspired and motivated to drive for exceptional results. Familiarity and experience with the different stages of data engineering – data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis. Familiarity and experience with different DB technologies and how to scale them. RESPONSIBILITY: End-to-end responsibility for the data engineering architecture and design, and then its development and implementation. Build data engineering workflows for large-scale data processing. Discover opportunities in data acquisition. Bring industry best practices to the data engineering workflow. Develop data-set processes for data modelling, mining, and production. Take on additional technical responsibilities to drive an initiative to completion. Recommend ways to improve data reliability, efficiency, and quality. Goes out of their way to reduce complexity. Humble and outgoing – an engineering cheerleader.
Spark/Scala experience should be more than 2 years. A combination of Java and Scala is fine, or we are even fine with a Big Data developer with strong Core Java concepts. - Scala/Spark developer: strong proficiency in Scala on Spark (Hadoop) - Scala + Java is also preferred - Complete SDLC process and Agile methodology (Scrum) - Version control / Git
· Advanced Spark programming skills
· Advanced Python skills
· Data engineering ETL and ELT skills
· Expertise in streaming data
· Experience in the Hadoop ecosystem
· Basic understanding of cloud platforms
· Technical design skills, including alternative approaches
· Hands-on expertise in writing UDFs
· Hands-on expertise in streaming data ingestion
· Able to independently tune Spark scripts
· Advanced debugging skills and large-volume data handling
· Independently break down and plan technical tasks
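One of the skills listed above is writing UDFs: custom functions registered with the engine so they can be called from SQL. A real Spark UDF needs a Spark session, so as a self-contained stand-in this sketch registers a function with Python's built-in SQLite driver; the same idea applies, and the cleaning logic and table are invented for illustration:

```python
import sqlite3

def clean_phone(raw):
    """UDF body: normalize messy phone strings to their last 10 digits."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return digits[-10:] if len(digits) >= 10 else None

con = sqlite3.connect(":memory:")
# Register the Python function so SQL can call it, like spark.udf.register.
con.create_function("clean_phone", 1, clean_phone)
con.execute("CREATE TABLE leads (phone TEXT)")
con.executemany("INSERT INTO leads VALUES (?)",
                [("(415) 555-0100",), ("+91 98765 43210",), ("n/a",)])
rows = con.execute("SELECT clean_phone(phone) FROM leads").fetchall()
```

In Spark the registration call differs (`spark.udf.register` / `pandas_udf`), but the pattern of pushing row-level Python logic into the query engine is the same.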
3. Key Result Areas
· Create and maintain optimal data pipelines
· Assemble large, complex data sets that meet functional and non-functional business requirements
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Keep our data separated and secure
· Create data tools for the analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader
· Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics
· Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs
· Work with data and analytics experts to strive for greater functionality in our data systems

4. Knowledge, Skills and Experience
Core Skills: We are looking for a candidate with 7+ years of experience in a Data Engineer role, with experience using the following software/tools:
· Experience developing Big Data applications using Spark, Hive, Sqoop, Kafka, and MapReduce
· Experience with stream-processing systems: Spark Streaming, Storm, etc.
· Experience with object-oriented/functional scripting languages: Python, Scala, etc.
· Experience designing and building dimensional data models to improve the accessibility, efficiency, and quality of data
· Proficiency in writing advanced SQL, with expertise in SQL performance tuning; experience with data science and machine learning tools and technologies is a plus
· Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
· Experience with Azure cloud services is a plus
· Financial services knowledge is a plus
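The dimensional-modelling requirement above refers to star schemas: a central fact table of measurements joined to small descriptive dimension tables. A minimal sketch using Python's built-in sqlite3 (table and column names are hypothetical, chosen only to illustrate the fact/dimension join):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimension: descriptive attributes, one row per customer.
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
-- Fact: one row per event, foreign-keyed to the dimension.
CREATE TABLE fact_trade (trade_id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES dim_customer(customer_id),
                         amount REAL);
INSERT INTO dim_customer VALUES (1, 'EMEA'), (2, 'APAC');
INSERT INTO fact_trade VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")
# Typical star-schema query: aggregate facts, slice by a dimension attribute.
rows = con.execute("""
    SELECT d.region, SUM(f.amount) AS total
    FROM fact_trade f JOIN dim_customer d USING (customer_id)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
```

Keeping the dimensions narrow and the fact table append-only is what makes these aggregations cheap at scale.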
Data Engineering role at ThoughtWorks. ThoughtWorks India is looking for talented data engineers passionate about building large-scale data processing systems to help manage the ever-growing information needs of our clients. Our developers have been contributing code to major organizations and open source projects for over 25 years now. They've also been writing books, speaking at conferences, and helping push software development forward -- changing companies and even industries along the way. As consultants, we work with our clients to ensure we're delivering the best possible solution. Our Lead Dev plays an important role in leading these projects to success.

You will be responsible for:
- Creating complex data processing pipelines as part of diverse, high-energy teams
- Designing scalable implementations of the models developed by our Data Scientists
- Hands-on programming based on TDD, usually in a pair programming environment
- Deploying data pipelines in production based on Continuous Delivery practices

Ideally, you should have:
- 2-6 years of overall industry experience
- A minimum of 2 years of experience building and deploying large-scale data processing pipelines in a production environment
- Strong domain modelling and coding experience in Java/Scala/Python
- Experience building data pipelines and data-centric applications using distributed storage platforms like HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.) and distributed processing platforms like Hadoop, Spark, Hive, Oozie, Airflow, Kafka, etc. in a production setting
- Hands-on experience with at least one of MapR, Cloudera, Hortonworks and/or cloud offerings (AWS EMR, Azure HDInsight, Qubole, etc.)
Knowledge of software best practices like Test-Driven Development (TDD), Continuous Integration (CI), and Agile development. Strong communication skills, with the ability to work in a consulting environment, are essential. And here are some of the perks of being part of a unique organization like ThoughtWorks: A real commitment to "changing the face of IT" -- our way of thinking about diversity and inclusion. Over the past ten years, we've implemented a lot of initiatives to make ThoughtWorks a place that reflects the world around us, and to make this a welcoming home to technologists of all stripes. We're not perfect, but we're actively working towards true gender balance for our business and our industry, and you'll see that diversity reflected in our project teams and offices. Continuous learning: you'll be constantly exposed to new languages, frameworks, and ideas from your peers and as you work on different projects -- challenging you to stay at the top of your game. Support to grow as a technologist outside of your role at ThoughtWorks. This is why ThoughtWorkers have written over 100 books and can be found speaking at (and, ahem, keynoting) tech conferences all over the world. We love to learn and share knowledge, and you'll find a community of passionate technologists eager to back your endeavors, whatever they may be. You'll also receive financial support to attend conferences every year. An organizational commitment to social responsibility: ThoughtWorkers challenge each other to be just a little more thoughtful about the world around us, and we believe in using our profits for good. All around the world, you'll find ThoughtWorks supporting great causes and organizations in both official and unofficial capacities. If you relish the idea of being part of ThoughtWorks' Data Practice that extends beyond the work we do for our customers, you may find ThoughtWorks is the right place for you.
If you share our passion for technology and want to help change the world with software, we want to hear from you!
Job Description: We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.

Responsibilities:
- Working with Big Data tools and frameworks to provide requested capabilities
- Identifying development needs in order to improve and streamline operations
- Developing and managing BI solutions
- Implementing ETL processes and data warehousing
- Monitoring performance and managing infrastructure

Skills:
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience building stream-processing systems using solutions such as Kafka and Spark Streaming
- Good knowledge of the data querying tools SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases such as DynamoDB and MongoDB is an advantage
- Excellent written and verbal communication skills
Skill set we are looking for:
- 3-5 years of professional experience in a data engineering role
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Proficient understanding of Java/Scala
- Proficient understanding of distributed computing principles (Hadoop v2, MapReduce, HDFS)
- Good knowledge of Big Data querying tools such as Pig, Hive, and Impala
- Experience with Spark, Flink, and Kafka Streams
- Experience with NoSQL databases such as Aerospike and HBase
- Experience with Big Data ML toolkits such as Mahout, SparkML, or H2O
- Good understanding of the Lambda Architecture, along with its advantages and drawbacks
- Hands-on experience with AWS Big Data technologies such as EMR, Redshift, and Elasticsearch

Would be great if you have knowledge of, or a strong interest in, the following areas: advertising platforms, Scrum, Agile software development.

Your role will entail:
- Implementing ETL processes and selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Influencing key decisions on the architecture and implementation of scalable data processing and analytics structure
- Working with the Data Science team to bring machine learning models into production
- Building Hadoop MapReduce and Spark processing pipelines using Java and Python
- Building REST APIs for data access by systems across our infrastructure
- Focusing on performance, throughput, and latency, and driving these throughout our architecture
- Writing test automation, conducting code reviews, and taking end-to-end ownership of deployments to production

To learn more visit: www.lifesight.io
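The Lambda Architecture mentioned above pairs a batch layer (accurate but stale precomputed views, e.g. from nightly Hadoop jobs) with a speed layer (realtime increments, e.g. from Storm), merged at query time by the serving layer. A toy sketch of that merge, with invented ad-click counts:

```python
def merge_views(batch_view, speed_view):
    """Lambda-architecture serving layer: the batch view holds the
    authoritative counts up to the last batch run; the speed layer
    contributes deltas for events seen since then."""
    merged = dict(batch_view)
    for key, delta in speed_view.items():
        merged[key] = merged.get(key, 0) + delta
    return merged

batch = {"ad-1": 1000, "ad-2": 40}   # precomputed view (batch layer)
speed = {"ad-2": 3, "ad-3": 7}       # realtime increments (speed layer)
serving = merge_views(batch, speed)
```

The oft-cited drawback is visible even here: the same counting logic must exist twice, once in the batch job and once in the streaming job, which is what Kappa-style designs try to eliminate.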
Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig. To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms. They must have the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills, and the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with the business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. Demonstrated ability to perform high-quality work, with innovation, both independently and collaboratively.
Who we are: Searce is a Cloud, Automation & Analytics-led business transformation company focussed on helping futurify businesses. We help our clients become successful by helping them reimagine 'what's next' and then enabling them to realize that 'now'. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What we believe:
Best practices are overrated. Implementing best practices can only make one 'average'.
Honesty and transparency. We believe in the naked truth. We do what we tell and tell what we do.
Client partnership. Client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work: It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don't carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty. And we accept changes well, as that's what life's about.
Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges. We don't have time or adequate space for it.
Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo.
Experimental: We encourage curiosity and making mistakes.
Responsible: Driven. Self-motivated. Self-governing teams. We own it.
We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Introduction When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk. We are quite keen to meet you if: You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people. You like experimenting, taking risks and thinking big. 3 things this position is NOT about: This is NOT just a job; this is a passionate hobby for the right kind. This is NOT a boxed position. You will code, clean, test, build and recruit and you will feel that this is not really ‘work’. This is NOT a position for people who like to spend time on talking more than the time they spend doing. 3 things this position IS about: Attention to detail matters. Roles, titles, the ego does not matter; getting things done matters; getting things done quicker and better matters the most. Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars? Roles and Responsibilities Drive and define database design and development of real-time complex products. Strive for excellence in customer experience, technology, methodology, and execution. Define and own end-to-end Architecture from definition phase to go-live phase. Define reusable components/frameworks, common schemas, standards to be used & tools to be used and help bootstrap the engineering team. Performance tuning of application and database and code optimizations. 
Define database strategy, database design & development standards and SDLC, database customization & extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies. Architect and develop database schema, indexing strategies, views, and stored procedures for Cloud applications. Assist in defining scope and sizing of work; analyze and derive NFRs, participate in proof of concept development. Contribute to innovation and continuous enhancement of the platform. Define and implement a strategy for data services to be used by Cloud and web-based applications. Improve the performance, availability, and scalability of the physical database, including database access layer, database calls, and SQL statements. Design robust cloud management implementations including orchestration and catalog capabilities. Architect and design distributed data processing solutions using big data technologies - added advantage. Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers. Desired Skills Experience with Data Warehouse design, ETL (Extraction, Transformation & Load), architecting efficient software designs for DW platform. Hands-on experience in Big Data space (Hadoop Stack like M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc. Knowledge of NoSQL stores is a plus). Knowledge of other transactional Database Management Systems/Open database system and NoSQL database (MongoDB, Cassandra, Hbase etc.) is a plus. Good knowledge of data management principles like Data Architecture, Data Governance, Very Large Database Design (VLDB), Distributed Database Design, Data Replication, and High Availability. Must have experience in designing large-scale, highly available, fault-tolerant OLTP data management systems. Solid knowledge of any one of the industry-leading RDBMS like Oracle/SQL Server/DB2/MySQL etc. 
Expertise in providing data architecture solutions and recommendations that are technology-neutral. Experience in architecture consulting engagements is a plus. Deep understanding of technical and functional designs for databases, data warehousing, reporting, and data mining.

Education & Experience: Bachelors in Engineering or Computer Science (preferably from a premier school); an advanced degree in Engineering, Mathematics, Computer Science, or Information Technology is a plus. A highly analytical aptitude and a strong 'desire to deliver' outlive those fancy degrees! More so if you have been a techie from age 12. 2-5 years of experience in database design & development. 0+ years of AWS, Google Cloud Platform, or Hadoop experience. Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity.
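Among the database skills this role lists, indexing strategy is the easiest to demonstrate concretely: the same query flips from a full-table scan to an index search once a suitable index exists. A small sketch using Python's built-in sqlite3 (schema and data are invented; the exact EXPLAIN QUERY PLAN wording varies by SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)",
                [(f"c{i % 100}", i * 1.0) for i in range(1000)])

# Without an index, a lookup by customer must scan the whole table.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'c7'").fetchall()
before = plan[0][-1]   # plan detail text, e.g. 'SCAN orders'

con.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = 'c7'").fetchall()
after = plan[0][-1]    # the plan now names the index it searches
```

The same before/after discipline (inspect the plan, add the index, re-inspect) carries over to Oracle, SQL Server, MySQL, and Postgres, each with its own EXPLAIN syntax.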
Intro: Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.

What will you do: The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.
• Optimize and automate ingestion processes for a variety of data sources, such as click-stream, transactional, and many other sources
• Create and maintain optimal data pipeline architecture and ETL processes
• Assemble large, complex data sets that meet functional and non-functional business requirements
• Develop data pipelines and infrastructure to support real-time decisions
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
• Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
What will you need:
• 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
• Experience dealing with large-scale data; proficiency in writing and debugging complex SQL
• Experience working with AWS big data tools
• Ability to lead a project and implement best data practices and technology
• Data pipelining: strong command of building and optimizing data pipelines, architectures, and data sets
• Strong command of relational SQL and NoSQL databases, including Postgres
• Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
• Big Data: strong experience with big data tools and applications – Hadoop, Spark, HDFS, etc.
• AWS cloud services: EC2, EMR, RDS, Redshift
• Stream-processing systems: Storm, Spark Streaming, Flink, etc.
• Message queuing: RabbitMQ, etc.
• Software development and debugging: strong experience in object-oriented programming and scripting languages – Python, Java, C++, Scala, etc.
• Strong hold on data structures and algorithms

What would be a bonus:
• Prior experience working in a fast-growth startup
• Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data
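The workflow managers named above (Azkaban, Luigi, Airflow) share one core idea: a pipeline is a DAG of tasks, and the scheduler runs a task only after all of its upstream tasks finish. A toy dependency resolver in plain Python (the task names are invented, and this is not any tool's actual API):

```python
from collections import deque

def topo_order(deps):
    """Resolve a runnable ordering for a workflow DAG.
    deps maps task -> list of upstream tasks it depends on."""
    indegree = {t: len(up) for t, up in deps.items()}
    downstream = {t: [] for t in deps}
    for t, ups in deps.items():
        for u in ups:
            downstream[u].append(t)
    # Tasks with no unmet dependencies are ready to run.
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for d in sorted(downstream[t]):
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(deps):
        raise ValueError("cycle detected in workflow")
    return order

dag = {"extract": [], "clean": ["extract"], "load": ["clean"],
       "report": ["load"], "quality_check": ["clean"]}
order = topo_order(dag)
```

Real schedulers layer retries, backfills, and parallel executors on top, but scheduling decisions ultimately come down to this topological ordering.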
About Us: Remember the days when the phone rang and you didn't know who it was? What if it was the company you always dreamt of working for? A call from a hospital trying to tell you someone close to you got sick? Or just that stubborn sales guy? Our mission is to make it possible for you to know who's trying to contact you, and also to tell you when not to pick up. We want to remove all uncertainty, making your communication safe and efficient by separating the important stuff from the noise and creating trust, no matter if it's at the beginning of a call, in the middle of a transaction, or at the end of a signature. We are building a platform which empowers our users to take control of their own digital identity and makes their communication safer and more efficient. We are a diverse organization, with over 160 of the best minds coming from different backgrounds and joining hands to ensure our vision of building trust everywhere. Truecaller is one of the fastest growing tech companies in the world. We have 100 million daily active users around the world, with the strongest presence in South Asia, the Middle East, and North Africa. We are backed by some of the most prominent investors in the world, such as Sequoia Capital, Atomico, and Kleiner Perkins Caufield & Byers.

Your Mission: We're looking for someone who has an interest in system architecture but a passion for getting things done. You're smart enough to work at top companies, but you're picky about finding the right role. You're experienced, but you also like to learn new things. And you want to work with smart people and have fun building something great. Your challenge will be to build a scalable and reliable system while facing quickly growing global traffic. This will include producing and developing high-volume, low-latency applications for large systems and coping with the challenges of working in a distributed and highly concurrent environment.
You will also be coding new features and will have an active role in defining the backend architecture, which includes designing microservices and researching new alternatives and technologies together with the platform team.

Your skills: As far as your skills go, we'd love to hear about: JVM tuning and optimization; Scala and/or Java; Play Framework; non-relational databases; microservices architecture and patterns; DevOps and Continuous Delivery; good English skills, oral and written. Some other technologies that we use: reactive systems, Cassandra, Apache Kafka, Kubernetes, Docker, Spark, Google Cloud Platform. We all live and act by our values: Get Sh*t Done, Be Fearless, Help Each Other, and Never Give Up, and we expect you to do so as well.

Applying: This position is located in Bengaluru, India. We only accept applications in English. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, or marital status. Make the right call, send us your application today!