Machine Learning Specialist - Deep Learning/NLP

Posted by Digital Aristotle
3 - 6 yrs
₹5L - ₹15L / yr
Bengaluru (Bangalore)
Skills
Deep Learning
Natural Language Processing (NLP)
Machine Learning (ML)
Python

JD : ML/NLP Tech Lead

- We are looking to hire an ML/NLP Tech Lead who can own products from a technology perspective and manage a team of up to 10 members. You will play a pivotal role in the re-engineering, transformation, and scaling of AssessEd.

WHAT ARE WE BUILDING :

- A revolutionary way of providing continuous assessment of a child's skills and learning, pointing the way to the child's future potential. This is in contrast to the traditional one-time, dipstick methodology of a test that hurriedly bundles the child into a slot and, in turn, declares the child fit for a career in a specific area or a particular set of courses that would perhaps get them somewhere. At the core of our system is a lot of data - both structured and unstructured.

 

- We have books, questions, web resources, and student reports that drive all our machine learning algorithms. Our goal is not only to figure out how a child is coping, but also to figure out how to help them, by presenting relevant information and questions on the topics they are struggling to learn.

Required Skill sets :

- Wisdom to know when to hustle and when to be calm and dig deep. A strong can-do mentality: you are joining us to build on a vision, not to do a job.

- A deep hunger to learn, understand, and apply your knowledge to create technology.

- Ability and experience tackling hard Natural Language Processing problems: separating the wheat from the chaff, and knowledge of the mathematical tools needed to succinctly describe ideas and implement them in code.

- Very good understanding of Natural Language Processing and Machine Learning, with projects to back it up.

- Strong fundamentals in Linear Algebra, Probability and Random Variables, and Algorithms.

- Strong systems experience with distributed-systems pipelines: Hadoop, Spark, etc.

- Good knowledge of at least one prototyping/scripting language: Python, MATLAB/Octave or R.

- Good understanding of Algorithms and Data Structures.

- Strong programming experience in C++/Java/Lisp/Haskell.

- Good written and verbal communication.
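A minimal, self-contained illustration of the NLP and math fundamentals listed above: TF-IDF weighting implemented from scratch in plain Python. The toy corpus and function name are invented for this example; a production system would use an optimized library.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each term in each tokenized document by term frequency
    times inverse document frequency (a toy sketch, not optimized)."""
    n = len(docs)
    df = Counter()                       # in how many docs each term appears
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({term: (count / len(doc)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return weights

docs = [["math", "algebra"], ["math", "topology"], ["poetry"]]
w = tf_idf(docs)  # rare terms like "poetry" get the highest weight
```

The sketch shows the shape of the work: turning a linguistic intuition ("rare terms are informative") into a small, testable mathematical formula.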

Desired Skill sets :

- Passion for well-engineered products: you are ticked off when something engineered is off, and you want to get your hands dirty and fix it.

- 3+ yrs of research experience in Machine Learning, Deep Learning and NLP

- Top-tier peer-reviewed research publications in areas like Algorithms, Computer Vision/Image Processing, Machine Learning, or Optimization (CVPR, ICCV, ICML, NIPS, EMNLP, ACL, SODA, FOCS, etc.)

- Open Source Contribution (include the link to your projects, GitHub etc.)

- Knowledge of functional programming.

- International level participation in ACM ICPC, IOI, TopCoder, etc

 

- International level participation in Physics or Math Olympiad

- Intellectual curiosity about advanced math topics like Theoretical Computer Science, Abstract Algebra, Topology, Differential Geometry, Category Theory, etc.

What can you expect :

- Opportunity to work on interesting and hard research problems, and to see state-of-the-art research put into practice.

- Opportunity to work on important problems with big social impact: Massive, and direct impact of the work you do on the lives of students.

- An intellectually invigorating, phenomenal work environment, with massive ownership and growth opportunities.

- Learn effective engineering habits required to build/deploy large production-ready ML applications.

- Ability to do quick iterations and deployments.

- We would be excited to see you publish papers (though certain restrictions do apply).

Website : http://Digitalaristotle.ai


Work Location: Bangalore


About Digital Aristotle

Founded :
2015
Type
Size
Stage :
Bootstrapped
About
N/A
Connect with the team
Digital Aristotle
Anushree Nandy
Company social profiles
N/A

Similar jobs

6sense
15 recruiters
Posted by Romesh Rawat
Remote only
5 - 8 yrs
₹30L - ₹45L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more

About Slintel (a 6sense company) :

Slintel, a 6sense company,  the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market & sales intelligence.

Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.

Slintel is a fast growing B2B SaaS company in the sales and marketing tech space. We are funded by top tier VCs, and going after a billion dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams, while reducing the number of hours spent on research and outreach.

We are a big data company and perform deep analysis on technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then combined with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.

6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.

6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.

Linkedin (Slintel) : https://www.linkedin.com/company/slintel/

Industry : Software Development

Company size : 51-200 employees (189 on LinkedIn)

Headquarters : Mountain View, California

Founded : 2016

Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.

Website (Slintel) : https://www.slintel.com/slintel

Linkedin (6sense) : https://www.linkedin.com/company/6sense/

Industry : Software Development

Company size : 501-1,000 employees (937 on LinkedIn)

Headquarters : San Francisco, California

Founded : 2013

Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales

Website (6sense) : https://6sense.com/

Acquisition News : 

https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/ 

Funding Details & News :

Slintel funding : https://www.crunchbase.com/organization/slintel

6sense funding : https://www.crunchbase.com/organization/6sense

https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round

https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round

https://xipometer.com/en/company/6sense

Slintel & 6sense Customers :

https://www.featuredcustomers.com/vendor/slintel/customers

https://www.featuredcustomers.com/vendor/6sense/customers

About the job

Responsibilities

  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elastic search, MongoDB, and AWS technology
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems
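The pipeline responsibilities above can be sketched as a tiny extract-transform-load step in plain Python. The record fields and the in-memory "warehouse" are illustrative assumptions, not the company's actual schema:

```python
def extract(source):
    """Pull raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Normalize field names and drop records missing a company id."""
    out = []
    for r in records:
        if r.get("company_id") is None:
            continue
        out.append({"company_id": r["company_id"],
                    "name": r.get("name", "").strip().lower()})
    return out

def load(records, warehouse):
    """Append cleaned records to the 'warehouse' (a dict keyed by id)."""
    for r in records:
        warehouse[r["company_id"]] = r
    return warehouse

raw = [{"company_id": 1, "name": " Acme "}, {"name": "orphan"}]
warehouse = load(transform(extract(raw)), {})
```

In a real deployment each stage would be a separate, monitored job (e.g. orchestrated by Airflow), but the extract/transform/load separation is the same.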

Requirements

  • 3+ years of experience in a Data Engineer role
  • Proficiency in Linux
  • Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena
  • Must have experience with Python/Scala
  • Must have experience with Big Data technologies like Apache Spark
  • Must have experience with Apache Airflow
  • Experience with data pipeline and ETL tools like AWS Glue
  • Experience working with AWS cloud services (EC2, S3, RDS, Redshift) and other data solutions, e.g. Databricks, Snowflake
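As a hedged sketch of the relational-database and query-authoring requirement above, here is a small aggregation using the stdlib sqlite3 driver; the table and column names are invented for the example:

```python
import sqlite3

# In-memory database standing in for a real warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (account TEXT, score INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("acme", 10), ("acme", 30), ("globex", 5)])

# Aggregate per account, highest total first.
rows = conn.execute(
    "SELECT account, SUM(score) AS total "
    "FROM events GROUP BY account ORDER BY total DESC"
).fetchall()
```

The same query shape carries over to MySQL, Athena, or Redshift; only the driver and connection change.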

 

Desired Skills and Experience

Python, SQL, Scala, Spark, ETL

 

Kloud9 Technologies
Prem Kumar
Posted by Prem Kumar
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹24L / yr
Machine Learning (ML)
Data Science
Python
Java
R Programming

About Kloud9:

 

Kloud9 exists with the sole purpose of providing cloud expertise to the retail industry. Our team of cloud architects, engineers and developers help retailers launch a successful cloud initiative so you can quickly realise the benefits of cloud technology. Our standardised, proven cloud adoption methodologies reduce the cloud adoption time and effort so you can directly benefit from lower migration costs.

 

Kloud9 was founded with the vision of bridging the gap between e-commerce and the cloud. E-commerce in any industry is limiting and poses a huge challenge in terms of the finances spent on physical data infrastructure.

 

At Kloud9, we know migrating to the cloud is the single most significant technology shift your company faces today. We are your trusted advisors in transformation and are determined to build a deep partnership along the way. Our cloud and retail experts will ease your transition to the cloud.

 

Our sole focus is to provide cloud expertise to the retail industry, giving our clients the empowerment that will take their business to the next level. Our team of proficient architects, engineers, and developers has been designing, building, and implementing solutions for retailers for an average of more than 20 years.

 

We are a cloud vendor that is both platform and technology independent. Our vendor independence not only provides us with a unique perspective on the cloud market but also ensures that we deliver the available cloud solutions that best meet our clients' requirements.

 

Responsibilities:

●       Studying, transforming, and converting data science prototypes

●       Deploying models to production

●       Training and retraining models as needed

●       Analyzing the ML algorithms that could be used to solve a given problem and ranking them by their respective scores

●       Analyzing the errors of the model and designing strategies to overcome them

●       Identifying differences in data distribution that could affect model performance in real-world situations

●       Performing statistical analysis and using results to improve models

●       Supervising the data acquisition process if more data is needed

●       Defining data augmentation pipelines

●       Defining the pre-processing or feature engineering to be done on a given dataset

●       Extending and enriching existing ML frameworks and libraries

●       Understanding when the findings can be applied to business decisions

●       Documenting machine learning processes
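One of the simplest pre-processing steps mentioned above, feature standardization, can be sketched in plain Python. This is a toy sketch; a real pipeline would fit the scaler on training data only and use a library such as scikit-learn:

```python
import statistics

def standardize(values):
    """Scale a numeric feature to zero mean and unit variance (z-scores)."""
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)   # population standard deviation
    return [(v - mu) / sigma for v in values]

z = standardize([2.0, 4.0, 6.0])        # symmetric around zero
```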

 

Basic requirements: 

 

●       4+ years of IT experience, including at least 2+ years of relevant experience, primarily in converting data science prototypes and deploying models to production

●       Proficiency with Python and machine learning libraries such as scikit-learn, matplotlib, seaborn and pandas

●       Knowledge of Big Data frameworks like Hadoop, Spark, Pig, Hive, Flume, etc

●       Experience in working with ML frameworks like TensorFlow, Keras, OpenCV

●       Strong written and verbal communication skills

●       Excellent interpersonal and collaboration skills.

●       Expertise in visualizing and manipulating big datasets

●       Familiarity with Linux

●       Ability to select hardware to run an ML model with the required latency

●       Robust data modelling and data architecture skills.

●       Advanced degree in Computer Science/Math/Statistics or a related discipline.

●       Advanced Math and Statistics skills (linear algebra, calculus, Bayesian statistics, mean, median, variance, etc.)
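As a worked instance of the Bayesian statistics listed above, here is Bayes' rule applied to a diagnostic-style question: how likely is a condition given a positive test? The numbers are illustrative only.

```python
# Prior, sensitivity, and false-positive rate (all assumed for the example).
p_d = 0.01            # prior: 1% prevalence
p_pos_d = 0.95        # P(positive | condition)
p_pos_not_d = 0.05    # P(positive | no condition)

# Total probability of a positive test.
p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)

# Bayes' rule: posterior probability of the condition given a positive test.
p_d_pos = p_pos_d * p_d / p_pos        # about 0.16, despite a 95% sensitive test
```

The counter-intuitive result (a positive test still means only ~16% probability) is exactly the kind of reasoning these roles lean on when interpreting model metrics on imbalanced data.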

 

Nice to have

●       Familiarity with writing Java and R code.

●       Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world

●       Verifying data quality, and/or ensuring it via data cleaning

●       Supervising the data acquisition process if more data is needed

●       Finding available datasets online that could be used for training

 

Why Explore a Career at Kloud9:

 

With job opportunities in prime locations in the US, London, Poland, and Bengaluru, we help build your career path in the cutting-edge technologies of AI, Machine Learning, and Data Science. Be part of an inclusive and diverse workforce that's changing the face of retail technology with its creativity and innovative solutions. Our vested interest in our employees translates into delivering the best products and solutions to our customers.

Dotball Interactive Private Limited
Posted by Veena K V
Bengaluru (Bangalore)
1 - 2 yrs
₹3L - ₹6L / yr
Python
Data Analytics
SQL server
- Excellent skill in SQL and SQL tools
- Fantasy gamer
- Excited to dive deep into data and derive insights
- Decent communication and writing skills
- Interested in automated solutions
- Startup and growth mindset
- Analytical, data-mining thinking
- Python - good programmer
Codejudge
2 recruiters
Posted by Vaishnavi M
Bengaluru (Bangalore)
3 - 7 yrs
₹20L - ₹25L / yr
SQL
Python
Data architecture
Data mining
Data Analytics
Job description
  • The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and using models to test the effectiveness of different courses of action.
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
  • Assess the effectiveness and accuracy of new data sources and data gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
  • Develop company A/B testing framework and test model quality.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.
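An A/B testing framework like the one described above ultimately reduces to a significance test. Below is a textbook two-proportion z-statistic in plain Python, a sketch rather than a full framework; the conversion numbers are invented for the example.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 10% vs 13% conversion over 1000 users each.
z = two_proportion_z(100, 1000, 130, 1000)          # |z| > 1.96 => significant at 5%
```

A production framework adds randomized assignment, logging, and guardrails around peeking, but this statistic is the core of the "test model quality" step.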

Roles & Responsibilities

  • Experience using statistical languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience working with and creating data architectures.
  • Looking for someone with 3-7 years of experience manipulating data sets and building statistical models
  • Has a Bachelor's, Master's in Computer Science or another quantitative field
  • Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc.
  • Experience querying databases and using statistical computer languages: R, Python, SQL, etc.
  • Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees,neural networks, etc.
  • Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
  • Experience visualizing/presenting data for stakeholders using: Periscope, Business Objects, D3, ggplot, etc.
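The regression techniques listed above can be illustrated with a minimal ordinary-least-squares fit of a line in plain Python; real work would use R or a Python statistics library, and the data points here are invented.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x,y divided by variance of x.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx                      # intercept from the means
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])   # data lies exactly on y = 1 + 2x
```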
Bengaluru (Bangalore)
1 - 4 yrs
₹10L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+7 more

Work closely with Product Managers to drive product improvements through data driven decisions.

Conduct analysis to determine new project pilot settings, new features, user behaviour, and in-app behaviour.

Present insights and recommendations to leadership using high quality visualizations and concise messaging.

Own the implementation of data collection and tracking, and co-ordinate with engineering and product team.

Create and maintain dashboards for product and business teams.

Requirements

1+ years' experience in analytics. Experience as a Product Analyst will be an added advantage.

Technical skills: SQL, Advanced Excel

Good to have: R/Python, Dashboarding experience

Ability to translate structured and unstructured problems into an analytical framework

Excellent analytical skills

Good communication & interpersonal skills

Ability to work in a fast-paced start-up environment, learn on the job and get things done.

Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹25L / yr
Relational Database (RDBMS)
PostgreSQL
MySQL
Python
Spark
+6 more

What is the role?

You will be responsible for developing and designing front-end web architecture, ensuring the responsiveness of applications, and working alongside graphic designers on web design features, among other duties. You will own the functional/technical track of the project.

Key Responsibilities

  • Develop and automate large-scale, high-performance data processing systems (batch and/or streaming).
  • Build high-quality software engineering practices towards building data infrastructure and pipelines at scale.
  • Lead data engineering projects to ensure pipelines are reliable, efficient, testable, & maintainable
  • Optimize performance to meet high throughput and scale
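A generator pipeline is one lightweight way to sketch the batch/streaming processing described above: each stage consumes records lazily, so memory stays bounded regardless of input size. The CSV-style records are invented for the example.

```python
def parse(lines):
    """Split raw comma-separated lines into fields, lazily."""
    for line in lines:
        yield line.strip().split(",")

def valid(rows):
    """Keep only well-formed (key, integer) rows; drop the rest."""
    for row in rows:
        if len(row) == 2 and row[1].isdigit():
            yield (row[0], int(row[1]))

stream = ["a,1", "bad-row", "b,2"]
out = list(valid(parse(stream)))         # malformed row is filtered out
```

Frameworks like Spark or Kafka Streams generalize exactly this stage-by-stage shape across machines, with the same backpressure-friendly, record-at-a-time semantics.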

What are we looking for?

  • 4+ years of relevant industry experience.
  • Working with data at the terabyte scale.
  • Experience designing, building and operating robust distributed systems.
  • Experience designing and deploying high throughput and low latency systems with reliable monitoring and logging practices.
  • Building and leading teams.
  • Working knowledge of relational databases like Postgresql/MySQL.
  • Experience with Python / Spark / Kafka / Celery
  • Experience working with OLTP and OLAP systems
  • Excellent communication skills, both written and verbal.
  • Experience working in cloud e.g., AWS, Azure or GCP

Whom will you work with?

You will work with a top-notch tech team, working closely with the architect and engineering head.

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle concepts while maintaining the quality of content, interact and share your ideas, and learn a great deal while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at this company.

We are

We strive to make selling fun with our SaaS incentive gamification product. Company is the #1 gamification software that automates and digitizes Sales Contests and Commission Programs. With game-like elements, rewards, recognitions, and complete access to relevant information, Company turbocharges an entire salesforce. Company also empowers Sales Managers with easy-to-publish game templates, leaderboards, and analytics to help accelerate performances and sustain growth.

We are a fun and high-energy team, with people from diverse backgrounds - united under the passion of getting things done. Rest assured that you shall get complete autonomy in your tasks and ample opportunities to develop your strengths.

Way forward

If you find this role exciting and want to join us in Bangalore, India, then apply by clicking below. Provide your details and upload your resume. All received resumes will be screened; shortlisted candidates will be invited for a discussion, and on mutual alignment and agreement we will proceed with hiring.

 
Marktine
1 recruiter
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹15L / yr
Big Data
ETL
PySpark
SSIS
Microsoft Windows Azure
+4 more

Must Have Skills:

- Solid knowledge of DWH, ETL, and Big Data concepts

- Excellent SQL skills (with knowledge of SQL analytics functions)

- Working experience with at least one ETL tool, e.g. SSIS / Informatica

- Working experience with Azure or AWS Big Data tools

- Experience implementing data jobs (batch / real-time streaming)

- Excellent written and verbal communication skills in English; self-motivated, with a strong sense of ownership and readiness to learn new tools and technologies
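The "SQL analytics functions" skill above refers to window functions such as RANK() OVER. Here is a hedged sketch using the stdlib sqlite3 driver (window functions require SQLite 3.25+, bundled with modern Python); the sales table is invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 50), ("east", 80), ("west", 20)])

# Rank sales within each region, largest first - a per-group computation
# that plain GROUP BY cannot express without collapsing the rows.
rows = conn.execute(
    "SELECT region, amount, "
    "RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rk "
    "FROM sales ORDER BY region, rk"
).fetchall()
```

The same OVER (PARTITION BY ... ORDER BY ...) syntax works in Spark SQL, Athena, and most warehouses.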

Preferred Skills:

- Experience with PySpark / Spark SQL

- AWS Data Tools (AWS Glue, AWS Athena)

- Azure Data Tools (Azure Databricks, Azure Data Factory)

Other Skills:

- Knowledge about Azure Blob, Azure File Storage, AWS S3, Elastic Search / Redis Search

- Knowledge on domain/function (across pricing, promotions and assortment).

- Implementation experience with schema and data validator frameworks (Python / Java / SQL)

- Knowledge on DQS and MDM.

Key Responsibilities:

- Independently work on ETL / DWH / Big data Projects

- Gather and process raw data at scale.

- Design and develop data applications using selected tools and frameworks as required and requested.

- Read, extract, transform, stage and load data to selected tools and frameworks as required and requested.

- Perform tasks such as writing scripts, web scraping, calling APIs, write SQL queries, etc.

- Work closely with the engineering team to integrate your work into our production systems.

- Process unstructured data into a form suitable for analysis.

- Analyse processed data.

- Support business decisions with ad hoc analysis as needed.

- Monitor data performance and modify infrastructure as needed.

Responsibility: a smart resource with excellent communication skills

 

 
Mobile Programming LLC
1 video
34 recruiters
Posted by Apurva kalsotra
Mohali, Gurugram, Pune, Bengaluru (Bangalore), Hyderabad, Chennai
3 - 8 yrs
₹2L - ₹9L / yr
Data engineering
Data engineer
Spark
Apache Spark
Apache Kafka
+13 more

Responsibilities for Data Engineer

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:

  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
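The message-queuing pattern listed above can be sketched with the stdlib queue module: a producer thread publishes messages and a consumer drains them. Real deployments would use Kafka or similar; this only shows the decoupling idea, and the "processing" step is a stand-in.

```python
import queue
import threading

q = queue.Queue()
results = []

def producer():
    """Publish messages, then a sentinel to signal end-of-stream."""
    for i in range(3):
        q.put(i)
    q.put(None)

def consumer():
    """Drain messages until the sentinel arrives."""
    while True:
        msg = q.get()
        if msg is None:
            break
        results.append(msg * 10)         # stand-in for real processing

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

The queue decouples the two sides: either can be slow or bursty without blocking the other beyond the buffer, which is the same property Kafka provides at datacenter scale.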
Aptus Data Labs
1 recruiter
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data engineering
Big Data
Hadoop
Data Engineer
Apache Kafka
+5 more

Roles & Responsibilities

  1. Proven experience deploying and tuning open-source components into enterprise-ready production tooling. Experience with datacentre (Metal as a Service - MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required)
  2. Deep understanding of Linux from kernel mechanisms through user space management
  3. Experience on CI/CD (Continuous Integrations and Deployment) system solutions (Jenkins).
  4. Use monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk, New Relic, etc. to trigger instant alerts, reports, and dashboards. Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
  5. Wide understanding of IP networking as well as data centre infrastructure

Skills

  1. Expert with software development tools and source-code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production stability. Must be an expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat, JMX Exporter agents.
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing
  4. Strong understanding of, and hands-on experience with:
  5. the Apache Spark framework, specifically Spark Core and Spark Streaming,
  6. orchestration platforms: Mesos and Kubernetes,
  7. data storage platforms: Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph, HDFS
  8. core presentation technologies: Kibana and Grafana.
  9. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products

Certification

- Red Hat Certified Architect certificate or equivalent required
- CCNA certificate required
- 3-5 years of experience running open-source big data platforms

LimeTray
5 recruiters
Posted by tanika monga
NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹18L / yr
Machine Learning (ML)
Python
Cassandra
MySQL
Apache Kafka
+2 more
Requirements:

- Minimum 4 years' work experience in building, managing and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Good analytical & problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience in building data pipelines & real-time analytics systems
- Experience in SQL (MySQL) & NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of Service-Oriented Architecture
- Delivered high-quality work with significant contributions
- Expert in git, unit tests, technical documentation and other development best practices
- Experience handling small teams