The job you are looking for has expired or has been deleted. Check out similar jobs below.

Similar jobs

Hadoop Engineers

Founded 2012
Products and services
Location: Bengaluru (Bangalore)
Experience: 4 - 7 years
Salary: 24 - 30 lacs/annum

Position Description
- Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans
- Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity
- Provides and supports the implementation of business solutions
- Provides support to the business
- Troubleshoots business and production issues and provides on-call support

Minimum Qualifications
- BS/MS in Computer Science or a related field
- 5+ years' experience building web applications
- Solid understanding of computer science principles and excellent soft skills
- Understanding of major algorithms such as searching and sorting
- Strong skills in writing clean code using Java and J2EE technologies
- Understanding of how to engineer RESTful services and microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, and JSON
- Good understanding of continuous integration tools and frameworks such as Jenkins
- Experience working in Agile environments such as Scrum and Kanban
- Experience with performance tuning for very large-scale apps
- Experience writing scripts in Perl, Python, and shell
- Experience writing jobs using open-source cluster computing frameworks such as Spark
- Database design experience: relational (MySQL, Oracle), Solr, and NoSQL (Cassandra, MongoDB, Hive)
- Aptitude for writing clean, succinct, and efficient code
- Attitude to thrive in a fun, fast-paced, start-up-like environment
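To give a sense of the Spark requirement above, here is a minimal sketch of a batch job using PySpark; the input path and column name are assumptions for illustration, not part of the posting:

```python
from pyspark.sql import SparkSession

# Start a Spark session for a simple batch job.
spark = SparkSession.builder.appName("event-counts").getOrCreate()

# Read newline-delimited JSON events; the path and the "event_type"
# column are illustrative assumptions.
events = spark.read.json("hdfs:///data/events.json")

# Aggregate and rank event types by frequency.
counts = events.groupBy("event_type").count().orderBy("count", ascending=False)
counts.show(20)

spark.stop()
```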

Job posted by Sampreetha Pai

Data Engineer

Founded 2015
Products and services
Location: NCR (Delhi | Gurgaon | Noida), Mumbai
Experience: 2 - 7 years
Salary: 10 - 30 lacs/annum

About the job:
- You will work with data scientists to architect, code and deploy ML models
- You will solve problems of storing and analyzing large-scale data in milliseconds
- You will architect and develop data processing and warehouse systems
- You will code, drink, breathe and live Python, sklearn and pandas. It's good to have experience in these, but it's not a necessity, as long as you're super comfortable in a language of your choice
- You will develop tools and products that give analysts ready access to the data

About you:
- Strong CS fundamentals
- You have strong experience working with production environments
- You write code that is clean, readable and tested
- Instead of doing it a second time, you automate it
- You have worked with some of the commonly used databases and computing frameworks (Psql, S3, Hadoop, Hive, Presto, Spark, etc.)
- It will be great if you have a Kaggle or a GitHub profile to share
- You are an expert in one or more programming languages (Python preferred). Experience with Python-based application development and data science libraries is also good to have
- Ideally, you have 2+ years of experience in tech and/or data
- Degree in CS/Maths from a Tier-1 institute
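As a rough illustration of the pandas/sklearn workflow mentioned above, a minimal train-and-evaluate sketch might look like the following; the CSV name, column names and model choice are assumptions:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Load a labelled dataset; file and column names are illustrative.
df = pd.read_csv("training_data.csv")
X, y = df.drop(columns=["label"]), df["label"]

# Hold out a test split, fit a baseline classifier, report AUC.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```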

Job posted by Pragya Singh

Data Scientist

Founded 2012
Products and services
Location: Chennai, Hyderabad
Experience: 3 - 7 years
Salary: 15 - 45 lacs/annum

Software Engineer - ML at Indix provides an opportunity to design and build systems that crunch large amounts of data every day.

What We're Looking For
- 3+ years of experience
- Ability to propose hypotheses and design experiments in the context of specific problems; should come from a strong engineering background
- Good overlap with the Indix data tech stack, such as Hadoop, MapReduce, HDFS, Spark, Scalding, Scala/Python/C++
- Dedication and diligence in understanding the application domain, collecting/cleaning data and conducting experiments
- Creativity in model and algorithm development
- An obsession to develop algorithms/models that directly impact business
- Master's/PhD in Computer Science/Statistics is a plus

Job Expectations
- Experience working in text mining and Python libraries like scikit-learn, numpy, etc.
- Collect relevant data from production systems; use crawling and parsing infrastructure to put together data sets
- Survey academic literature and identify potential approaches for exploration
- Craft, conduct and analyze experiments to evaluate models/algorithms
- Communicate findings and take algorithms/models to production with end-to-end ownership
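The text-mining expectation above typically reduces to pipelines like the following scikit-learn sketch; the toy documents and labels are made up purely for illustration:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny, made-up training set: snippets labelled 1 (positive) or 0 (negative).
texts = [
    "great battery life and fast shipping",
    "screen cracked on arrival, very poor packaging",
    "excellent build quality, works as described",
    "stopped working after two days",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("lr", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

print(clf.predict(["battery died after a week"]))
```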

Job posted by Sri Devi

Data Scientist/ ML engineer

Founded 2012
Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: 10 - 25 lacs/annum

HackerEarth provides enterprise software solutions that help organisations with their innovation management and talent assessment needs. HackerEarth Recruit is a talent assessment platform that enables efficient technical talent screening, allowing organisations to build strong, proficient teams. HackerEarth Sprint is innovation management software that helps organisations drive innovation through internal and external talent pools, including HackerEarth's global community of 2M+ developers. Today, HackerEarth serves 750+ organisations, including leading Fortune 500 companies from around the world. General Electric, IBM, Amazon, Apple, Wipro, Walmart Labs and Bosch are some of the brands that trust HackerEarth to help them drive growth.

Job Description
We are looking for an ML Engineer who will help us discover the information hidden in vast amounts of data and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality models integrated with our products. To start with, you will be working primarily on recommendation engines, text classification, automated tagging of documents, lexical similarity, semantic similarity and similar problems.

Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company's data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Developing custom data models and algorithms to apply to data sets
- Assessing the effectiveness and accuracy of new data sources and data gathering techniques
- Coordinating with different functional teams to implement models and monitor outcomes
- Developing processes and tools to monitor and analyze model performance and data accuracy

Skills and Qualifications
- 4+ years of experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Proficiency in query languages such as SQL, Hive, Pig
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, etc.
- Experience using web services: Redshift, S3, etc.
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modelling, clustering, decision trees, neural networks, etc.
- Knowledge and experience in statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
- Experience working with and creating data architectures
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Excellent written and verbal communication skills for coordinating across teams
- A drive to learn and master new technologies and techniques

You should be creative, enthusiastic, and take pride in the work that you produce. Above all, you should love to build and ship solutions that real people will use every day.
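One of the responsibilities listed above, automated anomaly detection, can be prototyped in a few lines of scikit-learn; the synthetic data below is purely illustrative and not tied to any HackerEarth system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic data: 500 "normal" points plus 10 scattered outliers.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))
X = np.vstack([normal, outliers])

# Fit an Isolation Forest; predict() returns -1 for anomalies, 1 for inliers.
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)
print(int((flags == -1).sum()), "points flagged as anomalous")
```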

Job posted by Partha Dewri

Cassandra Admin for IoT Platform

Founded 1989
Products and services
Location: Bengaluru (Bangalore)
Experience: 2 - 9 years
Salary: 12 - 25 lacs/annum

As a Cassandra Administrator, you'll be responsible for the administration and governance of a complex analytics platform that is already changing the way large industrial companies manage their assets. A Cassandra Administrator understands cutting-edge tools and frameworks and is able to determine the best tool for any given task. You will enable and work with our other developers to use cutting-edge technologies in fields such as distributed systems, data ingestion and mapping, and machine learning. We also strongly encourage everyone to tinker with existing tools, and to stay up to date and test new technologies, all with the aim of ensuring that our existing systems don't stagnate or deteriorate.

Responsibilities - as a Cassandra Engineer, your responsibilities may include, but are not limited to, the following:
- Build a scalable Cassandra platform designed to serve many different use cases and requirements
- Build a highly scalable framework for ingesting, transforming and enhancing data at web scale
- Establish automated build and deployment pipelines
- Implement machine learning models that enable customers to glean hidden insights about their data
- Implement security and integrate with components such as LDAP, AD, Sentry and Kerberos
- Apply a strong understanding of row-level and role-based security concepts such as inheritance
- Establish scalability benchmarks for predictable scalability thresholds

Qualifications:
- Bachelor's degree in Computer Science or a related field
- Experience with NoSQL data stores: Cassandra, HDFS and/or Elasticsearch
- 4+ years of system-building experience
- 2+ years of programming experience using Cassandra
- A passion for DevOps and an appreciation for continuous integration/deployment
- A passion for QA and an understanding that testing is not someone else's responsibility
- Experience automating infrastructure and build processes
- Outstanding programming and problem-solving skills
- Strong passion for technology and building great systems
- Excellent communication skills and ability to work using Agile methodologies
- Ability to work quickly and collaboratively in a fast-paced, entrepreneurial environment
- Experience with service-oriented (SOA) and event-driven (EDA) architectures
- Experience using big data solutions in an AWS environment
- Experience with JavaScript or associated

If you think you would be a good fit for this role and are interested in joining the best engineering team in IoT, please provide your resume.
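To make the data-modelling side of the role concrete, here is a minimal sketch using the DataStax Python driver for Cassandra; the keyspace, table, and replication settings are assumptions for illustration, not the company's actual schema:

```python
from cassandra.cluster import Cluster

# Connect to a cluster; contact points are illustrative.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Illustrative keyspace for time-series IoT readings.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS iot
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")

# Partition by device, cluster by time so recent readings are read fastest.
session.execute("""
    CREATE TABLE IF NOT EXISTS iot.sensor_readings (
        device_id    text,
        reading_time timestamp,
        value        double,
        PRIMARY KEY (device_id, reading_time)
    ) WITH CLUSTERING ORDER BY (reading_time DESC)
""")

cluster.shutdown()
```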

Job posted by Arjun Ravindran

Data Scientist

Founded 2015
Products and services
Location: Hyderabad
Experience: 6 - 10 years
Salary: 10 - 15 lacs/annum

It is one of the largest communication technology companies in the world, operating America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.

Job posted by Sangita Deka

Data Scientist

Founded 2013
Products and services
Location: Mumbai
Experience: 3 - 7 years
Salary: 5 - 15 lacs/annum

Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
- Data mining using methods like associations, correlations, inferences, clustering, graph analysis, etc.
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extend the company's data with third-party sources
- Enhance data collection procedures
- Process, clean and verify the data collected
- Perform ad-hoc analysis of the data and present clear results
- Create advanced analytics products that provide actionable insights

The Individual - we are looking for a candidate with the following skills, experience and attributes:
- 2+ years of work experience in machine learning
- Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
- Knowledge of machine learning techniques and algorithms
- Knowledge of languages and toolkits like Python, R, NumPy
- Knowledge of data visualization tools like D3.js, ggplot2
- Knowledge of query languages like SQL, Hive, Pig
- Familiarity with Big Data architecture and tools like Hadoop, Spark, MapReduce
- Familiarity with NoSQL databases like MongoDB, Cassandra, HBase
- Good applied statistics skills, like distributions, statistical testing, regression, etc.

Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with a startup and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
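As a toy illustration of the clustering work described above, a user-segmentation sketch with scikit-learn could look like the following; the file name and feature columns are assumptions, not m.Paani's actual data:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Illustrative user-activity table; file and column names are assumed.
users = pd.read_csv("users.csv")
features = ["orders_per_month", "avg_basket_value", "days_since_last_order"]

# Standardise features so no single scale dominates the distance metric.
X = StandardScaler().fit_transform(users[features])

# Cluster users into four segments and inspect segment averages.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
users["segment"] = kmeans.labels_
print(users.groupby("segment")[features].mean())
```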

Job posted by Julie K

Freelance Faculty

Founded 2009
Products and services
Location: Anywhere, United States, Canada
Experience: 3 - 10 years
Salary: 2 - 10 lacs/annum

To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.

Onboarding process:
• Send your updated CV to my email id, along with copies of the relevant certificates.
• Sample e-learning access will be shared with a 15-day trial after you register on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephone conversation of 15 to 20 minutes.
• Commercial discussion.
• We will register you to an ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment process:
• Once a workshop, or the last day of training for a batch, is completed, you have to share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy; the 15 days count from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.

Job posted by STEVEN JOHN
Why apply via CutShort?
Connect with actual hiring teams and get fast responses. No third-party recruiters. No spam.