Data Science Engineer (SDE I)
Posted by Shobhit Agarwal


Locations

Bengaluru (Bangalore)

Experience

1 - 3 years

Salary

12 - 20 lpa

Skills

Data Structures
Algorithms
Scala
Machine Learning (ML)
Spark
Big Data
Hadoop
Python

Job description

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users. For this role, a credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and breathes data and algorithms, loves Big Data engineering, and is hands-on with Apache Spark, Kafka, RDBMS/NoSQL databases, Big Data analytics, and Unix and production servers. A Tier-1 college (BE from IITs, BITS-Pilani, top NITs, IIITs, or MS from Stanford, Berkeley, CMU, UW–Madison) or an exceptionally bright work history is a must. Let us know if you are interested in exploring this profile further.

About Couture.ai

We are building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time tailored experiences for their combined >200 million end users.

After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversions go up by as much as 3x.

Couture.ai was founded by global innovators and entrepreneurs with prior experience of building and self-funding global startup success stories. The core team includes some of the best minds in India in machine learning and deep learning research.

Founded

2017

Type

Product

Size

6-50 employees

Stage

Profitable

Similar jobs

Machine Learning Engineer

Founded 2014
Location
Bengaluru (Bangalore)
Experience
1 - 5 years
Salary
9 - 18 lacs/annum

Cartisan is a start-up building AI-based software products for automotive & EV companies to better engage with their customers and enhance the overall experience of owning/using a car. To build SaaS software supporting cloud, mobile, and in-car apps with machine learning and AI capabilities, we are hiring experienced Machine Learning Engineers at our Bangalore office.

What's on offer: the opportunity to lead development of a global tech product, competitive pay, and stock options.

Requirements:
1) Experience in deep learning using convolutional neural networks
2) Sound knowledge of object detection, semantic segmentation, and instance segmentation (Faster R-CNN, Single Shot MultiBox Detector (SSD), Mask R-CNN, MobileNet)
3) Classic image processing techniques using OpenCV
4) Proficiency in Python
5) Comfortable building ML models on various deep learning and machine learning libraries such as TensorFlow, Keras, scikit-learn, NumPy, etc.
6) Familiarity with Amazon Web Services (Elastic Compute Cloud (EC2), Amazon SageMaker)

Job posted by Sharath Murthy

Data Engineer

Founded 2010
Location
Pune
Experience
1 - 5 years
Salary
5 - 15 lacs/annum

Job Description
In this role you will help us build, improve, and maintain our huge data infrastructure, where we collect TBs of logs daily. Data-driven decisioning is crucial to the success of our customers, and this role is central to ensuring we have a cutting-edge data infrastructure to do things faster, better, and cheaper!

Experience: 1 - 3 years

Required Skills
- Must be a polyglot with good command over Java, Scala, and a scripting language
- Non-trivial project experience with distributed computing frameworks like Apache Spark/Hadoop/Pig/Kafka/Storm, with sound knowledge of their internals
- Expert knowledge of relational databases like MySQL, and in-memory data stores like Redis
- Regular participation in coding/hacking contests like TopCoder, Code Jam, and Hacker Cup is a huge plus

Prerequisites
- Strong analytical skills and a solid foundation in Computer Science fundamentals, especially data structures/algorithms, object-oriented principles, operating systems, and computer networks
- Ability and willingness to take ownership and work under minimum supervision, independently or as part of a team
- Passion for innovation and a "never say die" attitude
- Strong verbal and written communication skills

Education: B.Tech/M.Tech/MS/Dual degree in Computer Science with above-average academic credentials

Job posted by Sachin Bhatevara

Hadoop Lead Engineers

Founded 2012
Location
Bengaluru (Bangalore)
Experience
7 - 9 years
Salary
27 - 34 lacs/annum

Position Description
Assists in providing guidance to small groups of two to three engineers, including offshore associates, for assigned engineering projects.
Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans.
Generates weekly, monthly, and yearly reports using JIRA and open source tools, and provides updates to leadership teams.
Proactively identifies issues and the root causes of critical issues.
Works with cross-functional teams, sets up KT sessions, and mentors team members.
Coordinates with the Sunnyvale and Bentonville teams.
Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity.
Provides and supports the implementation of business solutions.
Provides support to the business; troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
BS/MS in Computer Science or a related field.
8+ years' experience building web applications.
Solid understanding of computer science principles.
Excellent soft skills.
Understanding of major algorithms like searching and sorting.
Strong skills in writing clean code using languages like Java and J2EE technologies.
Understanding of how to engineer RESTful APIs and microservices, and knowledge of major software patterns like MVC, Singleton, Facade, and Business Delegate.
Deep knowledge of web technologies such as HTML5, CSS, and JSON.
Good understanding of continuous integration tools and frameworks like Jenkins.
Experience working in Agile environments, like Scrum and Kanban.
Experience with performance tuning for very large-scale apps.
Experience writing scripts using Perl, Python, and shell scripting.
Experience writing jobs using open source cluster computing frameworks like Spark.
Relational database design experience: MySQL, Oracle, SOLR; NoSQL: Cassandra, MongoDB, and Hive.
Aptitude for writing clean, succinct, and efficient code.
Attitude to thrive in a fun, fast-paced, start-up-like environment.

Job posted by Sampreetha Pai

Hadoop Developer

Founded 2012
Location
Bengaluru (Bangalore)
Experience
4 - 7 years
Salary
23 - 30 lacs/annum

Position Description
Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans.
Models compliance with company policies and procedures and supports company mission, values, and standards of ethics and integrity.
Provides and supports the implementation of business solutions.
Provides support to the business; troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
BS/MS in Computer Science or a related field.
5+ years' experience building web applications.
Solid understanding of computer science principles.
Excellent soft skills.
Understanding of major algorithms like searching and sorting.
Strong skills in writing clean code using languages like Java and J2EE technologies.
Understanding of how to engineer RESTful APIs and microservices, and knowledge of major software patterns like MVC, Singleton, Facade, and Business Delegate.
Deep knowledge of web technologies such as HTML5, CSS, and JSON.
Good understanding of continuous integration tools and frameworks like Jenkins.
Experience working in Agile environments, like Scrum and Kanban.
Experience with performance tuning for very large-scale apps.
Experience writing scripts using Perl, Python, and shell scripting.
Experience writing jobs using open source cluster computing frameworks like Spark.
Relational database design experience: MySQL, Oracle, SOLR; NoSQL: Cassandra, MongoDB, and Hive.
Aptitude for writing clean, succinct, and efficient code.
Attitude to thrive in a fun, fast-paced, start-up-like environment.

Job posted by Sampreetha Pai

Sr. Big Data Engineer

Founded 2011
Location
Bengaluru (Bangalore)
Experience
3 - 7 years
Salary
8 - 18 lacs/annum

Roles and Responsibilities:
● Inclined towards working in a start-up-like environment.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Design and build robust, scalable data engineering solutions for structured and unstructured data, delivering business insights, reporting, and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations, and other integrations).

Skills and Requirements:
● Good communication and collaboration skills, with 3-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of building crawlers and data mining is a plus.
● Working knowledge of open source tools such as MySQL, Solr, ElasticSearch, and Cassandra (data stores) would be a plus.

Job posted by Sadananda Vaidya

Intern - Big Data Engineering

Founded 2011
Location
Bangalore
Experience
0 - 1 years
Salary
2 - 4 lacs/annum

About Us
DataWeave is a data platform which aggregates publicly available data from disparate sources and makes it available in the right format to enable companies to take strategic decisions using trans-firewall analytics.

It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! Read more on Become a DataWeaver.

Skills and Requirements:
● Good communication and collaboration skills.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfortable with at least one coding language; Python would be a plus.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets is a plus.
● Knowledge of building crawlers is a plus.
● Working knowledge of open source tools such as MySQL, Solr, ElasticSearch, and Cassandra would be a plus.

Growth at DataWeave
● Based on performance, permanent employment will be offered 3-6 months into the internship.
● You have the opportunity to work in many different areas and explore a wide variety of tools to figure out what really excites you.
● Competitive salary packages.

Job posted by Sadananda Vaidya

Lead Data Engineer (SDE III)

Founded 2017
Location
Bengaluru (Bangalore)
Experience
5 - 8 years
Salary
25 - 55 lacs/annum

Couture.ai is building a patent-pending AI platform targeted towards vertical-specific solutions. The platform is already licensed by Reliance Jio and a few European retailers to power real-time experiences for their combined >200 million end users.

For this role, a credible display of innovation in past projects is a must. We are looking for hands-on leaders in data engineering with 5-11 years of research/large-scale production implementation experience with:
- Proven expertise in Spark, Kafka, and the Hadoop ecosystem
- Rock-solid algorithmic capabilities
- Production deployments for massively large-scale systems, real-time personalization, big data analytics, and semantic search
- Expertise in containerization (Docker, Kubernetes) and cloud infrastructure, preferably OpenStack
- Experience with Spark ML, TensorFlow (& TF Serving), MXNet, Scala, Python, NoSQL DBs, Kubernetes, and ElasticSearch/Solr in production

A Tier-1 college (BE from IITs, BITS-Pilani, IIITs, top NITs, DTU, NSIT, or MS from Stanford, UC, MIT, CMU, UW–Madison, ETH, or other top global schools) or an exceptionally bright work history is a must. Let us know if you are interested in exploring this profile further.

Job posted by Shobhit Agarwal

Software Developer

via IQVIA
Founded 1969
Location
Bengaluru (Bangalore), Kochi (Cochin)
Experience
2 - 7 years
Salary
5 - 25 lacs/annum

Job Skill Requirements:
• 4+ years of experience building and managing complex products/solutions
• 2+ years of experience in DW/ELT/ETL technologies (nice to have)
• 3+ years of hands-on development experience using Big Data technologies like Hadoop and Spark
• 3+ years of hands-on development experience using Big Data ecosystem components like Hive, Impala, HBase, Sqoop, Oozie, etc.
• Proficient-level programming in Scala
• Good to have: hands-on experience building web services in a Python/Scala stack
• Good to have: experience developing RESTful web services
• Knowledge of web technologies and protocols (NoSQL/JSON/REST/JMS)

Job posted by Ambili Sasidharan

Data Engineer

Founded 2015
Location
Mumbai
Experience
1 - 5 years
Salary
7 - 12 lacs/annum

JOB DESCRIPTION:
We are looking for a Data Engineer with a solid background in scalable systems to work with our engineering team to improve and optimize our platform. You will have significant input into the team's architectural approach and execution. We are looking for a hands-on programmer who enjoys designing and optimizing data pipelines for large-scale data. This is NOT a "data scientist" role, so please don't apply if you're looking for that.

RESPONSIBILITIES:
1. Build, maintain, and test performant, scalable data pipelines
2. Work with data scientists and application developers to implement scalable pipelines for data ingest, processing, machine learning, and visualization
3. Build interfaces for ingest across various data stores

MUST-HAVE:
1. A track record of building and deploying data pipelines as part of work or side projects
2. Ability to work with RDBMS, MySQL or Postgres
3. Ability to deploy over cloud infrastructure, at least AWS
4. Demonstrated ability and hunger to learn

GOOD-TO-HAVE:
1. Computer Science degree
2. Expertise in at least one of: Python, Java, Scala
3. Expertise and experience in deploying solutions based on Spark and Kafka
4. Knowledge of container systems like Docker or Kubernetes
5. Experience with NoSQL / graph databases
6. Knowledge of Machine Learning

Kindly apply only if you are skilled in building data pipelines.

Job posted by Zeimona Dsouza

Data ETL Engineer

Founded 2013
Location
Chennai
Experience
1 - 3 years
Salary
5 - 12 lacs/annum

Responsibilities:
Design and develop an ETL framework and data pipelines in Python 3.
Orchestrate complex data flows from various data sources (like RDBMS, REST APIs, etc.) to the data warehouse and vice versa.
Develop app modules (in Django) for enhanced ETL monitoring.
Devise technical strategies for making data seamlessly available to the BI and Data Science teams.
Collaborate with engineering, marketing, sales, and finance teams across the organization and help Chargebee develop complete data solutions.
Serve as a subject-matter expert for available data elements and analytic capabilities.

Qualifications:
Expert programming skills with the ability to write clean and well-designed code.
Expertise in Python, with knowledge of at least one Python web framework.
Strong SQL knowledge and high proficiency in writing advanced SQL.
Hands-on experience in modeling relational databases.
Experience integrating with third-party platforms is an added advantage.
Genuine curiosity, proven problem-solving ability, and a passion for programming and data.

Job posted by Vinothini Sundaram