
Location: Remote, Bengaluru (Bangalore)
Experience: 3 - 10 years
Salary: ₹5L - ₹24L per year

Skills

Data Science
R Programming
Python
Deep Learning
Neural networks
OpenCV
Machine Learning (ML)
Image Processing

Job description

About Accolite Software

Founded: 2007
Type: Products & Services
Size: 250+ employees

Why apply to jobs via CutShort

- No long forms. Personalized job matches: stop wasting time and get matched with jobs that meet your skills, aspirations and preferences.
- Discover employers in your network. Verified hiring teams: see actual hiring teams, find common social connections or connect with them directly. No 3rd-party agencies here.
- Make your network count. Move faster with AI: we use AI to get you faster responses, recommendations and an unmatched user experience.

2,101,133 matches delivered · 3,712,187 network size · 6,212 companies hiring

Similar jobs

Data Engineer

Founded 2020 · Products and services
Location: Remote only
Experience: 0 - 3 years
Salary: ₹5L - ₹18L per year

Who are we?

We are incubators of high-quality, dedicated software engineering teams for our clients. We work with product organizations to help them scale or modernize their legacy technology solutions, and with startups to help them operationalize their ideas efficiently. Incubyte strives to find people who are passionate about coding, learning, and growing along with us. We work with a limited number of clients at a time on dedicated, long-term commitments, with the aim of bringing a product mindset into services.

What we are looking for

We're looking to hire software craftspeople: people who are proud of the way they work and the code they write, who believe in and evangelize extreme programming principles, and who are high-quality, motivated, and passionate team members. We strongly believe in being a DevOps organization, where developers own the entire release cycle and so work not only with programming languages but also with infrastructure technologies in the cloud.

What you'll be doing

First, you will be writing tests. You'll be writing self-explanatory, clean code that produces the same, predictable results over and over again. You'll be making frequent, small releases, working in pairs, and doing peer code reviews. You will work in a product team, building products and rapidly rolling out new features and fixes. You will be responsible for all aspects of development, from understanding requirements, writing stories, and analyzing the technical approach to writing test cases, development, deployment, and fixes. You will own the entire stack, from the front end to the back end to the infrastructure and DevOps pipelines. And, most importantly, you'll be making a pledge that you'll never stop learning!

Skills you need in order to succeed in this role

Most important: integrity of character, diligence, and the commitment to do your best.

Technologies:
- Azure Data Factory
- MongoDB
- SSIS / Apache NiFi (good to have)
- Python / Java
- SOAP / REST web services
- Stored procedures
- SQL
- Test-driven development

Experience with:
- Data warehousing and data lake initiatives on the Azure cloud
- Cloud DevOps solutions and cloud data and application migration
- Database concepts and optimization of complex queries
- Database versioning, backups, restores, migration, and automation of the same
- Data security and integrity
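The test-first workflow described above can be sketched with a small, hypothetical example using pytest; the slugify helper is made up for illustration and is not part of this role.

# test_slugify.py - in test-first style the assertions below are written before
# the implementation; both live in one file here so the sketch runs as-is.
import re

def slugify(text: str) -> str:
    # Simplest implementation that makes the tests below pass (the "green" step).
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Data, Engineer!") == "data-engineer"

# Run with: pytest test_slugify.py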

Job posted by Lifi Lawrance

Machine Learning Engineer

Founded 2015 · Products and services
via Qrata
Location: Remote only
Experience: 4 - 8 years
Salary: ₹14L - ₹22L per year

- 6+ years of applied machine learning experience with a focus on natural language processing. Some of our current projects require knowledge of natural language generation.
- 3+ years of software engineering experience.
- Advanced knowledge of Python, with 2+ years in a production environment.
- Experience with practical applications of deep learning.
- Experience with agile, test-driven development, continuous integration, and automated testing.
- Experience with productionizing machine learning models and integrating them into web services.
- Experience with the full software development life cycle, including requirements collection, design, implementation, testing, and operational support.
- Excellent verbal and written communication, teamwork, decision-making, and influencing skills.
- Hustle: thrives in an evolving, fast-paced, ambiguous work environment.
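One common shape for the "productionizing machine learning models and integrating them into web services" requirement is wrapping inference in a small HTTP service. A minimal sketch with FastAPI and scikit-learn; the inline toy model and the /predict schema are illustrative assumptions, not this company's system.

# Minimal model-serving sketch; a real service would load a trained artifact.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in sentiment model trained on toy data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(["great product", "awful service", "love it", "terrible"], [1, 0, 1, 0])

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

@app.post("/predict")
def predict(req: PredictRequest):
    label = int(model.predict([req.text])[0])
    return {"label": label}

# Run with: uvicorn service:app --port 8000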

Job posted by Blessy Fernandes

Data Engineer

Founded 2014 · Products and services
Location: Remote, Bengaluru (Bangalore)
Experience: 3 - 7 years
Salary: ₹5L - ₹10L per year

Basic Qualifications:
- Working knowledge of AWS Redshift.
- Minimum 1 year of designing and implementing a fully operational, production-grade, large-scale data solution on Snowflake Data Warehouse.
- 3 years of hands-on experience building productized data ingestion and processing pipelines using Spark, Scala, and Python.
- 2 years of hands-on experience designing and implementing production-grade data warehousing solutions.
- Expertise in and excellent understanding of Snowflake internals and of integrating Snowflake with other data processing and reporting technologies.
- Excellent presentation and communication skills, both written and verbal.
- Ability to problem-solve and architect in an environment with unclear requirements.
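A minimal sketch of the kind of Spark-based ingestion pipeline this role describes, assuming PySpark; the bucket paths and column names are placeholders, and Snowflake-specific connectors are deliberately omitted.

# Illustrative PySpark ingestion step: read raw CSV, clean it, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)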

Job posted by Vishal Sharma

Principal Architect

Location: Bengaluru (Bangalore)
Experience: 10 - 16 years
Salary: ₹50L - ₹60L per year

12+ years of industry experience, with a minimum of 5-6 years of relevant experience.

Responsibilities:
- Development of an ASR engine using frameworks such as DeepSpeech, Kaldi, wav2letter, PyTorch-Kaldi, or CMU Sphinx.
- Help define the technology required for speech-to-text services beyond the core engine, and design the integration of these technologies.
- Work on improving model accuracy and guide the team with best practices.
- Lead a team of 3-5 members.

Desired experience:
- Good understanding of machine learning (ML) tools.
- Well versed in classical speech processing methodologies such as hidden Markov models (HMMs), Gaussian mixture models (GMMs), artificial neural networks (ANNs), language modeling, etc.
- Hands-on experience with current deep learning (DL) techniques used for speech processing, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), and connectionist temporal classification (CTC), is essential.
- Hands-on experience with open-source tools such as Kaldi and PyTorch-Kaldi; familiarity with any of the end-to-end ASR tools such as ESPnet, EESEN, or Deep Speech PyTorch is desirable.
- Good understanding of WFSTs as implemented in OpenFst and Kaldi, with the ability to modify the WFST decoder as per application requirements.
- Experience in techniques for resolving issues related to accuracy, noise, confidence scoring, etc.
- Ability to implement recipes using scripting languages like Bash and Perl.
- Ability to develop applications using Python, C++, and Java.
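The CTC decoding mentioned above can be illustrated with a minimal greedy decoder: take the per-frame argmax, collapse repeated labels, then drop blanks. This is a simplified sketch with a made-up alphabet and blank index, not the WFST-based decoding Kaldi actually uses.

# Minimal greedy CTC decoder (alphabet and blank index are illustrative assumptions).
import numpy as np

ALPHABET = ["<blank>", "a", "b", "c"]
BLANK = 0

def greedy_ctc_decode(log_probs: np.ndarray) -> str:
    """log_probs: (time_steps, num_labels) array of per-frame label scores."""
    best_path = log_probs.argmax(axis=1)
    decoded, prev = [], None
    for label in best_path:
        if label != prev and label != BLANK:
            decoded.append(ALPHABET[label])
        prev = label
    return "".join(decoded)

# Example: frames predicting "a a <blank> b b" collapse to "ab".
frames = np.log(np.array([
    [0.10, 0.80, 0.05, 0.05],
    [0.10, 0.80, 0.05, 0.05],
    [0.90, 0.03, 0.03, 0.04],
    [0.10, 0.05, 0.80, 0.05],
    [0.10, 0.05, 0.80, 0.05],
]))
print(greedy_ctc_decode(frames))  # -> "ab"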

Job posted by Ankita Ghosh

ML Engineer
at Qrata

Founded 2015 · Products and services
via Qrata
Location: Mumbai
Experience: 4 - 8 years
Salary: ₹20L - ₹30L per year

We are building a global content marketplace that brings companies and content creators together to scale up content creation processes across 50+ content verticals and 150+ industries. Over the past 2.5 years, we've worked with companies like India Today, Amazon India, Adobe, Swiggy, Dunzo, Businessworld, Paisabazaar, IndiGo Airlines, Apollo Hospitals, Infoedge, Times Group, Digit, BookMyShow, UpGrad, Yulu, YourStory, and 350+ other brands. Our mission is to become the world's largest content creation and distribution platform for all kinds of content creators and brands.

Our Team

We are a 25+ member company and are scaling up rapidly in both team size and ambition. If we were to define the kind of people and culture we have, it would be: a) individuals with an extreme sense of passion about work, b) individuals with strong customer and creator obsession, and c) individuals with extraordinary hustle, perseverance, and ambition. We are on the lookout for individuals who are always open to going the extra mile and thrive in a fast-paced environment. We are strong believers in building a great, enduring company that can outlast its builders and create a massive impact on the lives of our employees, creators, and customers alike.

Our Investors

We are fortunate to be backed by some of the industry's most prolific angel investors: Kunal Bahl and Rohit Bansal (Snapdeal founders); YourStory Media (Shradha Sharma); Dr. Saurabh Srivastava, co-founder of IAN and NASSCOM; SlideShare co-founder Amit Ranjan; Indifi co-founder and CEO Alok Mittal; Sidharth Rao, chairman of Dentsu Webchutney; Ritesh Malik, co-founder and CEO of Innov8; Sanjay Tripathy, former CMO of HDFC Life and CEO of Agilio Labs; Manan Maheshwari, co-founder of WYSH; and Hemanshu Jain, co-founder of Diabeto. Backed by Lightspeed Venture Partners.

Job Responsibilities:
- Design, develop, test, deploy, maintain, and improve ML models.
- Implement novel learning algorithms and recommendation engines.
- Apply data science concepts to solve routine problems of target users.
- Translate business analysis needs into well-defined machine learning problems, and select appropriate models and algorithms.
- Create the architecture for, implement, maintain, and monitor data source pipelines that can be used across different types of data sources.
- Monitor the performance of the architecture and conduct optimization.
- Produce clean, efficient code based on specifications.
- Verify and deploy programs and systems.
- Troubleshoot, debug, and upgrade existing applications.
- Guide junior engineers toward productive contribution to development.

The ideal candidate (ML and NLP engineer) must have:
- 4 or more years of experience in ML engineering.
- Proven experience in NLP.
- Familiarity with generative language models such as GPT-3.
- Ability to write robust code in Python.
- Familiarity with ML frameworks and libraries.
- Hands-on experience with AWS services like SageMaker and Personalize.
- Exposure to state-of-the-art techniques in ML and NLP.
- Understanding of data structures, data modeling, and software architecture.
- Outstanding analytical and problem-solving skills.
- A team player, with the ability to work cooperatively with other engineers.
- Ability to make quick decisions in high-pressure environments with limited information.
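The "recommendation engines" responsibility can be illustrated with a tiny content-based recommender: vectorize items with TF-IDF and rank them by cosine similarity to a query. The catalogue items below are made up for the sketch; this is not the company's actual system.

# Minimal content-based recommendation sketch with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    "long-form blog post about fintech lending",
    "short social media copy for a food delivery app",
    "whitepaper on cloud data warehousing",
    "video script for an airline loyalty program",
]

vectorizer = TfidfVectorizer(stop_words="english")
item_vectors = vectorizer.fit_transform(catalogue)

def recommend(query: str, top_k: int = 2):
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, item_vectors)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [catalogue[i] for i in ranked]

print(recommend("blog post about banking and loans"))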

Job posted by Mrunal Kokate

Data Scientist

Founded 1996 · Products and services
Location: Coimbatore
Experience: 1 - 7 years
Salary: ₹3L - ₹4.5L per year

We are looking for a data scientist who will help us discover the information hidden in vast amounts of data and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.

Responsibilities:
- Selecting features, and building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company's data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance

Skills and Qualifications:
- Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc.
- Experience with common data science toolkits, such as R, Weka, NumPy, MATLAB, etc.; excellence in at least one of these is highly desirable
- Great communication skills
- Experience with data visualisation tools, such as D3.js, ggplot, etc.
- Proficiency in query languages such as SQL, Hive, and Pig
- Experience with NoSQL databases, such as MongoDB, Cassandra, and HBase
- Good applied statistics skills, such as distributions, statistical testing, and regression
- Good scripting and programming skills
- A data-oriented personality
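A minimal sketch of the "building and optimizing classifiers" responsibility: compare a few of the listed algorithms with cross-validation using scikit-learn. The toy dataset and scoring setup are illustrative assumptions.

# Compare k-NN, Naive Bayes, and SVM on a built-in dataset with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

models = {
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Naive Bayes": GaussianNB(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")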

Job posted by Jeeva AR

ETL Engineer - Data Pipeline

Founded 2018 · Products and services
Location: Chandigarh, NCR (Delhi | Gurgaon | Noida)
Experience: 2 - 6 years
Salary: ₹7L - ₹15L per year

Job Responsibilities:
- Develop new data pipelines and ETL jobs for processing millions of records, built to scale with growth. Pipelines should be optimised to handle real-time data, batch updates, and historical data.
- Establish scalable, efficient, automated processes for complex, large-scale data analysis.
- Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL, and store the results in a data warehouse.
- Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
- Participate in monitoring data pipeline health and in performance optimisation, as well as quality documentation.
- Interact with end users/clients and translate business language into technical requirements.
- Act independently to expose and resolve problems.

Job Requirements:
- 2+ years of experience in software development and data pipeline development for enterprise analytics.
- 2+ years of working with Python, with exposure to various warehousing tools.
- In-depth experience with any of the commercial tools such as AWS Glue, Talend, Informatica, DataStage, etc.
- Experience with various relational databases such as MySQL, MS SQL Server, Oracle, etc. is a must.
- Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
- Experience with various DevOps practices, helping the client deploy and scale systems as per requirements.
- Strong verbal and written communication skills with other developers and business clients.
- Knowledge of the logistics and/or transportation domain is a plus.
- Hands-on experience with traditional databases and ERP systems such as Sybase and PeopleSoft.
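A minimal extract-transform-load sketch for the pipeline work described above. The inline sample batch and the SQLite target are stand-ins for illustration; a real pipeline would pull from source systems, load a proper warehouse, and add monitoring and retries.

# Tiny ETL sketch in Python with pandas and SQLite.
import sqlite3
import pandas as pd

# Extract: a small batch standing in for rows pulled from a source system.
raw = pd.DataFrame({
    "shipment_id": [101, 101, 102, 103],
    "shipped_at": ["2021-05-01 10:00", "2021-05-01 10:00", "2021-05-02 09:30", None],
    "amount": [250.0, 250.0, 120.5, 80.0],
})

# Transform: parse timestamps, drop bad rows and duplicates, derive a date column.
raw["shipped_at"] = pd.to_datetime(raw["shipped_at"], errors="coerce")
clean = raw.dropna(subset=["shipped_at"]).drop_duplicates("shipment_id").copy()
clean["ship_date"] = clean["shipped_at"].dt.date.astype(str)

# Load: append into a warehouse table (SQLite stands in for the warehouse here).
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("shipments", conn, if_exists="append", index=False)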

Job posted by PS Dhillon

Machine Learning Engineer

Founded 2012 · Products and services
Location: Mumbai
Experience: 1 - 5 years
Salary: ₹4L - ₹15L per year

1. The candidate should be passionate about machine learning and deep learning.
2. Should understand the importance, and the know-how, of taking a machine-learning-based solution to the consumer.
3. Hands-on experience with statistical and machine-learning tools and techniques.
4. Good exposure to deep learning libraries like TensorFlow and PyTorch.
5. Experience in implementing deep learning techniques, computer vision, and NLP. The candidate should be able to develop the solution from scratch, with GitHub code to show.
6. Should be able to read research papers and pick up ideas to quickly reproduce the research in the most comfortable deep learning library.
7. Should be strong in data structures and algorithms, and able to do code complexity analysis/optimization for smooth delivery to production.
8. Expert-level coding experience in Python.
9. Technologies: backend - Python (programming language).
10. Should have the ability to think about long-term solutions, modularity, and reusability of components.
11. Should be able to work in a collaborative way, be open to learning from peers, and constantly bring new ideas to the table.
12. Self-driven. Open to peer criticism and feedback, and able to take it positively. Ready to be held accountable for the responsibilities undertaken.
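A minimal PyTorch training-loop sketch for the deep learning experience described above; the toy model, random data, and hyperparameters are illustrative assumptions only.

# Train a tiny feed-forward network on synthetic binary-classification data.
import torch
from torch import nn

X = torch.randn(256, 20)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")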

Job posted by Anwar Shaikh

Data Engineer

Founded 2017 · Products and services
Location: Bengaluru (Bangalore)
Experience: 3 - 9 years
Salary: ₹6L - ₹18L per year

Pluto7 is a services and solutions company focused on building ML, AI, and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, serving the retail, manufacturing, healthcare, and hi-tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed, and processed, to enable data-driven, insightful decisions.

Must-have skills:
- Hands-on experience with database systems (structured and unstructured).
- Programming in Python, R, or SAS.
- Overall knowledge of, and exposure to, architecting solutions on cloud platforms such as GCP, AWS, and Microsoft Azure.
- Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code.
- Hands-on experience in data model design and in developing BigQuery/SQL (any variant) stored procedures.
- Optimize data structures for efficient querying of those systems.
- Collaborate with internal and external data sources to ensure integrations are accurate, scalable, and maintainable.
- Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning, and database design.
- Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities.
- At least 2 years of experience building applications, solutions, and products based on analytics.
- Data extraction, data cleansing, and transformation.
- Strong knowledge of REST APIs, HTTP servers, and MVC architecture.
- Knowledge of continuous integration/continuous deployment.

Preferred but not required:
- Machine learning and deep learning experience.
- Certification on any cloud platform.
- Experience with data migration from on-premises to cloud environments.
- Exceptional analytical, quantitative, problem-solving, and critical thinking skills.
- Excellent verbal and written communication skills.

Work location: Bangalore
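A minimal sketch of querying BigQuery from Python, as mentioned in the must-have skills; it assumes Google Cloud credentials are already configured, and the project, dataset, and table names are placeholders.

# Run an aggregate query against BigQuery with the official client library.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT order_date, SUM(amount) AS daily_revenue
    FROM `example_project.example_dataset.orders`
    GROUP BY order_date
    ORDER BY order_date
"""

for row in client.query(query).result():
    print(row.order_date, row.daily_revenue)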

Job posted by Sindhu Narayan

Bigdata
at OpexAI

Founded 2017 · Products and services
via OpexAI
Location: Hyderabad
Experience: 0 - 1 years
Salary: ₹1L per year

Looking for candidates with Big Data, business intelligence, Python, and R skills.

Job posted by Jasmine Shaik
Didn't find the job you were looking for?
Search for relevant jobs from 10,000+ companies such as Google, Amazon & Uber actively hiring on CutShort.
Want to apply for this role at Accolite Software?
Hiring team responds within a day.
Apply for this job.
Why apply via CutShort?
Connect with actual hiring teams and get their fast response. No spam.