Senior Software Engineer
LimeTray
Posted by tanika monga
4 - 6 yrs
₹15L - ₹18L / yr
Delhi, Gurugram, Noida
Skills
Machine Learning (ML)
Python
Cassandra
MySQL
Apache Kafka
RabbitMQ
Java
Requirements:
  • Minimum 4 years' work experience in building, managing, and maintaining analytics applications
  • B.Tech/BE in CS/IT from Tier 1/2 institutes
  • Strong fundamentals of data structures and algorithms
  • Good analytical and problem-solving skills
  • Strong hands-on experience in Python
  • In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
  • Experience in building data pipelines and real-time analytics systems
  • Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
  • Understanding of Service-Oriented Architecture
  • Delivered high-quality work with significant contributions
  • Expert in Git, unit tests, technical documentation, and other development best practices
  • Experience in handling small teams

About LimeTray

Founded: 2013
Stage: Profitable
About
Our story starts with the realization that restaurants making good food, 6 out of 10 times, drop their shutters in front of our very eyes. The thought of letting good food go to waste was heartbreaking. So we at LimeTray decided to help restaurants by designing and implementing products that bridge the disconnect between restaurants and consumers. In a world that demands convenience, we help restaurants get online and provide them tools to engage with customers, increase their reach, and improve their operational efficiency. In 3 years, we've become one of the very few global product companies from India, with 3500+ satisfied restaurants on board. LimeTray's founding team consists of ISB and NSIT alumni with strong domain knowledge, having previously built one of the largest online food ordering portals. LimeTray counts very successful internet entrepreneurs as its angel investors and advisors. We are also backed by Matrix Partners, one of the top 5 VC firms in India.
Connect with the team
Sooraj E
tanika monga
Tripta sharma
Bhavya Jain
Akshita Jain

Similar jobs

Bengaluru (Bangalore)
2 - 6 yrs
₹25L - ₹45L / yr
Data engineering
Data Analytics
Big Data
Apache Spark
airflow
+8 more
● 2+ years of experience in a Data Engineer role.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
Anicaa Data
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr
TensorFlow
PyTorch
Machine Learning (ML)
Data Science
data scientist
+11 more

Job Title – Data Scientist (Forecasting)

Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The focus of the role will center around applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.

 

Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)

 

Required Skills:

  • At least 3+ years of experience in a Data Scientist role
  • Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline. A Ph.D. will add merit to the application process
  • Experience with large data sets, big data, and analytics
  • Exposure to statistical modeling, forecasting, and machine learning. Deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, time series forecasting
  • Training Machine Learning (ML) algorithms in areas of forecasting and prediction
  • Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
  • Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
  • Experience in translating business needs into problem statements, prototypes, and minimum viable products
  • Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
  • Write C++ and Python code along with TensorFlow, PyTorch to build and enhance the platform that is used for training ML models

Preferred Experience

  • Worked on forecasting projects – both classical and ML models
  • Experience with training time series forecasting methods like Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as Neural Network (NN) models such as feed-forward NN and nonlinear autoregressive networks (see the sketch after this list)
  • Strong background in forecasting accuracy drivers
  • Experience in Advanced Analytics techniques such as regression, classification, and clustering
  • Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
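To make the classical forecasting requirement above concrete, here is a minimal sketch of fitting an ARIMA model with statsmodels. The synthetic series, the (1, 1, 1) order, and the 12-step horizon are illustrative assumptions, not details from the posting.

```python
# Minimal ARIMA sketch (illustrative assumptions: synthetic data, order=(1, 1, 1), 12-step horizon).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = 50 + np.cumsum(rng.normal(size=120))   # toy non-stationary series

model = ARIMA(series, order=(1, 1, 1)).fit()    # AR(1), first differencing, MA(1)
forecast = model.forecast(steps=12)             # 12-step-ahead point forecast
print(forecast)
```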
India Bison
Vanita Deokar
Posted by Vanita Deokar
Remote only
5 - 8 yrs
₹5L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Responsibilities

  • Selecting features, building and optimizing classifiers using machine learning techniques
  • Data mining using state-of-the-art methods
  • Extending the company's data with third-party sources of information when needed
  • Enhancing data collection procedures to include information that is relevant for building analytic systems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Doing ad-hoc analysis and presenting results in a clear manner
  • Creating automated anomaly detection systems and constantly tracking their performance

Key Skills

  • Hands-on experience with analysis tools like R and advanced Python
  • Must-have knowledge of statistical techniques and machine learning algorithms
  • Artificial Intelligence
  • Understanding of text analysis / Natural Language Processing (NLP)
  • Knowledge of Google Cloud Platform
  • Advanced Excel and PowerPoint skills
  • Advanced communication (written and oral) and strong interpersonal skills
  • Ability to work cross-culturally
  • Good to have: Deep Learning
  • VBA and visualization tools like Tableau, PowerBI, Qliksense, and Qlikview will be an added advantage
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design and develop high-performance, scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects, and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for (Competencies):

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java, Python including object-oriented design.

Data handling frameworks: Should have working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

A collaborative, lively, and upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Quicken Inc
at Quicken Inc
2 recruiters
Shreelakshmi M
Posted by Shreelakshmi M
Bengaluru (Bangalore)
5 - 8 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
Python
ETL QA
+1 more
  • Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
  • Total experience of 5+ years with at least 2 years in managing data quality for high scale data platforms.
  • Good knowledge of SQL querying.
  • Strong skill in analysing data and uncovering patterns using SQL or Python.
  • Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation, and data loading (the ETL process).
  • Strong background in automation and building automated testing frameworks for data ingestion and transformation jobs.
  • Experience in big data technologies a big plus.
  • Experience in machine learning, especially in data quality applications a big plus.
  • Experience in building data quality automation frameworks a big plus.
  • Strong experience working with an Agile development team with rapid iterations. 
  • Very strong verbal and written communication, and presentation skills.
  • Ability to quickly understand business rules.
  • Ability to work well with others in a geographically distributed team.
  • Keen observation skills to analyse data, highly detail oriented.
  • Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
  • Able to identify stakeholders, build relationships, and influence others to get work done.
  • Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
Simplilearn Solutions
at Simplilearn Solutions
1 video
36 recruiters
STEVEN JOHN
Posted by STEVEN JOHN
Anywhere, United States, Canada
3 - 10 yrs
₹2L - ₹10L / yr
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
+2 more
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About my company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and is a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences, and have a passion to transform careers, please join hands with us.

Onboarding process:
  • Send your updated CV to my email ID, with the relevant certificate copy.
  • Sample e-learning access will be shared with a 15-day trial after your registration on our website.
  • My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation (15 to 20 minutes).
  • Commercial discussion.
  • We will register you to our ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
  • A demo will be conducted to check your training style and internet connectivity.
  • Freelancer Master Service Agreement.

Payment process:
  • Once a workshop or the last day of training for the batch is completed, share your invoice.
  • An automated tracking ID will be shared from our automated ticketing system.
  • Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
  • Payment will be processed within 15 working days from the date the invoice is received, as per policy.

Please share your updated CV to move to the next step of the onboarding process.
Graphene Services Pte Ltd
Swetha Seshadri
Posted by Swetha Seshadri
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Python
MySQL
SQL
NOSQL Databases
PowerBI
+2 more

About Graphene  

Graphene is a Singapore-headquartered AI company which has been recognized as Singapore's Best Start-Up by Switzerland's Seedstars World, and has also been awarded best AI platform for healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform which is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods, and financial services.

  

Graphene was founded by Corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & Universities in creating cutting edge technology which is gaining traction with many Fortune 500 companies in India, Asia and USA.  

Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.  

  

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups which is self-funded and is yet profitable and debt free. We have already created a strong bench strength of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.   

  

Job title: Data Analyst

Job Description  

The Data Analyst is responsible for storage, data enrichment, data transformation, data gathering based on data requests, and testing and maintaining data pipelines.

Responsibilities and Duties  

  • Managing end to end data pipeline from data source to visualization layer 
  • Ensure data integrity; Ability to pre-empt data errors 
  • Organized management and storage of data
  • Provide quality assurance of data, working with quality assurance analysts if necessary. 
  • Commissioning and decommissioning of data sets. 
  • Processing confidential data and information according to guidelines. 
  • Helping develop reports and analysis. 
  • Troubleshooting the reporting database environment and reports. 
  • Managing and designing the reporting environment, including data sources, security, and metadata. 
  • Supporting the data warehouse in identifying and revising reporting requirements. 
  • Supporting initiatives for data integrity and normalization. 
  • Evaluating changes and updates to source production systems. 
  • Training end-users on new reports and dashboards. 
  • Initiate data gathering based on data requirements 
  • Analyse the raw data to check if the requirement is satisfied 

 

Qualifications and Skills   

  

  • Technologies required: Python, SQL/NoSQL databases (CosmosDB)
  • Experience required: 2-5 years, including experience in data analysis using Python
  • Understanding of the software development life cycle
  • Plan, coordinate, develop, test, and support data pipelines; document and support reporting dashboards (PowerBI)
  • Automate the steps needed to transform and enrich data
  • Communicate issues, risks, and concerns proactively to management; document the process thoroughly to allow peers to assist with support as needed
  • Excellent verbal and written communication skills
MNC
at MNC
Agency job
via Fragma Data Systems by geeti gaurav mohanty
Bengaluru (Bangalore)
5 - 9 yrs
₹14L - ₹20L / yr
Data Science
Natural Language Processing (NLP)
R Programming
Python
Text mining
The Sr. NLP Text Mining Scientist / Engineer will have the opportunity to lead a team and shape team culture and operating norms as a result of the fast-paced nature of a new, high-growth organization.

• 7+ years of industry experience primarily related to unstructured text data and NLP (PhD work and internships will be considered if they are related to unstructured text, in lieu of industry experience, but not more than 2 years will be counted towards industry experience)
• Develop Natural Language medical/healthcare document comprehension products to support Health business objectives and products and improve processing efficiency, reducing overall healthcare costs
• Gather external data sets; build synthetic data and label data sets as per the needs for NLP/NLR/NLU
• Apply expert software engineering skills to build Natural Language products that improve automation and user experiences, leveraging unstructured data storage, entity recognition, POS tagging, ontologies, taxonomies, data mining, information retrieval techniques, machine learning approaches, and distributed and cloud computing platforms
• Own the Natural Language and Text Mining products, from platforms and systems for model training, versioning, deploying, storage, and testing models with real-time feedback loops, to fully automated services
• Work closely and collaborate with Data Scientists, Machine Learning Engineers, IT teams, and business stakeholders spread across various locations in the US and India to achieve business goals
• Provide mentoring to other Data Scientists and Machine Learning Engineers
• Strong understanding of mathematical concepts including but not limited to linear algebra, advanced calculus, partial differential equations, and statistics, including Bayesian approaches
• Strong programming experience including understanding of concepts in data structures, algorithms, compression techniques, high-performance computing, distributed computing, and various computer architectures
• Good understanding of and experience with traditional data science approaches such as sampling techniques, feature engineering, classification and regression, SVMs, trees, and model evaluation
• Additional coursework, projects, research participation and/or publications in natural language processing, reasoning and understanding, information retrieval, text mining, search, computational linguistics, ontologies, and semantics
• Experience with developing and deploying products in production, with experience in two or more of the following languages (Python, C++, Java, Scala)
• Strong Unix/Linux background and 2+ years of experience with at least one of the cloud vendors AWS, Azure, or Google
• Hands-on experience with one or more high-performance and distributed computing technologies like Spark, Dask, Hadoop, or CUDA distributed GPU (2+ years)
• Thorough understanding of deep learning architectures and hands-on experience with one or more frameworks like TensorFlow, PyTorch, or Keras (2+ years)
• Hands-on experience with libraries and tools like spaCy, NLTK, Stanford CoreNLP, Gensim, and John Snow Labs for 5+ years
• Understand business use cases and be able to translate them to the team with a vision of how to implement them
• Identify enhancements and build best practices that can help improve the productivity of the team
Reval Analytics
at Reval Analytics
2 recruiters
Jyoti Nair
Posted by Jyoti Nair
Pune
3 - 6 yrs
₹5L - ₹9L / yr
Python
Django
Big Data

Position Name: Software Developer

Required Experience: 3+ Years

Number of positions: 4

Qualifications: Master's or Bachelor's degree in Engineering, Computer Science, or equivalent (BE/BTech or MS in Computer Science).

Key Skills: Python, Django, Nginx, Linux, Sanic, Pandas, NumPy, Snowflake, SciPy, Data Visualization, Redshift, Big Data, Charting

Compensation - As per industry standards.

Joining - Immediate joining is preferable.

 

Required Skills:

 

  • Strong experience in Python and web frameworks like Django, Tornado, and/or Flask
  • Experience in data analytics using standard Python libraries such as Pandas, NumPy, and Matplotlib
  • Conversant in implementing charts using charting libraries like Highcharts, d3.js, c3.js, dc.js, and data visualization tools like Plotly and ggplot
  • Handling and using large databases and data warehouse technologies like MongoDB, MySQL, Big Data, Snowflake, Redshift
  • Experience in building APIs and multi-threading for tasks on the Linux platform
  • Exposure to finance and capital markets will be an added advantage
  • Strong understanding of software design principles, algorithms, data structures, design patterns, and multithreading concepts
  • Worked on building highly available distributed systems on cloud infrastructure, or exposure to the architectural patterns of large, high-scale web applications
  • Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3

 

Company Description:

Reval Analytical Services is a fully-owned subsidiary of Virtua Research Inc. US. It is a financial services technology company focused on consensus analytics, peer analytics and Web-enabled information delivery. The Company’s unique combination of investment research experience, modeling expertise, and software development capabilities enables it to provide industry-leading financial research tools and services for investors, analysts, and corporate management.

 

Website: http://www.virtuaresearch.com

PayU
at PayU
1 video
6 recruiters
Deeksha Srivastava
Posted by Deeksha Srivastava
Gurgaon, NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹7L - ₹15L / yr
Python
R Programming
Data Analytics
R

What you will be doing:

As a part of the Global Credit Risk and Data Analytics team, this person will be responsible for carrying out the following analytical initiatives:

  • Dive into the data and identify patterns
  • Development of end-to-end Credit models and credit policy for our existing credit products
  • Leverage alternate data to develop best-in-class underwriting models
  • Working on Big Data to develop risk analytical solutions
  • Development of Fraud models and fraud rule engine
  • Collaborate with various stakeholders (e.g. tech, product) to understand and design best solutions which can be implemented
  • Working on cutting-edge techniques e.g. machine learning and deep learning models

Example of projects done in past:

  • Lazypay Credit Risk model using the CatBoost modelling technique; end-to-end pipeline for feature engineering and model deployment in production using Python (see the sketch after this list)
  • Fraud model development, deployment and rules for EMEA region
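
To make the CatBoost item above concrete, here is a minimal sketch of training a gradient-boosted credit-default classifier. The feature names, toy data, and hyperparameters are illustrative assumptions and do not reflect the actual Lazypay model or pipeline.

```python
# Minimal CatBoost sketch (hypothetical features and toy data; not the actual Lazypay model).
import pandas as pd
from catboost import CatBoostClassifier

df = pd.DataFrame({
    "monthly_income": [42000, 18000, 65000, 23000, 51000, 12000],
    "city_tier": ["1", "2", "1", "3", "2", "3"],   # categorical feature
    "past_defaults": [0, 1, 0, 2, 0, 1],
    "defaulted": [0, 1, 0, 1, 0, 1],               # target label
})

X, y = df.drop(columns="defaulted"), df["defaulted"]
model = CatBoostClassifier(iterations=200, depth=4, learning_rate=0.1,
                           cat_features=["city_tier"], verbose=False)
model.fit(X, y)
print(model.predict_proba(X)[:, 1])  # estimated probability of default
```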

 

Basic Requirements:

  • 1-3 years of work experience as a Data Scientist (in the credit domain)
  • 2016 or 2017 batch from a premium college (e.g., B.Tech from IITs/NITs, Economics from DSE/ISI, etc.)
  • Strong problem-solving skills and the ability to understand and execute complex analyses
  • Experience in at least one of the languages R/Python/SAS, plus SQL
  • Experience in the credit industry (fintech/bank)
  • Familiarity with the best practices of Data Science

 

Add-on Skills : 

  • Experience in working with big data
  • Solid coding practices
  • Passion for building new tools/algorithms
  • Experience in developing Machine Learning models