Full Stack Developer
at IceCream Labs
Posted by Madhu Konety
4 - 10 yrs
₹6L - ₹12L / yr
Bengaluru (Bangalore)
Skills
Node.js
REST API
MongoDB
AngularJS (1.x)
Sails.js
HTML/CSS
Software design
Software development
We are looking for a full stack developer. You will build micro-services with REST APIs and convert UX designs into high-quality UIs. You must have developed large, complex systems using Node.js, Angular, MongoDB, and REST APIs; know Linux, Git, and software design; and be a good communicator who works well across a distributed team.
About IceCream Labs

Founded: 2015

Connect with the team: Poornima BM, Madhu Konety

Similar jobs

Gipfel & Schnell Consultings Pvt Ltd
Remote only
4 - 6 yrs
₹8L - ₹15L / yr
HTML/CSS
JavaScript
XML
Web Scraping
Data extraction
+1 more
  • Overall 4-5 years of experience
  • At least 3 years of HTML/CSS development experience is required
  • Must have solid working experience with HTML5, CSS3, and XML
  • Should be an expert in XPath and regex expressions for complex website navigation
  • Working knowledge of core JavaScript
  • Experience mentoring junior team members is desirable
  • Must have good communication skills
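To illustrate the XPath-plus-regex extraction these requirements describe, here is a minimal sketch using only Python's standard library; the markup, class names, and price format are invented for the example:

```python
import re
import xml.etree.ElementTree as ET

# Invented sample markup standing in for a scraped product page.
html = """
<div>
  <span class="title">Acme Widget</span>
  <span class="price">Rs. 1,299.00</span>
</div>
"""

root = ET.fromstring(html)

# ElementTree supports the small XPath subset used here.
title = root.find(".//span[@class='title']").text
price_text = root.find(".//span[@class='price']").text

# A regex pulls the numeric part out of the formatted price string.
price = float(re.search(r"\d[\d,]*\.?\d*", price_text)
              .group().replace(",", ""))

print(title, price)
```

Real scraping pipelines typically swap `xml.etree` for an HTML-tolerant parser, but the XPath and regex patterns carry over unchanged.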
Acuity Knowledge Partners
Posted by Gangadhar S
Bengaluru (Bangalore)
4 - 9 yrs
₹16L - ₹40L / yr
Python
Amazon Web Services (AWS)
CI/CD
MongoDB
MLOps
+1 more

Job Responsibilities:

1. Develop and debug applications using Python.

2. Improve code quality and code coverage for existing and new programs.

3. Deploy and integrate machine learning models.

4. Test and validate the deployments.

5. Own the MLOps function.


Technical Skills

1. Graduate in Engineering or Technology with strong academic credentials

2. 4 to 8 years of experience as a Python developer

3. Excellent understanding of SDLC processes

4. Strong knowledge of unit testing and code quality improvement

5. Cloud-based deployment and integration of applications/micro-services

6. Experience with NoSQL databases such as MongoDB and Cassandra

7. Strong applied statistics skills

8. Knowledge of creating CI/CD pipelines and touchless deployment

9. Knowledge of APIs and data engineering techniques

10. Experience with AWS

11. Knowledge of machine learning and large language models
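As a minimal sketch of the unit-testing and code-quality discipline the skills above call for, here is an invented function with tests written against Python's built-in unittest module:

```python
import unittest

def coverage_ratio(covered_lines: int, total_lines: int) -> float:
    """Return code coverage as a fraction in [0, 1]."""
    if total_lines <= 0:
        raise ValueError("total_lines must be positive")
    return covered_lines / total_lines

class CoverageRatioTest(unittest.TestCase):
    def test_full_coverage(self):
        self.assertEqual(coverage_ratio(100, 100), 1.0)

    def test_partial_coverage(self):
        self.assertAlmostEqual(coverage_ratio(40, 80), 0.5)

    def test_rejects_zero_total(self):
        with self.assertRaises(ValueError):
            coverage_ratio(0, 0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(CoverageRatioTest)
    unittest.TextTestRunner().run(suite)
```

The same tests plug directly into a CI/CD pipeline step, which is where the coverage and touchless-deployment requirements meet.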


Nice to Have

1. Exposure to financial research domain

2. Experience with JIRA, Confluence

3. Understanding of scrum and Agile methodologies

4. Experience with data visualization tools such as Grafana, ggplot, etc.

India's best Short Video App
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
+26 more
What Makes You a Great Fit for The Role?

You’re awesome at, and will be responsible for:
 
Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js/Express.js, or Python
3-4 years of experience with big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka Streams, Hive, Druid, Presto, Elasticsearch, Airflow, etc.
3-4 years of experience building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), and reactive programming
3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, and ElastiCache (Redis + Memcached)
Experience designing and building high-scale app backends and micro-services leveraging cloud-native services on AWS such as proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring, and telemetry
Strong understanding of distributed systems fundamentals around scalability, elasticity, availability, and fault tolerance
Experience analysing and improving the efficiency, scalability, and stability of distributed systems and backend micro-services
5-7 years of strong design/development experience building massively large-scale, high-throughput, low-latency distributed internet systems and products
Good experience working with Hadoop and big data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, Zookeeper, and NoSQL systems
Agile methodologies, sprint management, roadmaps, mentoring, documentation, and software architecture
Liaison with Product Management, DevOps, QA, clients, and other teams
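The sliding-window aggregation that streaming engines like Spark Streaming or Flink perform can be sketched framework-free; the event shape and window size below are invented:

```python
from collections import deque

def sliding_window_counts(events, window_seconds=60):
    """Count events per key over a sliding time window.

    `events` is an iterable of (timestamp, key) pairs in ascending
    timestamp order; yields (timestamp, {key: count}) after each event.
    """
    window = deque()   # (timestamp, key) pairs currently inside the window
    counts = {}        # live per-key counts
    for ts, key in events:
        window.append((ts, key))
        counts[key] = counts.get(key, 0) + 1
        # Evict events that have aged out of the window.
        while window and window[0][0] <= ts - window_seconds:
            _, old_key = window.popleft()
            counts[old_key] -= 1
            if counts[old_key] == 0:
                del counts[old_key]
        yield ts, dict(counts)

events = [(0, "like"), (10, "view"), (30, "like"), (70, "view")]
snapshots = list(sliding_window_counts(events, window_seconds=60))
print(snapshots[-1])  # events at t=0 and t=10 have been evicted by t=70
```

Production engines add partitioning, watermarks, and fault tolerance on top of this same core idea.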
 
Your Experience Across The Years in the Roles You’ve Played
 
Have 5-7 or more years of total experience, with 2-3 years in a startup.
Have a B.Tech or M.Tech or equivalent academic qualification from a premier institute.
Experience in product companies working on internet-scale applications is preferred.
Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
Follow the Cloud Native Computing Foundation ecosystem, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes.
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design and develop high-performance, scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects, and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the big data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for (Competencies):

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java or Python, including object-oriented design.

Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data Infrastructure: Should have experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP etc.

Data Store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.
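Orchestration tools like Airflow, mentioned in the competencies above, schedule pipeline tasks by resolving a dependency DAG. A small illustration with invented task names (Airflow does this ordering internally):

```python
from graphlib import TopologicalSorter

# Invented pipeline tasks mapped to the upstream tasks they depend on.
deps = {
    "load_warehouse": {"clean_events", "clean_users"},
    "clean_events": {"extract_events"},
    "clean_users": {"extract_users"},
    "build_report": {"load_warehouse"},
}

# static_order() yields tasks so every task follows its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)

# Sanity check: no task runs before any of its upstream dependencies.
position = {task: i for i, task in enumerate(order)}
assert all(position[up] < position[task]
           for task, ups in deps.items()
           for up in ups)
```

Declaring the pipeline as data like this is what lets a scheduler retry, parallelise, and backfill individual tasks.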

Benefits:

Competitive Salary Packages and benefits.

A collaborative, lively, and upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

Play Games24x7
Agency job
via zyoin by Deepana Shahabadi
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹15L - ₹30L / yr
Python
DBA
MongoDB
MySQL
Cassandra
+1 more
Games24x7 was one of the first entrants in the gaming industry in 2006, when India started showing the first signs of promise for online gaming. We turned profitable by 2010 in just four years and grew 200x in the next decade. We are a technology powered analytics and data science company that happens to love games!
In 2018-19, the mobile games market in India generated over $600 million in revenues. With close to 450 people in its Mumbai and Bangalore offices, Games24x7 is India’s largest mobile games business today and is very well positioned to become the 800-pound gorilla of what will be a $2 billion market by 2022. While Games24x7 continues to invest aggressively in its India centric mobile games, it is also diversifying its business by investing in international gaming and other tech opportunities.

Summary of Role
Position/Role Description :
The candidate will be part of a team managing databases (MySQL, MongoDB, Cassandra) and will be involved in designing, configuring and maintaining databases.
Job Responsibilities:
• Complete involvement in the database requirement starting from the design phase for every project.
• Deploying required database assets on production (DDL, DML)
• Good understanding of MySQL Replication (Master-slave, Master-Master, GTID-based)
• Understanding of MySQL partitioning.
• A good understanding of MySQL logs and configuration.
• Scheduling backups and restorations.
• Good understanding of MySQL versions and their features.
• Good understanding of InnoDB-Engine.
• Exploring ways to optimize the current environment and also lay a good platform for new projects.
• Able to understand and resolve any database-related production outages.
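A minimal sketch of the kind of Python scripting this role pairs with MySQL administration: checking replication health from `SHOW SLAVE STATUS`-style output. The status text below is a fabricated sample; a real check would query the server through a MySQL client library rather than parse a string:

```python
import re

# Fabricated sample in the shape of `SHOW SLAVE STATUS\G` output.
status_text = """
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 42
"""

def parse_slave_status(text):
    """Parse `Key: Value` lines into a dict."""
    fields = {}
    for match in re.finditer(r"^\s*(\w+):\s*(.+?)\s*$", text, re.MULTILINE):
        fields[match.group(1)] = match.group(2)
    return fields

status = parse_slave_status(status_text)

# Replication is healthy if both threads run and lag is under 5 minutes.
healthy = (status["Slave_IO_Running"] == "Yes"
           and status["Slave_SQL_Running"] == "Yes"
           and int(status["Seconds_Behind_Master"]) < 300)
print("replication healthy:", healthy)
```

The 300-second threshold is an arbitrary example; real alerting thresholds depend on the workload.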

Job Requirements:
• BE/B.Tech from a reputed institute
• Experience in python scripting.
• Experience in shell scripting.
• General understanding of system hardware.
• Experience in MySQL is a must.
• Experience in MongoDB, Cassandra, and graph databases will be preferred.
• Experience with Percona MySQL tools.
• 6 - 8 years of experience.

Job Location: Bengaluru
Easebuzz
Posted by Amala Baby
Pune
2 - 4 yrs
₹2L - ₹20L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+12 more

Company Profile:

 

Easebuzz is a payment solutions company (a fintech organisation) that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including the payment infrastructure, to solve complete business problems. It is a wonderful place where all the action related to payments, lending, subscriptions, and eKYC happens at the same time.

 

We have been consistently profitable and are constantly developing innovative new products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a $4M fundraise in March 2021 from prominent VC firms and angel investors. The company is based in Pune and has a total strength of 180 employees. Easebuzz’s corporate culture is tied to the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.

 

Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.

 


Salary: As per company standards.

 

Designation: Data Engineering

 

Location: Pune

 

Experience with ETL, Data Modeling, and Data Architecture

Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services (Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue) in combination with third-party tools

Experience with AWS cloud data lake for development of real-time or near real-time use cases

Experience with messaging systems such as Kafka/Kinesis for real time data ingestion and processing

Build data pipeline frameworks to automate high-volume and real-time data delivery

Create prototypes and proof-of-concepts for iterative development.

Experience with NoSQL databases, such as DynamoDB, MongoDB etc

Create and maintain optimal data pipeline architecture.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.


Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Evangelize a very high standard of quality, reliability and performance for data models and algorithms that can be streamlined into the engineering and sciences workflow

Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
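The extract-transform-load pattern running through the points above can be sketched framework-free; the record shape is invented, and a production pipeline would use Spark, Glue, or similar instead of in-memory stand-ins:

```python
import csv
import io
import sqlite3

# Extract: invented raw payment events, as they might arrive in a CSV drop.
raw = io.StringIO(
    "txn_id,amount,currency\n"
    "t1,100.0,INR\n"
    "t2,-5.0,INR\n"    # bad record: negative amount
    "t3,250.5,INR\n"
)

# Transform: validate and normalise rows, dropping bad records.
rows = [(r["txn_id"], float(r["amount"]))
        for r in csv.DictReader(raw)
        if float(r["amount"]) > 0]

# Load: an in-memory SQLite table stands in for the warehouse.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (txn_id TEXT, amount REAL)")
db.executemany("INSERT INTO payments VALUES (?, ?)", rows)
total = db.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)
```

Each stage maps onto a pipeline task, which is what makes the flow automatable and monitorable end to end.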

 

Employment Type

Full-time

 

YourHRfolks
Posted by Bharat Saxena
Remote, Jaipur, NCR (Delhi | Gurgaon | Noida), Chennai, Bangarmau
5 - 10 yrs
₹15L - ₹30L / yr
Big Data
Hadoop
Spark
Apache Kafka
Amazon Web Services (AWS)
+2 more

Position: Big Data Engineer

What You'll Do

Punchh is seeking to hire a Big Data Engineer at either a senior or tech lead level. Reporting to the Director of Big Data, he/she will play a critical role in leading Punchh’s big data innovations. By leveraging prior industry experience in big data, he/she will help create cutting-edge data and analytics products for Punchh’s business partners.

This role requires close collaborations with data, engineering, and product organizations. His/her job functions include

  • Work with large data sets and implement sophisticated data pipelines with both structured and unstructured data.
  • Collaborate with stakeholders to design scalable solutions.
  • Manage and optimize our internal data pipeline that supports marketing, customer success, and data science, to name a few.
  • Act as a technical leader for Punchh’s big data platform that supports AI and BI products.
  • Work with the infra and operations teams to monitor and optimize existing infrastructure.
  • Occasional business travel is required.

What You'll Need

  • 5+ years of experience as a Big Data engineering professional, developing scalable big data solutions.
  • Advanced degree in computer science, engineering or other related fields.
  • Demonstrated strength in data modeling, data warehousing and SQL.
  • Extensive knowledge of cloud technologies, e.g. AWS and Azure.
  • Excellent software engineering background. High familiarity with software development life cycle. Familiarity with GitHub/Airflow.
  • Advanced knowledge of big data technologies, such as programming languages (Python, Java), relational databases (Postgres, MySQL), NoSQL (MongoDB), Hadoop (EMR), and streaming (Kafka, Spark).
  • Strong problem solving skills with demonstrated rigor in building and maintaining a complex data pipeline.
  • Exceptional communication skills and ability to articulate a complex concept with thoughtful, actionable recommendations.
Elucidata Corporation
Posted by Bhuvnesh Sharma
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹20L / yr
Big Data
JavaScript
AngularJS (1.x)
React.js
About Elucidata:
Our mission is to make the data-driven understanding of disease the default starting point in the drug discovery process. Our products and services further the understanding of the ways in which diseased cells are different from healthy ones. This understanding helps scientists discover new drugs more effectively and complements the move towards personalization. Biological big data will outpace the data generated by YouTube and Twitter by 10x in the next 7 years. Our platform Polly will enable scientists to process different kinds of biological data and generate insights from them to accelerate drug discovery. Polly is already being used at premier biopharma companies like Pfizer and Agios, and academic labs at Yale, MIT, and Washington University. We are looking for teammates who think out of the box and are not satisfied with quick fixes or canned solutions to our industry’s most challenging problems. If you seek an intellectually stimulating environment where you can have a major impact on a critically important industry, we’d like to talk to you.

About the Role:
We are looking for engineers who want to build data-rich applications and love the end-to-end product journey, from understanding customer needs to the final product.

Key Responsibilities:
- Developing web applications to visualize and process scientific data.
- Interacting with Product, Design, and Engineering teams to spec, build, test, and deploy new features.
- Understanding user needs and the science behind them.
- Mentoring junior developers.

Requirements:
- Minimum 3-4 years of experience working in web development
- In-depth knowledge of JavaScript
- Hands-on experience with modern frameworks (Angular, React)
- Sound programming and computer science fundamentals
- Good understanding of web architecture and single-page applications

You might be a great cultural fit for Elucidata if:
- You are passionate about science.
- You are a self-learner who wants to keep learning every day.
- You regard your code as your craft that you want to keep honing.
- You like to work hard to solve big challenges and enjoy the process of breaking down a problem one blow at a time.
- You love science and can’t stop being the geek at a party. Of course, you party harder than everybody else there.
Mintifi
Posted by Suchita Upadhyay
Mumbai
2 - 4 yrs
₹6L - ₹15L / yr
Big Data
Hadoop
MySQL
MongoDB
YARN
Job Title: Software Developer – Big Data

Responsibilities:
We are looking for a Big Data Developer who can drive innovation, take ownership, and deliver results.
• Understand business requirements from stakeholders
• Build and own Mintifi's big data applications
• Be heavily involved in every step of the product development process, from ideation to implementation to release
• Design and build systems with automated instrumentation and monitoring
• Write unit and integration tests
• Collaborate with cross-functional teams to validate and get feedback on the efficacy of results created by the big data applications, and use the feedback to improve the business logic
• Take a proactive approach to turning ambiguous problem spaces into clear design solutions

Qualifications:
• Hands-on programming skills in Apache Spark using Java or Scala
• Good understanding of data structures and algorithms
• Good understanding of relational and non-relational database concepts (MySQL, Hadoop, MongoDB)
• Experience with Hadoop ecosystem components like YARN and Zookeeper would be a strong plus
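The map→shuffle→reduce shape that Spark programs follow can be mirrored in a few lines of plain Python; the documents below are invented, and the real work would run on Spark in Java or Scala as the listing says:

```python
from itertools import groupby
from operator import itemgetter

docs = ["big data big plans", "data drives plans"]  # invented input

# Map: emit (word, 1) for every word in every document.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: bring identical keys together, as the framework
# would between the map and reduce stages.
mapped.sort(key=itemgetter(0))

# Reduce: sum the counts for each word.
counts = {word: sum(n for _, n in pairs)
          for word, pairs in groupby(mapped, key=itemgetter(0))}
print(counts)
```

Spark's value is running this same shape partitioned across a cluster, with fault tolerance and lazy evaluation layered on top.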
Bengaluru (Bangalore)
3 - 12 yrs
₹3L - ₹25L / yr
Java
Python
Spark
Hadoop
MongoDB
+3 more
We are a start-up in India seeking excellence in everything we do, with unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven big data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions keeping the user at the center of everything.

As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and solve complex technical customer challenges.

Expectations and Tasks:
- Total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce
- Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
- Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
- Experience in NLP, R, and Python
- Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
- Ability to communicate clearly and concisely across technology and business teams
- Excellent problem-solving and technical troubleshooting skills
- Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical Skills: Core Java, multithreading, collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at [email protected]. I will respond to you within 24 hours. Online assessments and pre-employment screening are part of the selection process.