Celebal Technologies

Technical Project Manager
Posted by Payal Hasnani
5 - 15 yrs
₹7L - ₹25L / yr
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Cloud Computing
NoSQL Databases
Apache Hive
Apache Spark
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects/project tracks
o Create and maintain project plans and related documentation covering project objectives, scope, schedule, and delivery milestones
o Lead and participate across all phases of software engineering, from requirements gathering to go-live
o Lead internal team meetings on solution architecture, effort estimation, manpower planning, and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct Scrum ceremonies such as Sprint Planning, Daily Stand-up, and Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and Scrum principles
o Hold the team accountable for their tasks and help them achieve their goals
o Identify skill gaps and create a skill-development plan for all team members
• Communication
o Be the single point of contact for the client for day-to-day communication
o Periodically communicate project status to all stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 8 years of experience (hands-on as well as leadership) in software/data engineering across multiple job functions such as Business Analysis, Development, Solutioning, QA, DevOps, and Project Management
• Hands-on as well as leadership experience in Big Data engineering projects
• Experience developing or managing cloud solutions on Azure or other cloud providers
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL databases, SQL, data warehousing, ETL/ELT, and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, Power BI, Machine Learning, and cloud infrastructure
• Background in BFSI with a focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B.Tech (Computers) / B.E. (Computers) / BCA / B.Sc. Computer Science

About Celebal Technologies

Founded: 2015
Size: 1000-5000
Stage: Profitable

Celebal Technologies is a leading software services company that specializes in Data Science, Big Data, and Enterprise Cloud solutions. With a focus on intelligent data solutions, Celebal Technologies helps organizations gain a competitive edge by leveraging cutting-edge technology to extract intelligence and patterns from data. The company's core offerings revolve around Data to Intelligence, which facilitates smarter and quicker decision-making for clients. Celebal Technologies' solutions are powered by Robotics, Artificial Intelligence, and Machine Learning algorithms, which improve business efficiency in an interconnected world. The company's clients include product ISVs and delivery organizations across the globe, who benefit from Celebal Technologies' niche expertise in product development and enterprise project implementations. Celebal Technologies is committed to delivering the highest standards of service quality and operational excellence, enabling clients across a wide range of industries to transform into data-driven enterprises. The company's tailor-made solutions help enterprises maximize productivity, speed, and accuracy, making Celebal Technologies a trusted partner for businesses looking to gain a competitive edge in today's data-driven world.


Similar jobs

Data Sutram
Posted by Ankit Das
Mumbai, Gurugram
2 - 10 yrs
Best in industry
Data Science
Python
Data Analytics
Pipeline management
Cloud Computing
+7 more
Data Sutram, funded by India Infoline (IIFL), Indian Angel Network (IAN), and 100x.VC (led by Sanjay Mehta), is an alternative-data company that uses external data feeds to create every location's DNA, which is used in credit underwriting, location profiling, site selection, and other use cases. It is one of the fastest-growing companies in Artificial Intelligence & Location Analytics in India. As a data scientist, you will work on our core product and on critical client use cases, exploring new solutions in collaboration with Business Analysts and fellow Data Scientists.

Roles and Responsibilities

  • Managing available resources such as hardware, data, and personnel so that deadlines are met
  • Analyzing the ML and deep learning algorithms that could be used to solve a given problem and ranking them by their probability of success
  • Exploring data to understand it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
  • Defining the validation framework and establishing a process to ensure acceptable data-quality criteria are met
  • Supervising the data acquisition and partnership roadmaps to create a stronger product for our customers
  • Defining the feature engineering process to ensure meaningful features are used, given business constraints that may vary by market
  • Devising self-learning strategies through analysis of errors from the models
  • Understanding business issues and context, devising a framework for solving unstructured problems, and articulating clear, actionable solutions underpinned by analytics
  • Managing multiple projects simultaneously while demonstrating business leadership, collaborating and coordinating with different functions to deliver solutions in a timely, efficient, and effective manner
  • Managing project resources optimally to deliver projects on time; driving innovation using residual resources to build a strong solution pipeline; providing direction, coaching, training, and feedback to project team members to enhance performance, support development, and encourage value-aligned behaviour; providing inputs for periodic performance appraisals of project team members

 

Preferred Technical & Professional expertise

  • Undergraduate degree in Computer Science, Engineering, Mathematics, Statistics, Economics, or other quantitative fields
  • At least 2 years of experience managing Data Science projects with a specialization in Machine Learning
  • In-depth knowledge of cloud analytics tools
  • Able to drive Python code optimization; ability to review code and provide inputs to improve its quality
  • Ability to evaluate hardware selection for running ML models for optimal performance
  • Up to date with Python libraries and versions for machine learning; extensive hands-on experience with regressors; experience working with data pipelines
  • Deep knowledge of math, probability, statistics, and algorithms; working knowledge of supervised, adversarial, and unsupervised learning
  • Deep analytical thinking with excellent problem-solving abilities
  • Strong verbal and written communication skills with a proven ability to work with all levels of management; effective interpersonal and influencing skills
  • Ability to manage a project team through effective allocation of tasks, anticipating risks, and setting realistic timelines to manage the expectations of key stakeholders
  • Strong organizational skills and an ability to balance and handle multiple concurrent tasks and issues
  • Ensure that the project team understands and abides by the compliance framework for policies, data, systems, etc. as per group, regional, and local standards
Fragma Data Systems
Posted by Evelyn Charles
Remote, Bengaluru (Bangalore)
3.5 - 8 yrs
₹5L - ₹18L / yr
PySpark
Data engineering
Data Warehouse (DWH)
SQL
Spark
+1 more
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (a short sketch follows this list)
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good grasp of ELT architecture: business-rules processing and data extraction from the Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
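
A minimal PySpark sketch of the DataFrame and Spark SQL work described above. The input path and column names (orders, created_at, amount, region, order_id) are hypothetical placeholders, not taken from the job description:

    # Minimal PySpark ELT sketch: DataFrame core functions plus Spark SQL.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

    # Extract: load raw data from a (hypothetical) data-lake path.
    orders = spark.read.parquet("/datalake/raw/orders")

    # Transform with DataFrame core functions: filter, derive, aggregate.
    daily = (orders
             .filter(F.col("amount") > 0)
             .withColumn("order_date", F.to_date("created_at"))
             .groupBy("order_date", "region")
             .agg(F.sum("amount").alias("total_amount"),
                  F.count("order_id").alias("order_count")))

    # The same aggregation expressed as Spark SQL.
    orders.createOrReplaceTempView("orders")
    daily_sql = spark.sql("""
        SELECT to_date(created_at) AS order_date, region,
               SUM(amount) AS total_amount, COUNT(order_id) AS order_count
        FROM orders WHERE amount > 0
        GROUP BY to_date(created_at), region
    """)

    # Load: write the curated output for business consumption.
    daily.write.mode("overwrite").parquet("/datalake/curated/daily_orders")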
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub
  • Experience migrating on-premises data warehouses to data platforms on the Azure cloud
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure, as available in HDInsight and Databricks (see the ADLS sketch after this list)
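
For illustration, a minimal sketch of reading from Azure Data Lake Storage Gen2 with PySpark, as one might on Databricks or HDInsight. The storage account, container, and path are hypothetical, and a production setup would normally authenticate with a service principal or managed identity rather than an account key:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("adls-sketch").getOrCreate()

    # Account-key auth for brevity only (hypothetical account name).
    spark.conf.set(
        "fs.azure.account.key.mystorageacct.dfs.core.windows.net",
        "<storage-account-key>")

    # Path layout: abfss://<container>@<account>.dfs.core.windows.net/<path>
    df = spark.read.parquet(
        "abfss://raw@mystorageacct.dfs.core.windows.net/sales/")
    df.groupBy("region").count().show()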
 
Good to Have:
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets and Informatica
  • Experience with pre-sales activities (responding to RFPs, executing quick POCs)
  • Capacity planning and performance tuning on the Azure stack and Spark
DataMetica
Posted by Nikita Aher
Pune, Hyderabad
3 - 12 yrs
₹5L - ₹25L / yr
Apache Kafka
Big Data
Hadoop
Apache Hive
Java
+1 more

Summary
Our Kafka developer combines technical skills, communication skills, and business knowledge, and should be able to work on multiple medium-to-large projects. The successful candidate will have excellent technical skills in Apache/Confluent Kafka and an enterprise data warehouse (preferably GCP BigQuery or an equivalent cloud EDW), and will be able to take oral and written business requirements and develop efficient code to meet set deliverables.

 

Must Have Skills

  • Participate in the development, enhancement, and maintenance of data applications, both as an individual contributor and as a lead
  • Lead the identification, isolation, resolution, and communication of problems within the production environment
  • Act as lead developer, applying technical skills in Apache/Confluent Kafka (preferred) or AWS Kinesis (optional), and in a cloud enterprise data warehouse: Google BigQuery (preferred), AWS Redshift, or Snowflake (optional)
  • Design and recommend the best approach for data movement from different sources to the cloud EDW using Apache/Confluent Kafka
  • Perform independent functional and technical analysis for major projects supporting several corporate initiatives
  • Communicate and work with IT partners and the user community at all levels, from senior management to developers to business SMEs, for project definition
  • Work on multiple platforms and multiple projects concurrently
  • Perform code and unit testing for complex-scope modules and projects
  • Provide expertise and hands-on experience with Kafka Connect using Schema Registry in a very high-volume environment (~900 million messages); a minimal producer/consumer sketch follows this list
  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStreams, and Confluent Control Center
  • Provide expertise and hands-on experience with AvroConverter, JsonConverter, and StringConverter
  • Provide expertise and hands-on experience with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, as well as tasks, workers, converters, and transforms
  • Provide expertise and hands-on experience with custom connectors using Kafka core concepts and the API
  • Working knowledge of the Kafka REST Proxy
  • Ensure optimum performance, high availability, and stability of solutions
  • Create topics, set up redundancy clusters, deploy monitoring tools and alerts, and apply knowledge of best practices
  • Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms; leverage Hadoop-ecosystem knowledge to design and develop solutions using Spark, Scala, Python, Hive, Kafka, and other Hadoop-ecosystem tools
  • Use automation tools for provisioning, such as Jenkins, uDeploy, or relevant technologies
  • Ability to perform data-related benchmarking, performance analysis, and tuning
  • Strong skills in in-memory applications, database design, and data integration
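
A minimal producer/consumer sketch using the confluent-kafka Python client. The broker address, topic name, and payload are hypothetical placeholders:

    from confluent_kafka import Producer, Consumer

    conf = {"bootstrap.servers": "localhost:9092"}

    # Produce one message to a (hypothetical) topic.
    producer = Producer(conf)
    producer.produce("orders", key="order-1", value=b'{"amount": 42}')
    producer.flush()  # block until the broker confirms delivery

    # Consume it back from the beginning of the topic.
    consumer = Consumer({**conf,
                         "group.id": "orders-reader",
                         "auto.offset.reset": "earliest"})
    consumer.subscribe(["orders"])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
    consumer.close()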
DataMetica
Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India - Pune, Hyderabad

Experience: 7 - 12 Years

Management Level: 7

Joining Time: Immediate joiners are preferred


  • Attend requirements-gathering workshops, estimation discussions, design meetings, and status review meetings
  • Experience in solution design and solution architecture for the data engineering model, to build and implement Big Data projects on-premises and in the cloud
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture, and delivery-planning activities for data-intensive, high-throughput platforms and applications
  • Ability to benchmark systems, analyse system bottlenecks, and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning, and governance of implementing projects of any kind
  • Develop, construct, test, and maintain architectures, and run Sprints for the development and rollout of functionality
  • Data analysis and code development experience, ideally in Big Data: Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types, i.e., design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
  • Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics program code on any cloud platform


Perks and Benefits we Provide!


  • Working with highly technical, passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

Bengaluru (Bangalore)
1 - 5 yrs
₹15L - ₹20L / yr
Spark
Big Data
Data Engineer
Hadoop
Apache Kafka
+4 more
  • 1-5 years of experience building and maintaining robust data pipelines, enriching data, and building low-latency, high-performance data analytics applications
  • Experience handling complex, high-volume, multi-dimensional data and architecting data products on streaming, serverless, and microservices-based architectures and platforms
  • Experience in data warehousing, data modeling, and data architecture
  • Expert-level proficiency with relational and NoSQL databases
  • Expert-level proficiency in Python and PySpark
  • Familiarity with Big Data technologies and utilities (Spark, Hive, Kafka, Airflow); a minimal Airflow DAG sketch follows this list
  • Familiarity with cloud services (preferably AWS)
  • Familiarity with MLOps processes such as data labeling, model deployment, the data-model feedback loop, and data drift
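
As a rough illustration of the orchestration side, a minimal Apache Airflow 2.x DAG. The DAG id, task names, and the extract/enrich callables are hypothetical placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        ...  # pull raw events from the source system

    def enrich():
        ...  # join reference data, validate completeness

    with DAG(dag_id="daily_events",
             start_date=datetime(2021, 1, 1),
             schedule_interval="@daily",
             catchup=False) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        enrich_task = PythonOperator(task_id="enrich", python_callable=enrich)
        extract_task >> enrich_task  # enrich runs only after extract succeeds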

Key Roles/Responsibilities:

  • Act as a technical leader in resolving problems, working with both technical and non-technical audiences
  • Identify and solve issues with data pipelines regarding consistency, integrity, and completeness
  • Lead data initiatives, architecture design discussions, and the implementation of next-generation BI solutions
  • Partner with data scientists and tech architects to build advanced, scalable, efficient self-service BI infrastructure
  • Provide thought leadership and mentor data engineers in information presentation and delivery

 

 

Recko
Agency job via Zyoin Web Private Limited by Chandrakala M
Bengaluru (Bangalore)
3 - 7 yrs
₹16L - ₹40L / yr
Big Data
Hadoop
Spark
Apache Hive
Data engineering
+6 more

Recko Inc. is looking for data engineers to join our kick-ass engineering team. We are looking for smart, dynamic individuals to connect all the pieces of the data ecosystem.

 

What are we looking for:

  1. 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL, or MSSQL, and experience working with Big Data frameworks, platforms, and data stores such as Hadoop, HDFS, Spark, Oozie, Hue, EMR, Scala, Hive, Glue, Kerberos, etc.

  2. Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud

  3. 2+ years of experience with public cloud services such as AWS, Azure, or GCP, and languages like Java/Python, etc.

  4. 2+ years of development experience with Amazon Redshift, Google BigQuery, or Azure data warehouse platforms preferred

  5. Knowledge of statistical analysis tools like R, SAS etc 

  6. Familiarity with any data visualization software

  7. A growth mindset and a passion for building things from the ground up, and most importantly, you should be fun to work with

As a data engineer at Recko, you will:

  1. Create and maintain optimal data pipeline architecture.

  2. Assemble large, complex data sets that meet functional / non-functional business requirements.

  3. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  4. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (a sketch follows this list).

  5. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

  6. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

  7. Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

  8. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

  9. Work with data and analytics experts to strive for greater functionality in our data systems.
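
A sketch of the kind of ETL step point 4 describes: extracting from a relational source over JDBC with Spark and landing partitioned Parquet on S3. The connection URL, table, credentials, and bucket are hypothetical placeholders, and a JDBC driver is assumed to be on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read a source table over JDBC (hypothetical connection).
    txns = (spark.read.format("jdbc")
            .option("url", "jdbc:mysql://source-db:3306/payments")
            .option("dbtable", "transactions")
            .option("user", "etl_user")
            .option("password", "<secret>")
            .load())

    # Transform: derive a partition column from the event timestamp.
    txns = txns.withColumn("txn_date", F.to_date("created_at"))

    # Load: partitioned Parquet on S3 for downstream analytics.
    (txns.write
         .mode("append")
         .partitionBy("txn_date")
         .parquet("s3a://analytics-bucket/transactions/"))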

 

About Recko: 

Recko was founded in 2017 to organise the world’s transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks, and financial institutions are finding it difficult to keep track of the money flowing across their systems. With the Recko Platform, businesses can build, integrate, and adapt innovative and complex financial use cases within the organization and across external payment ecosystems with agility, confidence, and at scale. Today, customer-obsessed brands such as Deliveroo, Meesho, Grofers, Dunzo, and Acommerce use Recko so their finance teams can optimize resources with automation and prioritize growth over repetitive, time-consuming day-to-day operational tasks.

 

Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners, and Locus Ventures. Traditionally, enterprise software has been built around functionality. We believe software is an extension of one’s capability, and it should be delightful and fun to use.

 

Working at Recko: 

We believe that great companies are built by amazing people. At Recko, we are a group of young engineers, product managers, analysts, and business folks on a mission to bring consumer-tech DNA to enterprise fintech applications. The current team at Recko is 60+ members strong, with stellar experience across fintech, e-commerce, and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle, etc. We are growing aggressively across verticals.

Maveric Systems
Posted by Rashmi Poovaiah
Bengaluru (Bangalore), Chennai, Pune
4 - 10 yrs
₹8L - ₹15L / yr
Big Data
Hadoop
Spark
Apache Kafka
HiveQL
+2 more

Role Summary/Purpose:

We are looking for Developers/Senior Developers to be part of building an advanced analytical platform that leverages Big Data technologies and transforms legacy systems. This is an exciting, fast-paced, constantly changing and challenging work environment, and the role will play an important part in resolving and influencing high-level decisions.

 

Requirements:

  • The candidate must be a self-starter who can work under general guidelines in a fast-paced environment.
  • Overall minimum of 4 to 8 years of software development experience, with 2 years of Data Warehousing domain knowledge
  • Must have 3 years of hands-on working knowledge of Big Data technologies such as Hadoop, Hive, HBase, Spark, Kafka, Spark Streaming, Scala, etc.
  • Excellent knowledge of SQL & Linux shell scripting
  • Bachelor's/Master's/Engineering degree from a well-reputed university
  • Strong communication, interpersonal, learning, and organizing skills, matched with the ability to manage stress, time, and people effectively
  • Proven experience coordinating many dependencies and multiple demanding stakeholders in a complex, large-scale deployment environment
  • Ability to manage a diverse and challenging stakeholder community
  • Diverse knowledge and experience of working on Agile deliveries and Scrum teams

 

Responsibilities

  • Should work as a senior developer/individual contributor depending on the situation
  • Should be part of Scrum discussions and take requirements
  • Adhere to the Scrum timeline and deliver accordingly
  • Participate in a team environment for design, development, and implementation
  • Should take up L3 activities on a need basis
  • Prepare Unit/SIT/UAT test cases and log the results
  • Coordinate SIT and UAT testing; take feedback and provide necessary remediation/recommendations in time
  • Quality delivery and automation should be top priorities
  • Coordinate change and deployment in time
  • Should create healthy harmony within the team
  • Own interaction points with members of the core team (e.g., BA, testing, and business teams) and any other relevant stakeholders
1CH
Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
+3 more
  • Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions
  • Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight
  • Expert in writing T-SQL programming for complex stored procedures, functions, views, and query optimization
  • Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms (a scikit-learn sketch follows this list)
  • PolyBase queries for exporting and importing data into Azure Data Lake
  • Building data models, both tabular and multidimensional, using SQL Server Data Tools
  • Writing data preparation, cleaning, and processing steps using Python, Scala, and R
  • Programming experience using the Python libraries NumPy, Pandas, and Matplotlib
  • Implementing NoSQL databases and writing queries using Cypher
  • Designing end-user visualizations using Power BI, QlikView, and Tableau
  • Experience working with all versions of SQL Server: 2005/2008/2008 R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX
  • Experience migrating on-premises SQL Server databases to Microsoft Azure
  • Hands-on experience using Azure Blob Storage and Azure Data Lake Storage Gen1 and Gen2
  • Performance tuning of complex SQL queries; hands-on experience using SQL Extended Events
  • Data modeling using Power BI for ad-hoc reporting
  • Raw-data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premises databases to SQL Azure
  • Experience using U-SQL for Azure Data Lake Analytics
  • Hands-on experience generating SSRS reports using MDX
  • Experience designing predictive models using Python and SQL Server
  • Developing machine learning models using Azure Databricks and SQL Server
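
A small scikit-learn sketch of the model families named above (logistic regression and random forest). The dataset is synthetic, since the role's actual data sources are not specified here:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real training set.
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Fit both model families and compare holdout accuracy.
    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(n_estimators=100, random_state=0)):
        model.fit(X_train, y_train)
        print(type(model).__name__,
              accuracy_score(y_test, model.predict(X_test)))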
Data Team
Agency job via Oceanworld by Chandan J
Remote only
8 - 12 yrs
₹10L - ₹20L / yr
Big Data
Data engineering
Hadoop
data engineer
Apache Hive
+1 more
Senior Data Engineer (SDE)

(Hadoop, HDFS, Kafka, Spark, Hive)

Overall Experience - 8 to 12 years

Relevant experience in Big Data - 3+ years of the above

Salary: up to ₹20 LPA

Job location - Chennai / Bangalore / 

Notice Period - Immediate joiners / 15 to 20 days max

The Responsibilities of The Senior Data Engineer Are:

- Requirements gathering and assessment

- Break down complexity and translate requirements into specification artifacts and storyboards to build towards, using a test-driven approach

- Engineer scalable data pipelines using big data technologies including but not limited to Hadoop, HDFS, Kafka, HBase, Elastic

- Implement the pipelines using execution frameworks including but not limited to MapReduce, Spark, and Hive, using Java/Scala/Python for application design (a streaming sketch follows this list)

- Mentor juniors in a dynamic team setting

- Manage stakeholders with proactive communication upholding TheDataTeam's brand and values
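
A minimal sketch of one such pipeline: Spark Structured Streaming reading a Kafka topic and landing the events on HDFS. The broker, topic, and paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

    # Source: subscribe to a (hypothetical) Kafka topic.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load()
              .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)"))

    # Sink: append the raw events to HDFS as Parquet, with checkpointing.
    query = (events.writeStream
             .format("parquet")
             .option("path", "hdfs:///data/events/")
             .option("checkpointLocation", "hdfs:///checkpoints/events/")
             .start())
    query.awaitTermination()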

A Candidate Must Have the Following Skills:

- Strong problem-solving ability

- Excellent software design and implementation ability

- Exposure and commitment to agile methodologies

- Detail-oriented, with a willingness to proactively own software tasks as well as management tasks, and see them to completion with minimal guidance

- Minimum 8 years of experience

- Should have experience in the full life cycle of at least one big data application

- Strong understanding of various storage formats (ORC/Parquet/Avro)

- Should have hands-on experience in one of the Hadoop distributions (Hortonworks/Cloudera/MapR)

- Experience in at least one cloud environment (GCP/AWS/Azure)

- Should be well versed with at least one database (MySQL/Oracle/MongoDB/Postgres)

- Bachelor's in Computer Science, and preferably a Master's as well

- Should have good code review and debugging skills

Additional skills (Good to have):

- Experience in containerization (Docker/Heroku)

- Exposure to microservices

- Exposure to DevOps practices

- Experience in performance tuning of big data applications
Poker Yoga
Posted by Anuj Kumar Kodam
Bengaluru (Bangalore)
2 - 4 yrs
₹13L - ₹18L / yr
Elasticsearch
MongoDB
NoSQL Databases
Redis
Relational Database (RDBMS)
At Poker Yoga we aim to make poker a tool for self-transformation: providing players the tools to improve their skill, a learning framework to bring skill to the core of their approach to the game, and experiences to enhance their perception. We are looking for passionate coders who love building products that speak for themselves. It's an invitation to join a family, not a company. Looking forward to working with you!