
Data Engineer
at a cloud infrastructure solutions and support company. (SE1)
- Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
- Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
- Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
- Build data pipelines that clean, transform, and aggregate data from disparate sources.
- Develop, maintain, and optimize ETL jobs to increase data accuracy, data stability, data availability, and pipeline performance.
- Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
- Stay up to date with advances in data persistence and big data technologies, and run pilots to design data architecture that scales with the growing data sets of the consumer experience.
- Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.
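The pipeline responsibilities above boil down to a clean → transform → aggregate flow. A minimal pure-Python sketch of that pattern (the `region`/`amount` field names are illustrative, not from the posting):

```python
from collections import defaultdict

def clean(records):
    """Drop rows with missing keys and normalize field types."""
    for r in records:
        if r.get("region") and r.get("amount") is not None:
            yield {"region": r["region"].strip().lower(), "amount": float(r["amount"])}

def aggregate(records):
    """Sum amounts per region, as a SQL GROUP BY would."""
    totals = defaultdict(float)
    for r in records:
        totals[r["region"]] += r["amount"]
    return dict(totals)

raw = [
    {"region": " North ", "amount": "10.5"},
    {"region": "north", "amount": 4.5},
    {"region": None, "amount": 99},       # dropped by clean()
    {"region": "South", "amount": "7"},
]
print(aggregate(clean(raw)))  # {'north': 15.0, 'south': 7.0}
```

In a production pipeline each stage would read from and write to real sources (Sqoop imports, Hive tables, S3), but the clean/transform/aggregate structure is the same.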
Job Qualifications:
- Bachelor’s or master’s degree in Computer Science, Information Management, Statistics, or a related field
- 5+ years of experience in the consumer or healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing it, and clearly presenting analyses to members of the data science team.
- Technical expertise with data models and data mining.
- Hands-on knowledge of programming languages such as Java, Python, R, and Scala.
- Strong knowledge of big data tools such as Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
- Knowledge of tools such as AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
- Hands-on knowledge of SQL and NoSQL database design.
- Knowledge of CI/CD for building and hosting solutions.
- An AWS certification is an added advantage.
- Strong knowledge of visualization tools such as Tableau and QlikView is an added advantage.
- A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists

Experience: 3–7 Years
Locations: Pune / Bangalore / Mumbai
Notice Period: Immediate joiners only
Employment Type: Full-time
🛠️ Key Skills (Mandatory):
- Python: Strong coding skills for data manipulation and automation.
- PySpark: Experience with distributed data processing using Spark.
- SQL: Proficient in writing complex queries for data extraction and transformation.
- Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow
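As a stand-in for the warehouse engines named above, the kind of complex extraction-and-transformation query the posting asks for can be sketched with the stdlib `sqlite3` module; the `events` schema and values are hypothetical:

```python
import sqlite3

# In-memory database with an illustrative events table (schema is invented).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (user_id INTEGER, event TEXT, amount REAL);
    INSERT INTO events VALUES
        (1, 'purchase', 20.0), (1, 'purchase', 5.0),
        (2, 'refund', -5.0),   (2, 'purchase', 40.0);
""")

# Extraction + transformation in one query: per-user net spend,
# keeping only users whose net is positive (a HAVING filter).
rows = con.execute("""
    SELECT user_id, SUM(amount) AS net
    FROM events
    GROUP BY user_id
    HAVING net > 0
    ORDER BY net DESC
""").fetchall()
print(rows)  # [(2, 35.0), (1, 25.0)]
```

The same SQL runs largely unchanged on Databricks SQL or Redshift; only the connection layer differs.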
Interested candidates, please share your resume with the details below.
Total Experience -
Relevant Experience in Python, PySpark, SQL, Azure Databricks -
Current CTC -
Expected CTC -
Notice period -
Current Location -
Desired Location -
🌟 HR Internship (Unpaid Training Period – Coimbatore)
Company: KGiSL MicroCollege (Part of the KG Group)
Location: Coimbatore
Duration: 3 Months (Training Period – Unpaid)
Post-Training: Eligible for salary upon successful completion and performance
About Us
KGiSL MicroCollege, part of the renowned KG Group, is committed to empowering fresh graduates through skill-based learning and real-time industry exposure. We offer hands-on training and growth opportunities in various corporate domains.
Internship Overview
We are looking for enthusiastic freshers who are eager to build a career in Human Resources (HR). This internship offers end-to-end HR training and hands-on experience in real-time HR functions.
What You’ll Learn
- Recruitment process (sourcing, screening, scheduling interviews)
- Onboarding and induction procedures
- Employee engagement activities
- HR operations and documentation
- Payroll basics and compliance overview
- Performance management systems
- HR policies and process implementation
Eligibility
- Fresh graduates (any discipline) passionate about pursuing a career in HR
- Strong communication and interpersonal skills
- Eager to learn and grow in a corporate environment
Training Details
- Duration: 3 Months (Unpaid)
- Type: Full-time, On-site (Coimbatore)
- Post Training: Stipend/salary based on performance after successful completion
Why Join Us
- Get real-world HR experience and professional mentorship
- Gain exposure to corporate HR practices
- Opportunity to get absorbed into the HR team after the internship period
- Big data developer with 8+ years of professional IT experience, with expertise in Hadoop ecosystem components spanning ingestion, data modeling, querying, processing, storage, analysis, data integration, and the implementation of enterprise-level big data systems.
- A skilled developer with strong problem-solving, debugging, and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components such as Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala, and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames, Kafka, Spark Streaming, and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, including NameNode, DataNode, ResourceManager, NodeManager, and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installation, Configuration, Managing of Big Data and underlying infrastructure of Hadoop Cluster.
- Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on clusters for analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing the DataFrame and Spark SQL APIs for faster data processing; handled importing data from different sources into HDFS using Sqoop, and performed transformations using Hive and MapReduce before loading the results back into HDFS.
- Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence, customer segmentation, and smooth maintenance of Spark Streaming jobs.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Created data pipelines for ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as feeds for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
- Extensively worked on Spark using Python on clusters for analytics; installed it on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
- Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provisioned the access needed to pull information for analysis.
- Generated various kinds of knowledge reports using Power BI based on Business specification.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Experienced in projects using JIRA, testing, Maven, and Jenkins build tools.
- Experienced in designing, building, deploying, and utilizing most of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development and with software methodologies such as Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data using Sqoop and SFTP from sources such as RDBMS, Teradata, mainframes, Oracle, and Netezza into HDFS, and performing transformations on it using Hive, Pig, and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra, and MongoDB, and their integration with the Hadoop cluster.
- Hands on experience in Hadoop Big data technology working on MapReduce, Pig, Hive as Analysis tool, Sqoop and Flume data import/export tools.
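One of the bullets above mentions converting Hive/SQL queries into RDD transformations. The shape of that conversion (filter → map to key/value → reduce by key) can be sketched without a Spark cluster in plain Python; in PySpark the same pipeline would be `rdd.filter(...).map(...).reduceByKey(add)`:

```python
from itertools import groupby
from operator import itemgetter

# Hive/SQL equivalent (table and columns are invented for illustration):
#   SELECT dept, SUM(salary) FROM emp WHERE salary > 1000 GROUP BY dept
emp = [("eng", 1200), ("eng", 800), ("hr", 1500), ("hr", 1100)]

# RDD-style pipeline: filter (WHERE) -> sort by key (the shuffle) -> reduce per key.
filtered = [row for row in emp if row[1] > 1000]
keyed = sorted(filtered, key=itemgetter(0))
totals = {k: sum(v for _, v in grp) for k, grp in groupby(keyed, key=itemgetter(0))}
print(totals)  # {'eng': 1200, 'hr': 2600}
```

The sort stands in for Spark's shuffle: `reduceByKey` only works once rows with the same key are brought together.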
Looking for Python with React.
Experience with Python frameworks such as Django or Flask.
Ability to develop RESTful APIs or GraphQL endpoints.
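A RESTful JSON endpoint of the kind mentioned above is normally built with Flask or Django; as a dependency-free sketch, here is the same idea expressed directly against the WSGI interface those frameworks wrap (the `/api/health` route is hypothetical):

```python
import json

def app(environ, start_response):
    """Minimal WSGI app exposing one JSON GET endpoint."""
    if environ["PATH_INFO"] == "/api/health" and environ["REQUEST_METHOD"] == "GET":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]

# Exercise the app directly, without starting a server:
captured = {}
def start_response(status, headers):
    captured["status"] = status

result = app({"PATH_INFO": "/api/health", "REQUEST_METHOD": "GET"}, start_response)
print(captured["status"], result[0])  # 200 OK b'{"status": "ok"}'
```

In Flask the same endpoint would be a `@app.route("/api/health")` function returning `jsonify({"status": "ok"})`; the React frontend consumes either form identically.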
We are an IT recruitment service provider based out of Gurgaon. We are partners with various established names like Wipro, Aon, Infinite Computer Solutions, IBM, TCS, Fiserv, Accenture, and many more.
As discussed, please find the job description for the position of Salesforce Project Manager.
JD for Salesforce Tech Lead
Exp level: 7+ yrs.
Location: Pune, Bangalore, Hyderabad, Chennai, Jaipur
NP: Immediate to 15 days
Skills:
Salesforce developer, LWC, Rest API, Lead experience
Job description
Client: MNC (WFO)
Role: ANGULAR DOT NET DEVELOPER
Exp: 5 years to 8 years
CTC: Up to 28 LPA.
NOTE: Serving notice period (30 days) or immediate joiners only.
Only candidates based in Bangalore should apply.
Hands-on working knowledge and experience is required in:
a. Extensive experience working on C#, .net and .net core frameworks
b. Relational Databases (SQL Server, etc.)
c. Experience with Angular is a must
d. Agile Methodologies (Scrum, TDD, BDD, etc.)
e. Experience working with distributed teams across regions and time zones
f. Strong organizational skills
g. Display detailed, critical, quality-oriented, skeptical thinking about the product
• Experience with several of the following tools/technologies is desirable:
a. GIT, Jira, Jenkins, SharePoint, Visual Studio Code.
b. Microservices Architecture, Domain-Driven Design; Test-Driven Development is a must
c. Design Patterns and implementing the Design Patterns
d. Development of Complex Application and System Architectures
e. Data Structures and Algorithms using C# and .NET
f. Experience working in Azure Cloud will be a big bonus as all our systems are in the Cloud
a. Azure App Services
b. Azure Functions
c. Azure Cosmos
d. Azure K8S
• Knowledge of the following technologies is a plus:
a. Continuous Integration and Continuous Delivery Tools like Azure DevOps, GitHub, Git, etc.
b. Containerization technologies (Docker).
Industry
- Information Technology & Services
Employment Type
Full-time
- Should have 5+ years of hands-on experience in Android mobile app development.
- Proficient in Kotlin and the Android SDK, with location tracking, route maps, Google Maps/Voice API integration, regional language support, etc.
- Understanding of design patterns and mobile architecture using frameworks such as MVVM / MVC / MVP.
- Familiarity with RESTful APIs and their integration, and with code versioning tools such as Git.
- Problem solver with good analytical skills.
- Passion to work in dynamic, start-up environment.
- Compensation best in the industry
- Has experience of working in a graphics environment, working to a brief and delivering to deadlines, with some evidence of design management capability
- Familiarity with best practices for graphics and video content on social media platforms (YouTube, LinkedIn, Facebook, Twitter, Instagram)
- Should optimize existing graphics and video content
- Use of Microsoft Office, Dropbox, and G Suite.
- Has advanced knowledge of Adobe Creative Suite (mainly InDesign, Illustrator, Premiere, and Photoshop)
Candidate Specification
- A strong portfolio of illustrations or other graphics.
- A keen eye for aesthetics and details.
- Communication skills.
- A degree in Design, Fine Arts, or a related field is a plus.
- Is self-starting, highly motivated, and hands-on
- Creativity and awareness of current design trends
- Can manage time efficiently, able to prioritize, works fast and accurately to meet tight deadlines
Product Manager
Location : Gurgaon
Education Qualification : B.Tech/M.Tech, MBA from Tier 1 institute preferred
Skills Required: Define the product line, identify market requirements, define the product vision, create preliminary design concepts, and drive the implementation of the product roadmap.
Experience : 3 to 6 years
We are looking to hire a smart product manager who is passionate about taking up ideas and
bringing them to life. You will engage with our customer base directly, and decide the future of the product by working closely with key partners. You will also collaborate with marketing, sales and support to ensure smooth product roll out and user engagement.
Responsibilities:
- As a product manager, you will take end-to-end ownership of a product line: identify market requirements, define the product vision, create preliminary design concepts, and drive the implementation of the product roadmap.
- The ideal candidate is a strategic thinker who is extremely analytical, detail-oriented, and great at multitasking and prioritizing.
- Analyze the effectiveness of the current features and define ways to improve them in order to reduce failure rates, increase usage and improve booking conversion rates based on analytics data and good understanding of customer needs.
- Work closely with the design & development teams to explain requirements of new features and change requests for current features.
- Closely manage roll out of all product requirements with focus on both quality and time to market.
- In charge of competitor analysis, market analysis, product analysis, business cases, product evolution, feature prioritization, high level requirement gathering, and product marketing.
Requirements :
- Works well under pressure, with the ability to work within stringent timelines and to collaborate across departments to achieve results.
- Must be a true problem solver with outstanding skills in discovery techniques and the proven ability to translate the underlying business needs of requests into actionable items.
- A strong focus on detail, with the proven ability to ensure data and related reports are thorough and totally reliable.
- A demonstrated enthusiasm for learning and working with new tools and technologies is essential.
- Experience with business users and able to translate business requirements into software requirements.
- B.E./B.Tech is a must; an MBA from a premier institute is preferred
Designation - SDE II / III (3D team)
About Livspace
Livspace is India’s trusted interior design and renovation platform that connects interior designers, homeowners and vendors. For homeowners, Livspace is their one-stop destination for all things interiors. For interior designers and vendors, we’ve streamlined their workflow from design all the way to delivery through powerful and innovative technology.
We’re currently in nine Indian metro areas. We’ve made over 20,000 customers happy by delivering their dream homes to them. With over 3,500 interior designers on board, we’re the largest design community India has seen. We employ over 2000 passionate individuals who continue to grow and be a part of this exciting journey.
If you value autonomy, enjoy challenges, believe in getting things done and can work with minimal supervision, come join us.
Skills required (Must haves):
- Should have worked with JavaScript for a minimum of 2 years; frontend (preferred) or backend
- Should have knowledge of web development
- Should have good logical thinking and be able to code using best practices
- Should have knowledge of AngularJS or any MVC framework
Good to Have (Plus Points):
- Some knowledge of Python/Java
- Some knowledge of databases (MySQL)
- Some knowledge of Three.js and 3D tools such as Blender, 3ds Max, etc.
- Some knowledge of geometry/mathematics
- Some functional knowledge of CSS styling, though we are not looking for a UI developer
What you will be working on:
- Active development of a web-based 3D design tool used by interior designers to create awesome designs
- Creation of a builder tool that helps create dynamic products (variable dimensions and finishes) at run time
- Development of a rule engine that takes care of ever-growing product & design rules
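The geometry/mathematics requirement shows up constantly in a web-based design tool. As one concrete example (the L-shaped room below is invented for illustration), the floor area of a room can be computed from its corner coordinates with the shoelace formula:

```python
def polygon_area(points):
    """Shoelace formula: absolute area of a simple polygon given its vertices in order."""
    n = len(points)
    signed = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        signed += x1 * y2 - x2 * y1
    return abs(signed) / 2.0

# An L-shaped room: a 4m x 3m rectangle with a 1m x 1m corner notch removed.
room = [(0, 0), (4, 0), (4, 3), (1, 3), (1, 2), (0, 2)]
print(polygon_area(room))  # 11.0
```

The same formula, lifted to 3D face by face, underpins area and volume calculations in tools like Three.js or Blender.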








