We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud and knowledge of on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience : 4-10 years
Location : Pune
Mandatory Skills -
- Strong in ETL/SQL development
- Strong Data Warehousing skills
- Hands-on experience working with Unix/Linux
- Development experience in Enterprise Data warehouse projects
- Good to have: experience with Python and shell scripting
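As a rough illustration of the ETL/SQL development skills the role calls for, here is a minimal extract-transform-load sketch in Python. The table and column names are hypothetical, and sqlite3 stands in for an enterprise data warehouse:

```python
import sqlite3

# Minimal ETL sketch: extract rows from a staging table, transform
# (cast text to numeric), and load an aggregate into a target table.
# Table/column names are hypothetical; sqlite3 is a stand-in warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: staging data as it might arrive from a source system.
cur.execute("CREATE TABLE stg_orders (id INTEGER, amount TEXT, region TEXT)")
cur.executemany(
    "INSERT INTO stg_orders VALUES (?, ?, ?)",
    [(1, "10.50", "west"), (2, "20.00", "west"), (3, "5.25", "east")],
)

# Transform + load: aggregate per region into a fact table.
cur.execute("""
    CREATE TABLE fct_region_sales AS
    SELECT region, SUM(CAST(amount AS REAL)) AS total_amount
    FROM stg_orders
    GROUP BY region
""")
rows = dict(cur.execute("SELECT region, total_amount FROM fct_region_sales"))
print(rows)
```

In a real warehouse project the same pattern would run against Teradata, Netezza, or a cloud warehouse, typically orchestrated by shell scripts or a scheduler rather than inline Python.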
Opportunities -
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps Tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka
- Will get the chance to be part of enterprise-grade implementations of Cloud and Big Data systems
- Will play an active role in setting up modern data platforms based on Cloud and Big Data
- Will be part of teams with rich experience in various aspects of distributed systems and computing
About Us!
A global leader in Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.
We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican – Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years have been the key factors in our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
Similar jobs
- Big Data developer with 8+ years of professional IT experience and expertise in Hadoop ecosystem components for ingestion, data modeling, querying, processing, storage, analysis and data integration, and in implementing enterprise-level systems spanning Big Data.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, ZooKeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
- Hands-on experience in creating real-time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark Streaming and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, including NameNode, DataNode, ResourceManager, NodeManager and Job History Server.
- Worked on both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installation, Configuration, Managing of Big Data and underlying infrastructure of Hadoop Cluster.
- Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked with Spark (Scala) on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications combining Spark with Hive and SQL/Oracle.
- Implemented Spark applications in Python, using DataFrames and the Spark SQL API for faster data processing; handled importing data from different sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading the data into HDFS.
- Used the Spark DataFrames API on the Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib for predictive intelligence, customer segmentation and smoother maintenance of Spark Streaming jobs.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Created data pipelines for ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as the feed for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
- Extensively worked with Spark (Python) on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications combining Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
- Experience in creating Data frames using PySpark and performing operation on the Data frames using Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on Business specification.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Well experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance and auto-scaling.
- Good experience with use-case development and with software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience in importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza into HDFS, and performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra and MongoDB, and their integration with Hadoop clusters.
- Hands on experience in Hadoop Big data technology working on MapReduce, Pig, Hive as Analysis tool, Sqoop and Flume data import/export tools.
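Several of the bullets above mention coding MapReduce programs in Python. As a minimal local sketch of that model (running in-process, not on an actual Hadoop cluster), a word count can be expressed as map, shuffle and reduce phases:

```python
from collections import defaultdict
from itertools import chain

# Local sketch of the MapReduce model: a word count.
# map emits (word, 1) pairs; shuffle groups pairs by key; reduce sums
# each group. On a real cluster, Hadoop/Spark distribute these phases.

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in the line."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle: group all values by their key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big analytics", "data pipelines"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'analytics': 1, 'pipelines': 1}
```

The same three-phase structure underlies Hadoop Streaming jobs and Spark's `rdd.flatMap(...).reduceByKey(...)` pattern mentioned in the Hive-to-RDD bullet above.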
Company Overview
Universal Transit is a leading auto transport company providing AI-powered car shipping solutions across the United States. Our services span nationwide car shipping with advanced technology, ensuring optimized, efficient, and cost-effective transport. Universal Transit prioritizes customer satisfaction and innovative service and is committed to transforming the auto transport industry through technology-driven solutions.
Position Summary
We are seeking a dynamic Sales Representative with proven sales experience in the U.S. market. While industry knowledge in auto transport is a plus, it is not required. We’re looking for someone with a strong sales background in any service-oriented industry who excels in client engagement and lead conversion. Proficiency in English at the C2 level, with a clear accent close to American, is essential as this role involves building rapport with a diverse U.S.-based client base. In this role, you’ll prospect for new business opportunities while managing and growing relationships with established clients. You’ll connect with key players in the automotive industry, including dealerships, rental car companies, and automotive auctions, to present our tailored solutions.
Job Title: Sales Representative
Location: Remote, India (initially for 3-6 months, then onsite in Bengaluru).
Job Type: Full-time
Compensation: Starting salary of $800 per month, plus a 10% commission on sales. With experience, there is the potential to earn up to $3,000 per month.
Key Responsibilities
- Actively seek out new business opportunities by making 100 + outbound calls daily, networking, and conducting online research to maintain a strong pipeline of potential clients.
- Build lasting relationships with clients by understanding their needs and providing tailored transport solutions that fit their unique requirements.
- Regularly engage with key clients, including automotive dealerships, dealership groups, wholesalers, auction houses, and insurance companies, to promote Universal Transit’s services.
- Utilize CRM software to document client interactions, monitor sales activities, and keep customer information up to date.
- Collaborate with the sales team to achieve sales targets and expand market reach through effective sales strategies.
- Stay informed about industry trends, competitor offerings, and client needs to uncover potential sales opportunities and adapt approaches as needed.
- Work closely with logistics and marketing teams to ensure smooth client onboarding, satisfaction, and retention.
- Develop a comprehensive understanding of Universal Transit's services, technology, and industry practices to inform and advise clients effectively.
Qualifications and Skills
- Minimum of 2 years of experience in a sales role within the U.S. market, preferably in a service-based industry.
- Fluent English speaker with C2-level proficiency and a clear, near-American accent.
- Proven ability to generate leads, close deals, and establish strong client relationships.
- Excellent verbal and written communication skills focused on clarity and building client rapport.
- Strong organizational and multitasking abilities, capable of managing multiple leads and client requests simultaneously.
- Self-motivated with a proactive approach to achieving sales goals.
- Dedicated to understanding customer needs and providing outstanding service.
- Sales-driven and persistent in pursuing new business opportunities.
- Skilled at building and maintaining professional relationships.
- Analytical and able to assess client needs and recommend suitable solutions.
- Adaptable and flexible in approach to meet diverse client needs and shifting market demands.
Counsel potential students over calls to inform them about a product and help them make better career decisions.
● Answer questions about our products or the company.
● Sell to students as per their requirements.
● Maintain sales reports and update the CRM from time to time.
Job Requirements
● 6 months - 2 years of work experience in sales/business development/voice process.
● Excellent interpersonal and communication skills.
● Strong negotiation skills with a proven ability to seek, create, negotiate, and close a deal.
● Solution-oriented with effective problem-solving skills.
Candidates should have
- Excellent communication skills
- 2-3 years in any industry with operations execution knowledge
- The ability to work in a demanding startup environment
- Interact with vendors, customers, and other stakeholders to ensure all bookings are serviced
- Bring fresh vendors on board and increase fleet capacity
- Liaise between vendors and the Finance team to ensure invoices are cleared on time
- Follow up with the customer immediately after every ride to recover any extras
- A can-do attitude to ensure the entire operation goes smoothly
Hiring, PF, ESIC, medical check-ups, timekeeping policies, government compliances, and arranging interviews.
Our client focuses on providing solutions in terms of data, analytics, decisioning and automation. They focus on providing solutions to the lending lifecycle of financial institutions and their products are designed to focus on systemic fraud prevention, risk management, compliance etc.
Our client is a one stop solution provider, catering to the authentication, verification and diligence needs of various industries including but not limited to, banking, insurance, payments etc.
Headquartered in Mumbai, our client was founded in 2015 by a team of three veteran entrepreneurs, two of whom are chartered accountants and one is a graduate from IIT, Kharagpur. They have been funded by tier 1 investors and have raised $1.1M in funding.
What you will do:
- Developing a deep understanding of our vast data sources on the web and knowing exactly how, when, and which data to scrape, parse and store
- Working closely with Database Administrators to store data in SQL and NoSQL databases
- Developing frameworks for automating and maintaining constant flow of data from multiple sources
- Working independently with little supervision to research and test innovative solutions skills
Desired Candidate Profile
What you need to have:
- Bachelor's/Master's degree in Computer Science/Computer Engineering/Information Technology
- 1 - 5 years of relevant experience
- Strong coding experience in Python (knowledge of Java, JavaScript is a plus)
- Experience with SQL and NoSQL databases
- Experience with multi-processing, multi-threading, and AWS/Azure
- Strong knowledge of scraping frameworks such as the Python libraries Requests and Beautiful Soup, Web Harvest and others
- In-depth knowledge of algorithms and data structures; previous experience with web crawling is a must
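To sketch the kind of scrape-parse-store work described above, here is a minimal parser using the standard-library `html.parser` as a stand-in for Beautiful Soup. The HTML snippet and the tag/class names are hypothetical; a production crawler would fetch pages with Requests and respect robots.txt:

```python
from html.parser import HTMLParser

# Minimal scraping sketch: collect the text of every <a class="title"> link.
# html.parser stands in for Beautiful Soup; the markup below is hypothetical.

class TitleScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_title_link = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # Enter "collecting" state when we see a matching anchor tag.
        if tag == "a" and dict(attrs).get("class") == "title":
            self._in_title_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_title_link = False

    def handle_data(self, data):
        # Store non-empty text found inside a matching anchor.
        if self._in_title_link and data.strip():
            self.titles.append(data.strip())

page = """
<ul>
  <li><a class="title" href="/jobs/1">SQL Engineer</a></li>
  <li><a class="title" href="/jobs/2">Data Scraper</a></li>
  <li><a href="/about">About us</a></li>
</ul>
"""
scraper = TitleScraper()
scraper.feed(page)
print(scraper.titles)  # ['SQL Engineer', 'Data Scraper']
```

With Beautiful Soup the same extraction would collapse to roughly `[a.get_text() for a in soup.select("a.title")]`; the point here is the parse-and-filter structure, which stays the same regardless of library.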
EarlySalary is an instant line of credit for young working Indians. A pioneer in instant loans and salary advances, EarlySalary has disbursed over Rs. 5,000 Cr worth of loans on its platform, currently disburses nearly 100,000 loans a month, and is considered one of the largest digital consumer FinTech lenders in the country. EarlySalary is a Series C funded start-up that has raised multiple rounds of investment from global investors, including Eight Roads (Fidelity) Ventures and Chiratae (IDG) Ventures, and is considered one of the fastest-growing FinTech start-ups in India. With a clear focus on disrupting the finance and banking domain, EarlySalary has built a strong team focused on building technology, a mobile platform, and Risk & AI/ML models for better decisioning and real-time lending. As a full-stack lender we manage both sides of the business, building better technology and products to power our lending platform and building ML models for better risk mitigation and real-time decisions for our customers.
Our ML & Risk Analytics stack and practice is focused on building a better risk scorecard and a high degree of automation. EarlySalary is considered one of the fastest and most automated lenders in the industry.
Job Title : Collections Strategy Manager
Experience : 5 - 7 years
Job location : Pune
Core Responsibilities :
• Strong in process development and policy development.
• Simplify and define processes, and think beyond traditional collections practices to evaluate the right strategy to collect from each account.
• Plug gaps in the collection process to mitigate loss.
• Manage and understand the collection system and optimise productivity.
• Track customers' payment patterns, do behavioural segmentation, compare past repayments, and identify push strategies to enhance customer experience and optimise productivity.
• Bring in new-age technology, advanced analytics and best practices, and keep changing the way successful collections shops operate.
• Automate and create processes that are also cost-effective.
• Make use of new digital channels to engage with customers, create a more engaging experience and scale up connectivity.
• Work closely with the collection team on strategy implementation and execution.
Preference and Experience :
• System development and policy development experience in collections, with an NBFC or banking background
Academic qualifications :
• Graduate /MBA preferred
About us :
BharatPe was co-founded by Ashneer Grover and Shashvat Nakrani in 2018 with the vision to make financial inclusion a reality for Indian merchants.
BharatPe launched India's first UPI interoperable QR code, first ZERO MDR payment acceptance service, and first UPI payment backed merchant cash advance service.
In 2020, post-Covid, BharatPe also launched India's only ZERO MDR card acceptance terminals - BharatSwipe. Currently serving over 50 lakh merchants across 35 cities, the company grew its business 30x in 2019 and is a leader in UPI offline transactions, having processed 5 crore+ UPI transactions a month (annualized TPV of US$ 5+ Bn).
INVESTORS :
- The company's list of marquee investors includes Beenext, Sequoia, SteadView Capital, Ribbit Capital, Coatue Management LLC, Insight Partners, and Amplo. The company has already facilitated disbursement of over Rs. 500 crores to its merchants since launch.
- Total Funding : $283.5M
- Latest Funding : Series D
- Why Join BharatPe?
- Quality of work : Our current tech setup is serving us well for now. But, at our pace of growth, we know we will outgrow it soon. So, we are rebuilding our tech stack from scratch. Be it infrastructure as code, containerization and orchestration, event driven microservices, stream processing, or intrusion kill chain, we have a clear roadmap as well as prior experience of building modern state-of-the-art technology architecture. This is an excellent opportunity to build a world class tech setup from scratch and take it to scale.
- High operating freedom : We believe in ownership and accountability rather than command and control. Our teams are structurally empowered to operate with a lot of freedom, with effective mechanisms built within the teams to help with sound yet quick decision making.
- Experimentation and risk taking is encouraged to achieve ambitious goals.
- Competitive compensation.
- Cash and high growth equity.
- We pay good cash and our equity is growing at a very fast pace.
- We have a good team, proven business, and ample capital. Thus, the downside risk is low and upside potential is high.
- We are accelerating full throttle.
- This is the best stage to join a startup.
- Positive culture.
- We believe new ideas can come from everywhere, so we are always on the lookout. Instead of carrot and stick approach, we appeal to internal motivation to excel.
- We are strong team players - we build on each other's strengths and have each other's back in failures.
Requirements :
- Innovative and self-motivated with passion to develop complex and scalable applications.
- 2-5 years of experience in software development with strong focus on algorithms and data structures.
- Strong coding and design skills with prior experience in developing scalable & high availability applications using Core Java/J2EE, Spring, Hibernate.
- Work experience with Relational databases is required (Primarily MySQL).
- Prior work experience with non-relational databases (primarily Redis, MongoDB) is an added plus.
- Strong Analytical and Problem Solving Skills.
- B Tech/BCA from IIT or BE in computer science from a top REC/NIT.
1. End-to-end responsibility for multiple products and features
2. Coordinating with the engineering department (Tech Team) to deliver functional solutions
3. Suggesting product enhancements to improve user experience
4. Performing quality assurance controls on products
5. Conducting research to identify customer needs and market gaps
6. Prioritizing the implementation of new features and setting specific timelines
7. Liaising with the Marketing department to ensure proper advertisement and positioning of new products
8. Monitoring and reporting on users' reactions after launching
9. Creating support and training documents for internal and external users
Perfect Fit candidate for the above role?
1. Academic background: Engineering only. Non-engineers are not eligible to apply for this role.
2. Familiarity with market research, consumer behavior and marketing techniques.
3. Candidates who have run their own start-up, worked in a start-up, or co-founded another start-up are preferred, although this is not a necessity.
4. Strong time management skills.
5. Good communication skills, along with the ability to collaborate effectively with cross-functional teams.