● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
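The SQL query-authoring skill named above can be sketched with a minimal example. This is only an illustration: the stdlib `sqlite3` module stands in for MySQL/Athena, and the `orders` table and its values are made up.

```python
import sqlite3

# In-memory SQLite database stands in for MySQL/Athena; schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 120.0), (2, "acme", 80.0), (3, "globex", 50.0)],
)

# Typical query authoring: aggregate revenue per customer.
rows = conn.execute(
    "SELECT customer, SUM(amount) AS revenue "
    "FROM orders GROUP BY customer ORDER BY revenue DESC"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

The same GROUP BY/ORDER BY pattern carries over to the engines listed above; only the connection layer changes.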
Experience: 1- 5 Years
Job Location: WFH
No. of Position: Multiple
Qualifications: Ph.D. (must have)
Work Timings: 1:30 PM IST to 10:30 PM IST
Functional Area: Data Science
NextGen Invent is currently searching for a Data Scientist. This role reports directly to the VP of Data Science in the Data Science Practice. The person will work on data science use cases for the enterprise and must have deep expertise in supervised and unsupervised machine learning, modeling, and algorithms, with a strong focus on delivering use cases and solutions at speed and scale to solve business problems.
Job Responsibilities:
- Leverage AI/ML modeling and algorithms to deliver on use cases
- Build modeling solutions at speed and scale to solve business problems
- Develop data science solutions that can be tested and deployed in an Agile delivery model
- Implement and scale-up high-availability models and algorithms for various business and corporate functions
- Investigate and create experimental prototypes that work on specific domains and verticals
- Analyze large, complex data sets to reveal underlying patterns and trends
- Support and enhance existing models to ensure better performance
- Set up and conduct large-scale experiments to test hypotheses and delivery of models
Skills, Knowledge, Experience:
- Must have a Ph.D. in an analytical or technical field (e.g., applied mathematics, computer science)
- Strong knowledge of statistical and machine learning methods
- Hands-on experience building models at speed and scale
- Ability to work in a collaborative, transparent style with cross-functional stakeholders across the organization to lead and deliver results
- Strong skills in oral and written communication
- Ability to lead a high-functioning team and develop and train people
- Must have programming experience in SQL, Python and R
- Experience conceiving, implementing and continually improving machine learning projects
- Strong familiarity with higher level trends in artificial intelligence and open-source platforms
- Experience working with AWS, Azure, or similar cloud platform
- Familiarity with visualization techniques and software
- Healthcare experience is a plus
- Experience with Kafka, chatbots, and blockchain is a plus.
As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions for machine learning-based security/privacy controls
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine
learning
• Help the team self-organize and follow machine learning best practices.
Basic Qualifications
• 4+ years of experience contributing to the architecture and design (architecture, design
patterns, reliability and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead OR leading an engineering team
• 4+ years of professional software development experience in Big Data and Machine
Learning Fields
• Knowledge of common ML frameworks such as TensorFlow and PyTorch
• Experience with cloud provider Machine Learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• 3+ years of experience contributing to the architecture and design (architecture, design
patterns, reliability and scaling) of new and current systems
• Experience in Python
• BS in Computer Science or equivalent
We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.
• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software such as SPSS, and comfort working with large data sets.
• R, Python, SAS, and SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines
Qualifications and Experience:
• Graduate degree in: Statistics/Economics/Econometrics/Computer
Science/Engineering/Mathematics/MBA (with a strong quantitative background) or
equivalent
• Strong track record of work experience in the field of business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases,
and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in Consumer Experience/Healthcare a plus
Job Location: Hyderabad/Bangalore/Chennai/Pune/Nagpur
Notice period: Immediate - 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file
- Development of data analysis and data processing engines using Python
- Good experience in data transformation using Python.
- Experience in Snowflake data loads using Python.
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning is an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Interpret/analyze business requirements & functional specifications
- Good to have: dbt, Fivetran, and AWS knowledge.
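The Snowflake file-load work described above can be sketched without a live connection by building the load statements as strings. `PUT` and `COPY INTO` are standard Snowflake SQL; the stage, table, and file names here are made up, and in practice each statement would be passed to a `cursor.execute()` call from the `snowflake-connector-python` package.

```python
# Hypothetical sketch: build the SQL for staging a local file into Snowflake.
# Names (raw_stage, raw.orders, the CSV path) are illustrative only.
def build_load_statements(stage: str, table: str, local_path: str) -> list[str]:
    return [
        # Upload and compress the local file into an internal stage.
        f"PUT file://{local_path} @{stage} AUTO_COMPRESS=TRUE",
        # Load the staged file into the target table, skipping the CSV header.
        f"COPY INTO {table} FROM @{stage} FILE_FORMAT=(TYPE=CSV SKIP_HEADER=1)",
    ]

stmts = build_load_statements("raw_stage", "raw.orders", "/tmp/orders.csv")
for s in stmts:
    print(s)
```

Separating statement construction from execution like this also makes the load logic easy to unit-test without Snowflake credentials.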
Internshala is a dot com business with the heart of dot org.
We are a technology company on a mission to equip students with relevant skills & practical exposure through internships, fresher jobs, and online trainings. Imagine a world full of freedom and possibilities. A world where you can discover your passion and turn it into your career. A world where your practical skills matter more than your university degree. A world where you do not have to wait till 21 to taste your first work experience (and get a rude shock that it is nothing like you had imagined it to be). A world where you graduate fully assured, fully confident, and fully prepared to stake a claim on your place in the world.
At Internshala, we are making this dream a reality!
👩🏻💻 Your responsibilities would include-
- Designing, implementing, testing, deploying, and maintaining stable, secure, and scalable data engineering solutions and pipelines in support of data and analytics projects, including integrating new sources of data into our central data warehouse, and moving data out to applications and affiliates
- Developing analytical tools and programs that can help in analyzing and organizing raw data
- Evaluating business needs and objectives
- Conducting complex data analysis and reporting on results
- Collaborating with data scientists and architects on several projects
- Maintaining reliability of the system and being on-call for mission-critical systems
- Performing infrastructure cost analysis and optimization
- Generating architecture recommendations and implementing them
- Designing, building, and maintaining data architecture and warehousing using AWS services.
- ETL optimization, designing, coding, and tuning big data processes using Apache Spark, R, Python, C#, and/or similar technologies.
- Disaster recovery planning and implementation when it comes to ETL and data-related services
- Define actionable KPIs and configure monitoring/alerting
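The KPI monitoring/alerting responsibility above can be sketched as a simple threshold check. This is only an illustration: the metric names and thresholds are made up, and a real setup would feed the alerts into a tool like CloudWatch or PagerDuty rather than returning strings.

```python
# Hypothetical KPI threshold check; metric names and minimums are illustrative.
def check_kpis(metrics: dict, thresholds: dict) -> list[str]:
    """Return an alert message for every KPI missing or below its minimum."""
    alerts = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None or value < minimum:
            alerts.append(f"ALERT: {name}={value} below minimum {minimum}")
    return alerts

alerts = check_kpis(
    {"rows_loaded": 9_500, "pipeline_success_rate": 0.99},
    {"rows_loaded": 10_000, "pipeline_success_rate": 0.95},
)
print(alerts)  # one alert: rows_loaded is below its 10,000 minimum
```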
🍒 You will get-
- A chance to build and lead an awesome team working on one of the best recruitment and online training products in the world, impacting millions of lives for the better
- Awesome colleagues & a great work environment
- Loads of autonomy and freedom in your work
💯 You fit the bill if-
- You have the zeal to build something from scratch
- You have experience in Data engineering and infrastructure work for analytical and machine learning processes.
- You have experience in a Linux environment and familiarity with writing shell scripts using Python or any other scripting language
- You have 3-5 years of experience as a Data Engineer or similar software engineering role
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that designs the industry's leading Micro-Engagement Platform. This role involves the design and implementation of big data architectures and frameworks for the industry's leading intelligent workflow automation platform. As a specialist on the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters
● Recommend and implement solutions to improve performance, resource consumption, and
resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production
environments and propose solutions
● Help DevOps team with your deep knowledge in the area of database performance, scaling,
tuning, migration & version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolset and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations on cloud
● Conduct Root-cause analysis of data issues
● Be self-driven, constantly research and suggest latest technologies
The experience you need:
● Engineering degree in Computer Science or related field
● 10+ years of experience working with databases, most of which should have been around NoSQL technologies
● Expertise in implementing and maintaining distributed, Big data pipelines and ETL
processes
● Solid experience in one of the following cloud-native data platforms (AWS Redshift / Google BigQuery / Snowflake)
● Exposure to real-time processing technologies like Apache Kafka and CDC tools (Debezium, Qlik Replicate)
● Strong experience with the Linux operating system
● Solid knowledge of database concepts, MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience with security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely
high level
Nice-to-haves
● Proficient with one or more of Python/Node.js/Java/similar languages
Why you want to Work with Us:
● Great Company Culture. We pride ourselves on having a values-based culture that
is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for
itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture, with
innovative ideas where you and your ideas are valued. We are a start-up and know
that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (a day off to care for yourself, every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, Medical Insurance covering the
dependents, wellness sessions, and health talks at the office
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong
learning. Certification courses are reimbursed. Ushur Community offers wide
resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and
location. We seek to create an environment for all our employees where they can
thrive in both their profession and personal life.
We at Datametica Solutions Private Limited are looking for SQL Engineers who have a passion for the cloud, with knowledge of different on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.
Job Description
Experience : 4-10 years
Location : Pune
Mandatory Skills -
- Strong in ETL/SQL development
- Strong Data Warehousing skills
- Hands-on experience working with Unix/Linux
- Development experience in Enterprise Data warehouse projects
- Good to have experience working with Python, shell scripting
Opportunities -
- Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps Tools, Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume and Kafka
- Would get chance to be part of the enterprise-grade implementation of Cloud and Big Data systems
- Will play an active role in setting up the Modern data platform based on Cloud and Big Data
- Would be part of teams with rich experience in various aspects of distributed systems and computing
About Us!
A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data/workloads/ETL/analytics to the cloud, leveraging automation.
We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum systems, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.
Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.
We have our own products!
Eagle – Data warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican - Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.
Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in our success.
Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy
Check out more about us on our website below!
www.datametica.com
- A desire to explore new technology and break new ground.
- A passion for Open Source technology, continuous learning, and innovation.
- The problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.
Qualifications
- Engineer enterprise-class, large-scale deployments and deliver cloud-based serverless solutions to our customers.
- You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
- Participate in code reviews and provide meaningful feedback to other team members.
- Create technical documentation.
- Develop thorough Unit Tests to ensure code quality.
Skills and Experience
- Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python.
- Experience with event-driven architecture design patterns and practices
- Experience in database design and architecture principles and strong SQL abilities
- Experience with message brokers like Kafka and Kinesis
- Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
- Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
- Good understanding of distributed systems.
- Candidates will be self-motivated and display initiative, ownership, and flexibility.
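The Lambda plus event-driven shape these requirements describe can be sketched in a few lines. The `handler(event, context)` signature is the standard AWS Lambda Python entry point; the event below only mimics an SQS-style trigger with made-up fields, and the handler is invoked locally here rather than by AWS.

```python
import json

# Standard Lambda handler signature; the event shape mimics an SQS-style
# trigger ("Records" with JSON bodies). Field contents are illustrative.
def handler(event, context):
    records = event.get("Records", [])
    processed = [json.loads(r["body"])["id"] for r in records]
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}

# Invoke locally with a fake event; no AWS account is needed for the sketch.
fake_event = {"Records": [{"body": json.dumps({"id": 42})}]}
result = handler(fake_event, None)
print(result["statusCode"], result["body"])
```

Keeping the handler a plain function like this is also what makes the unit testing the posting asks for straightforward.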
Preferred Qualifications
- AWS Lambda function development experience with Java and/or Python.
- Lambda triggers such as SNS, SES, or cron.
- Databricks
- Cloud development experience with AWS services, including:
- IAM
- S3
- EC2
- AWS CLI
- API Gateway
- ECR
- CloudWatch
- Glue
- Kinesis
- DynamoDB
- Java 8 or higher
- ETL data pipeline building
- Data Lake Experience
- Python
- Docker
- MongoDB or similar NoSQL DB.
- Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
- Gradle and/or Maven.
- JUnit
- Git
- Scrum
- Experience with Unix and/or macOS.
- Immediate Joiners
Nice to have:
- AWS / GCP / Azure Certification.
- Cloud development experience with Google Cloud or Azure
- 5+ years of experience in a Data Engineer role
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases such as Cassandra.
- Experience with AWS cloud services: EC2, EMR, Athena
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Advanced SQL knowledge and experience working with relational databases, query authoring (SQL) as well as familiarity with unstructured datasets.
- Deep problem-solving skills to perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Top MNC looking for candidates in Business Analytics (4-8 years of experience).
Requirement :
- Experience in metric development & Business analytics
- High Data Skill Proficiency/Statistical Skills
- Tools: R, SQL, Python, Advanced Excel
- Good verbal communication skills
- Supply Chain domain knowledge
*Job Summary*
Duration: 6-month contract based in Hyderabad
Availability: 1 week/Immediate
Qualification: Graduate/PG from Reputed University
*Key Skills*
R, SQL, Advanced Excel, Python
*Required Experience and Qualifications*
5 to 8 years of Business Analytics experience.