Skills: SQL, dbt, Airflow, Python, Golang, PostgreSQL, ClickHouse, BigQuery, Kafka/RabbitMQ, Data Scraping, System Design, ETLs, REST.
Work Location: Gurgaon/Bangalore/Remote.
We are seeking an experienced Analytics Engineer to join our team. The ideal candidate will have a strong background in Python, SQL, dbt, Airflow, and core database concepts. As an Analytics Engineer, you will be responsible for managing our data infrastructure, which includes building and maintaining robust API integrations with third-party data providers, designing and implementing data pipelines that run on schedule, and working with business, product, and engineering teams to deliver high-quality data products.
Key Responsibilities:
- Design, build, and maintain robust API integrations with third-party data providers.
- Develop and maintain data pipelines using Python, dbt, and Airflow (a sketch of such a pipeline follows this list).
- Collaborate with business, product, and engineering teams to deliver high-quality data products.
- Monitor and optimize data pipelines to ensure they run on schedule and with high performance.
- Stay up-to-date with the latest developments in data infrastructure and analytics technologies.
- Troubleshoot and resolve data pipeline and integration issues as they arise.
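To make the pipeline work concrete, here is a minimal sketch of the kind of scheduled pipeline described above, assuming Airflow 2.4+ and a dbt project available on the worker; the provider API, file paths, and dbt project location are illustrative assumptions, not specifics from this role.

```python
# Minimal Airflow DAG: pull data from a third-party API, then transform with dbt.
# Assumes Airflow 2.4+; the endpoint, paths, and dbt project dir are hypothetical.
import json
from datetime import datetime, timedelta

import requests
from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def extract_provider_data(ds: str, **_) -> None:
    """Fetch one day of data from a (hypothetical) provider API and stage it."""
    resp = requests.get(
        "https://api.example-provider.com/v1/events",
        params={"date": ds},
        timeout=30,
    )
    resp.raise_for_status()
    with open(f"/tmp/provider_events_{ds}.json", "w") as f:
        json.dump(resp.json(), f)


with DAG(
    dag_id="provider_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_provider_data)
    # Transform staged data into analytics models with dbt.
    transform = BashOperator(task_id="dbt_run", bash_command="cd /opt/dbt_project && dbt run")
    extract >> transform
```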
Qualifications:
- Strong experience with SQL, dbt, Airflow, and core database concepts.
- Experience with building and maintaining API integrations with third-party data providers.
- Experience with designing and implementing data pipelines.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Experience with big data warehouses like BigQuery, Redshift, or ClickHouse.
- Experience with data visualization and analysis tools such as Metabase, Tableau or Power BI is a plus.
- Familiarity with cloud platforms such as AWS or GCP is a plus.
If you are passionate about data infrastructure and analytics and are excited about the opportunity to work with a talented team to deliver high-quality data products, we want to hear from you! Apply now to join our team as an Analytics Engineer.
About Recro
Recro is a developer-focused platform that was founded with the aim of seamlessly matching individual expertise with the right opportunities.
We empower talented developers by providing them with relevant experience at fast-growing startups based on technical competencies and aspirations. These opportunities have a significant impact on their career success and help them become their best selves.
On the other hand, startups get instant access to top-quality developers with guaranteed productivity from the very beginning. We help them to scale up/down based on their needs, thus ensuring an efficient and high-yielding workforce.
Developers solve real-time complex problems and get exposure to the uplifting and challenging work culture at start-ups like Flipkart, Dunzo, Swiggy, and Zivame among many others. At Recro, we ensure continuous support from our strong community to accelerate careers for developers and strive to create optimal business outcomes for high-growth startups.
Similar jobs
About the company:
VakilSearch is a technology-driven platform, offering services that cover the legal needs of startups and established businesses. Some of our services include incorporation, government registrations & filings, accounting, documentation and annual compliances. In addition, we offer a wide range of services to individuals, such as property agreements and tax filings. Our mission is to provide one-click access to individuals and businesses for all their legal and professional needs.
You can learn more about us at vakilsearch.com.
About the role:
A successful data analyst needs a combination of technical as well as leadership skills. A background in Mathematics, Statistics, Computer Science, or Information Management can serve as a solid foundation to build your career as a data analyst at VakilSearch.
Why join VakilSearch:
- Unlimited opportunities to grow
- Flat hierarchy
- Encouraging environment to unleash your out-of-the-box thinking
Responsibilities:
- Preparing reports for stakeholders and management, enabling them to make important decisions based on facts and trends.
- Using automated tools to extract data from primary and secondary sources
- Identify and recommend the right product metrics to be analysed and tracked for every feature/problem statement.
- Using statistical tools to identify, analyze, and interpret patterns and trends in complex data sets that can inform diagnosis and prediction
- Working with programmers, engineers, and management heads to identify process improvement opportunities, propose system modifications, and devise data governance strategies.
Required skills:
- Bachelor's degree in computer science from an accredited university or college, or a graduate of a data-science-related program
- 0-2 years of experience in data analysis
About Us:
- RaRa Now is revolutionizing instant delivery for e-commerce in Indonesia through data-driven logistics.
- RaRa Now is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimization technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on one-to-one deliveries, the company has developed proprietary, real-time batching tech to do many-to-many deliveries within a few hours. RaRa is already in partnership with some of the top eCommerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan, and many more.
- We are a distributed team with a company headquartered in Singapore, core operations in Indonesia, and a technology team based out of India.
Future of eCommerce Logistics:
- Data-driven logistics company that is bringing in a same-day delivery revolution in Indonesia
- Revolutionizing delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
About the Role:
- Writing scalable, robust, testable, efficient, and easily maintainable code
- Translating software requirements into stable, working, high performance software
- Playing a key role in architectural and design decisions, building toward an efficient microservices distributed architecture.
- Strong knowledge of Go programming language, paradigms, constructs, and idioms
- Knowledge of language patterns such as Goroutines and Channels
- Experience with the full suite of Go frameworks and tools, including:
- Dependency management tools such as Godep.
- Popular Go web frameworks, such as Echo
- Request routing and API mechanisms
- Ability to write clean and effective Godoc comments
- Familiarity with code versioning tools, primarily Git.
- A basic understanding of computing and Linux systems
- Basic knowledge of Systems Engineering
- Memory management and pointers, specifically in Golang
- Implement Docker for smaller-scale applications that require simpler deployments
- Employ Linux Terminal command structures to allow easy back-end operations for less-expert technical staff
- Structure our user interface with React and ensure REST API access is available for enterprise-grade finance customers on-demand
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.
- Work with the team to define business requirements, design the analytical solution, and deliver it with a focus on the big picture to ensure robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate in and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as an expert in Data Science; build frameworks to develop production-level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relationship while adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting careers in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages: SQL, Python/R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in Statistical Modelling & Algorithms, e.g. hypothesis testing, sample size estimation, A/B testing (a sketch follows this list)
- Knowledge of Mathematical Programming – Linear Programming, Mixed Integer Programming, etc.; Stochastic Modelling – Markov chains, Monte Carlo, Stochastic Simulation, Queuing Models.
- Experience with Optimization Solvers (Gurobi, CPLEX) and algebraic modeling languages (PuLP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity with deploying and monitoring ML models in production, delivering data products to end-users.
- Experience with ML CI/CD pipelines.
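To ground the statistical modelling expectations above, here is a minimal A/B-testing sketch in Python, assuming statsmodels is available; the conversion counts, rates, and thresholds are made-up placeholders.

```python
# Minimal A/B test sketch: sample-size estimate, then a two-proportion z-test.
# The rates and counts below are made-up illustration data, not real results.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize, proportions_ztest

# Sample size needed to detect a lift from 10% to 12% at alpha=0.05, power=0.8.
effect = proportion_effectsize(0.10, 0.12)
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Required sample size per arm: {n_per_arm:.0f}")

# Observed conversions: control vs. treatment (hypothetical numbers).
successes = [520, 590]   # conversions in control, treatment
trials = [5000, 5000]    # users exposed in each arm
stat, p_value = proportions_ztest(successes, trials, alternative="two-sided")
print(f"z = {stat:.2f}, p = {p_value:.4f}")  # reject H0 (equal rates) if p < 0.05
```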
Experience: 9-12 years
Location: Bangalore
Job Description
Strong experience across application migration to the cloud, cloud-native architecture, Amazon EKS, and serverless (Lambda).
Delivery of customer cloud strategies aligned with customers' business objectives, with a focus on cloud migrations and app modernization.
Design of clients' cloud solutions with a focus on AWS.
Undertake short-term delivery engagements related to cloud architecture with a specific focus on AWS and cloud migrations/modernization.
Provide leadership in migration and modernization methodologies and techniques, including mass application movements into the cloud.
Implementation of AWS within large regulated enterprise environments.
Nurture cloud computing expertise internally and externally to drive cloud adoption.
Work with designers and developers in the team to guide them through the solution implementation.
Participate in performing Proof of Concept (POC) for various upcoming technologies to fit business requirements.
• Job Location:- Opp. Sola overbridge, Ahmedabad
• Education:- B.E./ B. Tech./ M.E./ M. Tech/ MCA
• Desired Skills:- .NET 4.5 and later, ASP.NET MVC, C#, JavaScript, jQuery, CSS3, Web API, AngularJS, .NET Core, SQL Server, MySQL, Entity Framework
• Experience:- 1 to 4 years
• Number of Vacancies:- 2
• 5 Days working
• Job Timing:- 10am to 7:30pm
Roles & Responsibilities:-
• Producing clean, efficient code based on specifications.
• Fixing and improving existing software.
• Integrate software components and third-party programs.
• Verify and deploy programs and systems.
• Troubleshoot, debug and upgrade existing software.
• Gather and evaluate user feedback.
• Recommend and execute improvements.
• Create technical documentation for reference and reporting
Job Requirement:-
• Must have good experience in ASP.NET, MVC, C#, JavaScript, jQuery, etc.
• Experience with software design and development in a test-driven environment.
• Knowledge of coding languages (e.g. C#, VB, JavaScript) and frameworks/systems (e.g. AngularJS, Git).
• Experience with databases and Object-Relational Mapping (ORM).
• Ability to learn new languages and technologies.
• Excellent communication skills.
Regards,
Vimal Patel
Job Description - Sr Azure Data Engineer
Roles & Responsibilities:
- Hands-on programming in C#/.NET.
- Develop serverless applications using Azure Function Apps.
- Writing complex SQL Queries, Stored procedures, and Views.
- Creating Data processing pipeline(s).
- Develop / Manage large-scale Data Warehousing and Data processing solutions.
- Provide clean, usable data and make recommendations on data efficiency, quality, and integrity.
Skills
- Should have working experience with C#/.NET.
- Proficient with writing SQL queries, Stored Procedures, and Views
- Should have worked on Azure Cloud Stack.
- Should have working experience in developing serverless code.
- Must have worked on Azure Data Factory (mandatory).
Experience
- 4+ years of relevant experience
Data Engineer – SQL, RDBMS, PySpark/Scala, Python, Hive, Hadoop, Unix
Data engineering services required:
- Build data products and processes alongside the core engineering and technology team;
- Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models (a sketch follows this list);
- Integrate data from a variety of sources, assuring that they adhere to data quality and accessibility standards;
- Modify and improve data engineering processes to handle ever larger, more complex, and more types of data sources and pipelines;
- Use Hadoop architecture and HDFS commands to design and optimize data queries at scale;
- Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases.
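A minimal PySpark sketch of the ingest-cleanse-structure loop described in the list above; the paths, column names, and partitioning scheme are assumptions for illustration only.

```python
# Minimal PySpark sketch: ingest raw CSV, cleanse, and write partitioned Parquet.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest: read raw landing files with a header row.
raw = spark.read.option("header", True).csv("hdfs:///landing/orders/*.csv")

# Cleanse: drop duplicates, normalize types, and filter bad rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# Structure: write partitioned Parquet for downstream analytical models.
clean.write.mode("overwrite").partitionBy("order_date").parquet("hdfs:///curated/orders")
```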
Big data engineering skills:
- Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms;
- Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines;
- Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets;
- Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit-testing, deployment, support, and maintenance;
- Ability to operate with a variety of data engineering tools and technologies
Work Location : Chennai
Experience Level : 5+ years
Package : Up to 18 LPA
Notice Period : Immediate Joiners
It's a full-time opportunity with our client.
Mandatory Skills: Machine Learning, Python, Tableau & SQL
Job Requirements:
- 2+ years of industry experience in predictive modeling, data science, and analysis.
- Experience with ML models including but not limited to Regression, Random Forests, XGBoost (see the sketch after this list).
- Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models.
- Experience writing code in Python and SQL, with documentation for reproducibility.
- Strong proficiency in Tableau.
- Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, and writing SQL.
- Experience writing and speaking about technical concepts to business, technical, and lay audiences and giving data-driven presentations.
- AWS SageMaker experience is a plus, not required.
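For a concrete flavor of the modeling work named above, here is a minimal sketch of training and evaluating an XGBoost classifier in Python; the synthetic dataset and hyperparameters are placeholders, not project specifics.

```python
# Minimal XGBoost sketch: train a classifier and report hold-out AUC.
# Synthetic data stands in for a real dataset.
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = XGBClassifier(
    n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss"
)
model.fit(X_train, y_train)

# Evaluate on the hold-out set using the positive-class probabilities.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```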
Who We Are
Getir is a technology company, a pioneer of the online ultra-fast grocery delivery service business, that has transformed the way in which millions of people across the world consume groceries. We believe in a world where getting everything you need, when you need it, sustainably, is the new normal.
Getir is growing incredibly fast in Europe, but we want to grow globally. From London to Tokyo, Sao Paulo to New York, our global ambitions can only be accomplished with exceptional technology.
If you've got the experience and the ambition to be a Database Administrator (Postgres) at Getir and a founding part of our technology hub in Bangalore, please apply.
What you’ll be doing:
- Work with engineering and other teams to build and maintain our databases and answer big data questions
- Use your past DBA experience and industry wide best practices to scale and optimize the database services.
- Regularly conduct database health monitoring and diagnostics
- Create processes to ensure data integrity and identify potential data errors.
- Document and update procedures and processes.
- Troubleshoot and resolve problems as they arise
What we look for in you:
- A Bachelor's degree in Computer Science, Computer Engineering, Data Science, or another related field.
- 3+ years of experience as a Database Administrator
- Proficiency administering PostgreSQL
- Extensive experience performing general troubleshooting and database maintenance activities, including backup and recovery, capacity planning, and managing user accounts.
- Experience in identifying and documenting risk areas and mitigation strategies for process and procedure activities.
- Experience in managing schemas, indexes, objects, and table partitioning.
- Experience in managing system configurations.
- Experience creating data design models, database architecture, and data repository design.
- Strong understanding of SQL tuning and optimization of query plans (a sketch follows this list).
- Linux shell scripting skills and experience with production Linux environments
- Experience working with software engineers in a highly technical environment.
- Knowledge of 1+ programming languages (e.g. C++, Scala, Java, JavaScript, etc.)
- Excellent verbal and written communication skills.
- Knowledge administering MongoDB (Good to have)
- Knowledge administering Amazon RDS for PostgreSQL & Redshift. (Good to have)
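As a flavor of the SQL tuning and monitoring work mentioned above, here is a minimal Python sketch that inspects a query plan and index usage in PostgreSQL, assuming psycopg2; the DSN, table, and query are hypothetical placeholders.

```python
# Minimal DBA sketch: examine a query plan and index usage in PostgreSQL.
# The DSN, table, and query are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=dba host=localhost")
cur = conn.cursor()

# EXPLAIN (ANALYZE, BUFFERS) runs the query and shows the actual plan and timings.
cur.execute(
    "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s",
    (42,),
)
for (line,) in cur.fetchall():
    print(line)

# pg_stat_user_indexes reveals indexes that are rarely scanned (candidates to drop).
cur.execute("""
    SELECT relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
    ORDER BY idx_scan ASC
    LIMIT 10
""")
for table, index, scans in cur.fetchall():
    print(f"{table}.{index}: {scans} scans")

cur.close()
conn.close()
```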