About RaRa Now:

RaRa Now is revolutionizing instant delivery for e-commerce in Indonesia through data-driven logistics.

RaRa Now makes instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimization technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others focus on one-to-one deliveries, the company has developed proprietary, real-time batching technology to do many-to-many deliveries within a few hours. RaRa already partners with some of the top e-commerce players in Indonesia, such as Blibli, Sayurbox, Kopi Kenangan, and many more.

We are a distributed team, with the company headquartered in Singapore, core operations in Indonesia, and a technology team based out of India.
Future of eCommerce Logistics:
- A data-driven logistics company bringing the same-day delivery revolution to Indonesia
- Revolutionizing delivery as an experience
- Empowering D2C sellers with logistics as the core technology
About the Role :
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Requirements:
- Advanced SQL knowledge and experience working with relational databases, including query authoring and working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Prior experience working with BigQuery, Redshift, or other data warehouses.
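The extraction-transformation-loading responsibilities above can be illustrated with a minimal, self-contained sketch. The table name, column names, and the in-memory CSV are hypothetical stand-ins for real production sources; SQLite stands in for a warehouse:

```python
import csv
import io
import sqlite3

# Hypothetical raw export: order records from an upstream source.
RAW_CSV = """order_id,city,amount
1,Jakarta,120000
2,Bandung,85000
3,Jakarta,40000
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and keep only the columns the warehouse needs."""
    return [(int(r["order_id"]), r["city"], float(r["amount"])) for r in rows]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, city TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE city = 'Jakarta'"
).fetchone()[0]
```

A real pipeline would swap the in-memory pieces for production connectors (S3, RDS, Redshift, etc.), but the three-stage shape stays the same.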
About RaRa Now
RaRa Now is revolutionizing instant and same-day delivery through tech innovation for the safest, fastest, and most affordable delivery service.
companies uncover the 3% of active buyers in their target market. It evaluates
over 100 billion data points and analyzes factors such as buyer journeys, technology
adoption patterns, and other digital footprints to deliver market & sales intelligence.
Its customers have access to the buying patterns and contact information of
more than 17 million companies and 70 million decision makers across the world.
Role – Data Engineer
Responsibilities
Work in collaboration with the application team and integration team to
design, create, and maintain optimal data pipeline architecture and data
structures for Data Lake/Data Warehouse.
Work with stakeholders including the Sales, Product, and Customer Support
teams to assist with data-related technical issues and support their data
analytics needs.
Assemble large, complex data sets from third-party vendors to meet business
requirements.
Identify, design, and implement internal process improvements: automating
manual processes, optimizing data delivery, re-designing infrastructure for
greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and
loading of data from a wide variety of data sources using SQL, Elasticsearch,
MongoDB, and AWS technology.
Streamline existing and introduce enhanced reporting and analysis solutions
that leverage complex data sources derived from multiple internal systems.
Requirements
5+ years of experience in a Data Engineer role.
Proficiency in Linux.
Must have SQL knowledge and experience working with relational databases, including
query authoring, as well as familiarity with databases such as MySQL,
MongoDB, Cassandra, and Athena.
Must have experience with Python/Scala.
Must have experience with Big Data technologies like Apache Spark.
Must have experience with Apache Airflow.
Experience with data pipeline and ETL tools like AWS Glue.
Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
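The Apache Airflow requirement above is about running pipeline tasks in dependency order. A toy sketch of that idea in plain Python (no Airflow required) using the standard library's topological sorter; the task names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG declares upstream dependencies.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
    "report": {"load"},
}

run_log = []

def run(task: str) -> None:
    # A real orchestrator would invoke an operator here; we just record the order.
    run_log.append(task)

# static_order() yields tasks so every dependency runs before its dependents.
for task in TopologicalSorter(dag).static_order():
    run(task)
```

Airflow adds scheduling, retries, and monitoring on top, but the core contract is exactly this: execute tasks in a valid topological order of the DAG.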
About Us:
6sense is a Predictive Intelligence Engine that is reimagining how B2B companies do
sales and marketing. It works with big data at scale, advanced machine learning and
predictive modelling to find buyers and predict what they will purchase, when and
how much.
6sense helps B2B marketing and sales organizations fully understand the complex ABM
buyer journey. By combining intent signals from every channel with the industry’s most
advanced AI predictive capabilities, it is finally possible to predict account demand and
optimize demand generation in an ABM world. Equipped with the power of AI and the
6sense Demand Platform™, marketing and sales professionals can uncover, prioritize,
and engage buyers to drive more revenue.
6sense is seeking a Staff Software Engineer, Data, to become part of a team
designing, developing, and deploying its customer-centric applications.
We’ve more than doubled our revenue in the past five years and completed our Series
E funding of $200M last year, giving us a stable foundation for growth.
Responsibilities:
1. Own critical datasets and data pipelines for product & business, and work
towards direct business goals of increased data coverage, data match rates, data
quality, data freshness
2. Create more value from various datasets with creative solutions, unlock more value
from existing data, and help build a data moat for the company
3. Design, develop, test, deploy, and maintain optimal data pipelines, and assemble
large, complex data sets that meet functional and non-functional business
requirements
4. Improve our current data pipelines, i.e. improve their performance and SLAs,
remove redundancies, and establish a way to test before vs. after rollout
5. Identify, design, and implement process improvements in data flow across
multiple stages, in collaboration with multiple cross-functional teams, e.g.
automating manual processes, optimising data delivery, and hand-off processes
6. Work with cross-functional stakeholders, including the Product, Data Analytics,
and Customer Support teams, to enable data access and related goals
7. Build for security, privacy, scalability, reliability and compliance
8. Mentor and coach other team members on scalable and extensible solutions
design, and best coding standards
9. Help build a team and cultivate innovation by driving cross-collaboration and
execution of projects across multiple teams
Requirements:
8-10+ years of overall work experience as a Data Engineer
Excellent analytical and problem-solving skills
Strong experience with Big Data technologies like Apache Spark. Experience with
Hadoop, Hive, Presto would be a plus
Strong experience in writing complex, optimized SQL queries across large data
sets. Experience with optimizing queries and underlying storage
Experience with Python/Scala
Experience with Apache Airflow or other orchestration tools
Experience with writing Hive / Presto UDFs in Java
Experience working on AWS cloud platform and services.
Experience with Key Value stores or NoSQL databases would be a plus.
Comfortable with Unix / Linux command line
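The Spark experience asked for above centres on the map/reduce programming model: compute per partition, then merge the partial results. A toy word-count sketch of that model in plain Python (no Spark required); the corpus is hypothetical:

```python
from collections import Counter
from functools import reduce
from itertools import chain

# Toy corpus standing in for a large, partitioned data set.
partitions = [
    ["big data", "data pipelines"],
    ["data quality", "big wins"],
]

def map_partition(lines: list[str]) -> Counter:
    """Map: count words within one partition, as a Spark executor would."""
    return Counter(chain.from_iterable(line.split() for line in lines))

def merge(a: Counter, b: Counter) -> Counter:
    """Reduce: combine per-partition counts, as a shuffle/reduce stage would."""
    return a + b

word_counts = reduce(merge, (map_partition(p) for p in partitions))
```

In Spark the same computation would run the map step in parallel across executors and the merge step in a shuffle, but the logic per record is identical.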
Interpersonal Skills:
You can work independently as well as part of a team.
You take ownership of projects and drive them to conclusion.
You’re a good communicator and are capable of not just doing the work, but also
teaching others and explaining the “why” behind complicated technical
decisions.
You aren’t afraid to roll up your sleeves: this role will evolve over time, and we’ll
want you to evolve with it.
About our Client :-
Our Client is a global data and measurement-driven media agency whose mission is to make brands more valuable to the world. Clients include Google, Flipkart, NBCUniversal, L'Oréal and the Financial Times. The agency is more than 2,000 people strong, manages $4.5B in annualized media spend, and deploys campaigns in 121 markets via 22 offices in APAC, EMEA and the Americas.
About the role :-
Accountable for quantifying and measuring the success of our paid media campaigns and for delivering insights that enable us to innovate the work we deliver at MFG. Leading multi-product projects, developing best practices, being the main point of contact for other teams and direct line management for multiple team members.
Some of the things we’d like you to do -
● Build a deep understanding of marketing plans and their objectives to help Account teams (Activation, Planning, etc) build comprehensive measurement, and test & learn plans
● Play an instrumental role in evolving and designing new, innovative measurement tools. Managing the process through to delivery and take ownership of global roll out
● Recruit, manage and mentor analytical resource(s), ensuring the efficient flow of work through the team, the timely delivery of high-quality outputs and their continuing development as professionals
● Lead the creation of clear, robust and thought-provoking campaign reviews and insights
● Work with Account teams (Activation, Planning, etc) to help define the correct questions to understand correct metrics for quantifying campaign performance
● Help deliver “best in class” analytical capabilities across the agency with the wider Analytics team, including the use of new methods, techniques, tools and systems
● Develop innovative marketing campaigns and assist clients to define objectives
● Develop deep understanding of marketing platform testing and targeting abilities, and act in a consultative capacity in their implementation
● Provide hands-on leadership, mentorship, and coaching in the expert delivery of data strategies, AdTech solutions, audiences solutions and data management solutions to our clients
● Leading stakeholder management on certain areas of the client portfolio
● Coordination and communication with 3rd party vendors to critically assess new/bespoke measurement solutions. Includes development and management of contracts and SOWs.
A bit about yourself -
● 8+ years of experience in a data & insight role; practical experience on how analytical techniques/models are used in marketing. Previous agency, media, or consultancy background is desirable.
● A proven track record of working with a diverse array of clients to solve complex problems and deliver demonstrable business success, including (but not limited to) the development of compelling and sophisticated data strategies and AdTech/martech strategies to enable marketing objectives.
● Ideally you have worked with Ad Platforms, DMPs, CDPs, Clean Rooms, Measurement Platforms, Business Intelligence Tools, Data Warehousing and Big Data Solutions to some degree
● 3+ years of management experience and ability to delegate effectively
● Proficiency with systems such as SQL, Social Analytics tools, Python, and ‘R’
● Understanding of measurement for both Direct Response and Brand Awareness campaigns is desired
● Excellent at building and presenting data in a visually engaging and insightful manner that cuts through the noise
● Strong organizational and project management skills including team resourcing
● Strong understanding of what data points can be collected and analyzed in a digital campaign, and how each data point should be analyzed
● Established and professional communication, presentation, and motivational skills
- Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
- Total experience of 5+ years with at least 2 years in managing data quality for high scale data platforms.
- Good knowledge of SQL querying.
- Strong skill in analysing data and uncovering patterns using SQL or Python.
- Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation, and data loading (the ETL process).
- Strong background in automation and building automated testing frameworks for data ingestion and transformation jobs.
- Experience in big data technologies a big plus.
- Experience in machine learning, especially in data quality applications a big plus.
- Experience in building data quality automation frameworks a big plus.
- Strong experience working with an Agile development team with rapid iterations.
- Very strong verbal and written communication, and presentation skills.
- Ability to quickly understand business rules.
- Ability to work well with others in a geographically distributed team.
- Keen observation skills to analyse data; highly detail-oriented.
- Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
- Able to identify stakeholders, build relationships, and influence others to get work done.
- Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
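The data-quality automation described in the requirements above usually reduces to programmatic checks over incoming rows: null rates, key uniqueness, value ranges. A minimal sketch with hypothetical fields and rules:

```python
# Hypothetical ingested rows; None marks a missing value.
rows = [
    {"id": 1, "email": "a@example.com", "amount": 10.0},
    {"id": 2, "email": None, "amount": 15.5},
    {"id": 2, "email": "c@example.com", "amount": -3.0},
]

def quality_report(rows: list[dict]) -> dict:
    """Run simple checks: null rate, key uniqueness, and a value-range rule."""
    n = len(rows)
    ids = [r["id"] for r in rows]
    return {
        "row_count": n,
        # Share of rows missing an email.
        "email_null_rate": sum(r["email"] is None for r in rows) / n,
        # How many rows reuse an id that should be unique.
        "duplicate_ids": n - len(set(ids)),
        # Amounts are expected to be non-negative.
        "negative_amounts": sum(r["amount"] < 0 for r in rows),
    }

report = quality_report(rows)
```

A production framework would attach such checks to every ingestion and transformation job and fail or alert on threshold breaches, rather than just returning a dict.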
- Expert software implementation and automated testing
- Promoting development standards, code reviews, mentoring, knowledge sharing
- Improving our Agile methodology maturity
- Product and feature design, scrum story writing
- Build, release, and deployment automation
- Product support & troubleshooting
Who we have in mind:
- Demonstrated experience as a Java developer
- Should have a deep understanding of Enterprise/Distributed Architecture patterns and should be able to demonstrate the relevant usage of the same
- Turn high-level project requirements into application-level architecture and collaborate with the team members to implement the solution
- Strong experience and knowledge in Spring boot framework and microservice architecture
- Experience in working with Apache Spark
- Solid demonstrated object-oriented software development experience with Java, SQL, Maven, relational/NoSQL databases and testing frameworks
- Strong working experience with developing RESTful services
- Should have experience working on Application frameworks such as Spring, Spring Boot, AOP
- Exposure to tools – Jira, Bamboo, Git, Confluence would be an added advantage
- Excellent grasp of the current technology landscape, trends and emerging technologies
Job Role : Associate Manager (Database Development)
Key Responsibilities:
- Optimizing the performance of stored procedures and SQL queries to deliver large amounts of data in under a few seconds.
- Designing and developing numerous complex queries, views, functions, and stored procedures to work seamlessly with the Application/Development teams’ data needs.
- Responsible for providing solutions to all data-related needs to support existing and new applications.
- Creating scalable structures to cater to large user bases and manage high workloads
- Involved in every step of a project, from requirement gathering to implementation and maintenance.
- Developing custom stored procedures and packages to support new enhancement needs.
- Working with multiple teams to design, develop and deliver early warning systems.
- Reviewing query performance and optimizing code
- Writing queries used for front-end applications
- Designing and coding database tables to store the application data
- Data modelling to visualize database structure
- Working with application developers to create optimized queries
- Maintaining database performance by troubleshooting problems.
- Accomplishing platform upgrades and improvements by supervising system programming.
- Securing database by developing policies, procedures, and controls.
- Designing and managing deep statistical systems.
Desired Skills and Experience :
- 7+ years of experience in database development
- Minimum 4+ years of experience in PostgreSQL is a must
- Experience and in-depth knowledge in PL/SQL
- Ability to come up with multiple possible ways of solving a problem and deciding on the most optimal approach for implementation that suits the work case the most
- Have knowledge of Database Administration and have the ability and experience of using the CLI tools for administration
- Experience in Big Data technologies is an added advantage
- Secondary platforms: MS SQL 2005/2008, Oracle, MySQL
- Ability to take ownership of tasks and the flexibility to work individually or in a team
- Ability to communicate with teams and clients across time zones and global regions
- Good communication skills and self-motivation
- Ability to work under pressure
- Knowledge of NoSQL and Cloud Architecture will be an advantage
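The query-optimization work described above often comes down to checking whether the planner can use an index instead of a full scan. A sketch with SQLite standing in for PostgreSQL (the same idea applies to Postgres' EXPLAIN; the table and index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 100, float(i)) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return the planner's description of how the query will run."""
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 42"
before = plan(query)  # without an index: a full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # with the index: an indexed search
```

Reading the plan before and after adding the index is the basic loop of tuning stored procedures and queries, whatever the engine.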
- Hands-on programming expertise in Java OR Python
- Strong production experience with Spark (Minimum of 1-2 years)
- Experience building data pipelines using Big Data technologies (Hadoop, Spark, Kafka, etc.) on large-scale unstructured data sets
- Working experience and good understanding of public cloud environments (AWS OR Azure OR Google Cloud)
- Experience with IAM policy and role management is a plus
Location: Chennai- Guindy Industrial Estate
Duration: Full time role
Company: Mobile Programming (https://www.mobileprogramming.com/)
Client Name: Samsung
We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be
responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing
data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline
builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts and data
scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout
ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple
teams, systems and products.
Responsibilities for Data Engineer
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes,
optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data
from a wide variety of data sources using SQL and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer
acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with
data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data scientist team members that assist them in building and
optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
Experience building and optimizing big data ETL pipelines, architectures and data sets.
Advanced SQL knowledge and experience working with relational databases, including query
authoring and working familiarity with a variety of databases.
Experience performing root cause analysis on internal and external data and processes to
answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency and
workload management.
A successful history of manipulating, processing and extracting value from large disconnected
datasets.
Working knowledge of message queuing, stream processing and highly scalable ‘big data’ data
stores.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 3-6 years of experience in a Data Engineer role, who has
attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
Experience with big data tools: Spark, Kafka, HBase, Hive etc.
Experience with relational SQL and NoSQL databases
Experience with AWS cloud services: EC2, EMR, RDS, Redshift
Experience with stream-processing systems: Storm, Spark-Streaming, etc.
Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.
Skills: Big Data, AWS, Hive, Spark, Python, SQL
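Stream-processing systems like the ones listed (Storm, Spark Streaming) group unbounded event streams into windows before aggregating. A toy tumbling-window count in plain Python; the event stream and timestamps (in epoch seconds) are hypothetical:

```python
from collections import defaultdict

# Hypothetical event stream: (epoch_seconds, event_type) pairs.
events = [
    (0, "click"), (3, "click"), (7, "view"),
    (11, "click"), (14, "view"), (21, "click"),
]

def tumbling_window_counts(stream, window_size: int = 10) -> dict:
    """Assign each event to a fixed, non-overlapping window and count per window."""
    counts: dict = defaultdict(int)
    for timestamp, _event_type in stream:
        # Each window covers [window_start, window_start + window_size).
        window_start = (timestamp // window_size) * window_size
        counts[window_start] += 1
    return dict(counts)

window_counts = tumbling_window_counts(events)
```

Real stream processors add late-event handling, watermarks, and fault tolerance, but windowed aggregation like this is the core operation they run continuously.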