Senior Business Analyst

at Porter.in

Posted by Satyajit Mittra
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Full-time
Skills
Python
SQL
Data Visualization
Data modeling
Responsibilities
This role supports initiatives within a business charter and its accompanying products by aligning with the AM vision, understanding tactical requirements, and executing ideas. It is largely an individual contributor (IC) role, with guidance and mentoring available, plus ownership and strong growth opportunities.
- Explore, analyze, and visualize our unique data to provide insight to stakeholders
- Design reports and dashboards to monitor metrics and add value to business
- Identify pain points in existing processes and suggest improvements backed by data
- Use existing / build new frameworks to develop & maintain ETL processes
- Design A/B tests to validate hypotheses and help design better products (see the sketch after this list)
- Build models to support tech products, when required
- Overall - help us build an awesome product and provide an amazing user experience
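
For illustration, here is a minimal sketch of the kind of A/B-test readout this involves - a two-proportion z-test in Python (scipy is assumed available; the conversion numbers are hypothetical):

    import math
    from scipy.stats import norm

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        """Two-sided z-test for a difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0: p_a == p_b
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
        return z, p_value

    # Hypothetical experiment: control converts at 4.0%, variant at 4.6%
    z, p = two_proportion_ztest(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # ship the variant only if p < alpha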

Analytics Stack

- Analytics: Python / R / SQL + Excel / PPT, Colab notebooks
- Database: PostgreSQL, Amazon Redshift, DynamoDB, Aerospike
- Warehouse: Amazon Redshift
- ETL: lots of Python + custom-made frameworks (a minimal sketch follows this list)
- Business Intelligence / Visualization: Metabase + Python/R libraries (location data) + Dash
- Deployment pipeline: Docker, Jenkins, AWS Lambda
- Collaboration: Git, Dropbox Paper
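
To make the "lots of Python" ETL item concrete, here is a minimal sketch of one such step using pandas and SQLAlchemy (the connection strings, table names, and columns are invented for illustration):

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical connections: transactional Postgres -> Redshift warehouse
    source = create_engine("postgresql+psycopg2://user:pass@prod-db/porter")
    warehouse = create_engine("postgresql+psycopg2://user:pass@redshift-cluster/analytics")

    # Extract: pull yesterday's orders from the transactional store
    orders = pd.read_sql(
        "SELECT id, city, fare FROM orders WHERE created_at::date = CURRENT_DATE - 1",
        source,
    )

    # Transform: one row per city with order counts and average fare
    daily = (
        orders.groupby("city")
        .agg(order_count=("id", "count"), avg_fare=("fare", "mean"))
        .reset_index()
    )

    # Load: append into a reporting table the dashboards read from
    daily.to_sql("daily_city_metrics", warehouse, if_exists="append", index=False)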

About Porter.in

Using the Porter app, you can get a Tata Ace at your doorstep in 30 minutes. Porter offers faster, more economical, hassle-free mini-truck booking, with a choice of over 3,000 trained drivers and a wide range of dedicated vehicles.


Porter is the most popular and comprehensive intra-city logistics marketplace in India. Porter's goal is to improve the quality of life for its more than 150,000 driver-partners by allowing them to earn a steady income while maintaining their independence in the $40 billion intra-city logistics market. The company now provides its services to more than 5 million customers throughout 12 cities in India, including Delhi, Mumbai, Hyderabad, Bangalore, Chennai, Pune, Ahmedabad, Kolkata, Surat, Lucknow, Jaipur, and Coimbatore.


Porter offers companies assistance with both the "last mile" and "first mile" of their delivery processes. It also provides a wide range of support services, including real-time visibility, on-demand transportation, and supply chain management, helping businesses increase their operational efficiency and reduce their logistics costs. Porter's solutions are easy to use and technologically modern, covering a full range of logistics needs.



Founded
2014
Type
Services
Size
100-1000 employees
Stage
Profitable

Similar jobs

Machine Learning Engineer

at a contact center software company that leverages AI to improve customer experience

Agency job
via Qrata
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Amazon Web Services (AWS)
Python
Java
Big Data
C#
TensorFlow
Bengaluru (Bangalore)
4 - 8 yrs
₹17L - ₹40L / yr
Role: Machine Learning Engineer

As a machine learning engineer on the team, you will
• Help science and product teams innovate in developing and improving end-to-end solutions for machine-learning-based security/privacy controls
• Partner with scientists to brainstorm and create new ways to collect/curate data
• Design and build infrastructure critical to solving problems in privacy-preserving machine learning
• Help the team self-organize and follow machine learning best practices.

Basic Qualifications

• 4+ years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and current systems
• 4+ years of programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
• 4+ years of professional software development experience
• 4+ years of experience as a mentor, tech lead, or leading an engineering team
• 4+ years of professional software development experience in the Big Data and Machine Learning fields
• Knowledge of common ML frameworks such as TensorFlow, PyTorch
• Experience with cloud-provider machine learning tools such as AWS SageMaker
• Programming experience with at least two modern languages such as Python, Java, C++, or C#, including object-oriented design
• Experience in Python
• BS in Computer Science or equivalent
Job posted by
Rayal Rajan

Data Engineer (Azure, Logic Apps, PowerApps)

at Onepoint IT Consulting Pvt Ltd

Founded 2008  •  Services  •  20-100 employees  •  Profitable
ADF
SQL
Data Warehouse (DWH)
Informatica
ETL
Logic Apps
PowerApps
Pune
1 - 4 yrs
₹7L - ₹15L / yr
This developer will focus on developing new features, implementing unit tests, and helping develop forward-thinking development strategies.

Is this you?

 

  • I am passionate about developing software and scripts to maintain complex systems.
  • I have the vision and talent to contribute in emerging areas such as cloud technologies.
  • I love logic and solving puzzles.
  • I thrive working with a diverse, highly skilled team based in the UK and India.
  • I am fluent in English, both written and spoken.

 

Qualifications

 

  • Experience in Azure (ADF, Logic Apps), Azure Functions
  • Experience with SQL and databases
  • Knowledge of data warehousing and data modeling
  • Knowledge of Snowflake
  • Knowledge of or experience in low-code applications (e.g., PowerApps)
  • Knowledge of design patterns and the Software Development Life Cycle

 

Competencies

 

  • Excellent written and verbal communication skills in English and Hindi.
  • Excellent interpersonal skills to collaborate with various stakeholders.
  • Good reasoning and logical ability.
  • Identifies the right questions and understands the big picture.
  • A constant learner who enjoys new challenges.
  • Self-starter with excellent time management skills.

 

Benefits

Excellent work-life balance, including flexible working hours within core working hours.

  • Active encouragement to take part in decision-making at all levels.
  • Assigned mentor for self-development.
  • 18 days of annual leave.
  • Medical insurance and Provident Fund.
  • Flexible work culture.
Job posted by
Maithili Shetty

Senior Data Scientist

at RandomTrees

Founded 2017  •  Products & Services  •  100-1000 employees  •  Profitable
Data Science
Machine Learning (ML)
Linear regression
Natural Language Processing (NLP)
Deep Learning
Predictive analytics
Python
Data Visualization
classification
anomaly detection
Remote only
5 - 10 yrs
₹15L - ₹30L / yr

Work timings: 4:00 PM to 11:30 PM
Full-time, work from home
6+ years in data science
Strong experience in ML regression, classification, anomaly detection, NLP, deep learning, predictive analytics, and predictive maintenance, using Python; data visualization is an added advantage
Job posted by
HariPrasad Jonnalagadda

Data Scientist

at TVS Credit Services Ltd

Founded 2009  •  Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
SQL server
Linear regression
Predictive modelling
Chennai
4 - 10 yrs
₹10L - ₹20L / yr
Job Description:
  • Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction in regard to analytics
  • Organize and analyze large, diverse data sets across multiple platforms
  • Identify key insights and leverage them to inform and influence product strategy
  • Interact with vendors or partners in a technical capacity on scope, approach, and deliverables
  • Develop proofs of concept to prove or disprove the validity of a concept
  • Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, and efficient reporting on those requirements
  • Design and implement advanced statistical testing for customized problem solving
  • Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations

Desired Candidate Profile:
  • MTech / BE / BTech / MSc in CS, Stats, Maths, Operations Research, Econometrics, or any other quantitative field
  • Experience in using Python, R, SAS
  • Experience in working with large data sets and big data systems (SQL, Hadoop, Hive, etc.)
  • Keen aptitude for large-scale data analysis with a passion for identifying key insights from data
  • Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with experience in any of the following:
  • Unsecured loans & SME loans analytics (cards, installment loans) - risk-based pricing analytics
  • Differential pricing / selection analytics (retail, airlines / travel, etc.)
  • Digital product companies or digital eCommerce, with a product mindset
  • Fraud / risk at banks, NBFCs / fintechs / credit bureaus
  • Online media, with knowledge of media, online ads & sales (agencies) - knowledge of DMP, DFP, Adobe/Omniture tools, Cloud
  • Consumer durable loans lending companies (experience in credit cards, personal loans - optional)
  • Tractor loans lending companies (experience in farm)
  • Recovery and collections analytics
  • Marketing analytics with digital marketing, market mix modelling, and advertising technology
Job posted by
Vinodhkumar Panneerselvam

Data Engineer

at Lowe’s

Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any cloud big data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud big data components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NoSQL, RDBMS, or any cloud big data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka, or equivalent cloud big data components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Job posted by
Sanjay Biswakarma

Data Analyst

at Srijan Technologies

Founded 2002  •  Products & Services  •  100-1000 employees  •  Profitable
Data Analytics
Data modeling
Python
PySpark
ETL
SQL
Azure
Amazon Web Services (AWS)
Remote only
3 - 8 yrs
₹5L - ₹12L / yr

Role Description:

  • You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
  • You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
  • You will run data engineering pipelines, link raw client data with the data model, conduct data assessments, perform data quality checks, and transform data using ETL tools.
  • You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and/or PySpark (a minimal sketch follows this list).
  • In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
  • You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
  • Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on-time and perform quality checks wherever required.
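
A minimal sketch of such a validate-transform-load script in PySpark (the path, columns, and rules here are hypothetical, for illustration only):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("client_data_load").getOrCreate()

    # Extract: a raw client extract (hypothetical S3 path and schema)
    raw = spark.read.option("header", True).csv("s3://client-bucket/raw/accounts.csv")

    # Validate: a basic data quality check before transforming
    null_ids = raw.filter(F.col("account_id").isNull()).count()
    if null_ids > 0:
        raise ValueError(f"{null_ids} rows have a null account_id")

    # Transform: conform raw fields to the target data model
    conformed = (
        raw.withColumn("opened_on", F.to_date("opened_on", "yyyy-MM-dd"))
        .withColumn("balance", F.col("balance").cast("double"))
        .dropDuplicates(["account_id"])
    )

    # Load: write into the modeled layer for downstream delivery
    conformed.write.mode("overwrite").parquet("s3://client-bucket/modeled/accounts/")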

Job Requirement:

  • Bachelor’s degree in Engineering or Computer Science; Master’s degree is a plus
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark
  • Ability to conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
  • Experience deploying ETL / data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g., database structure, entity relationships, UIDs, etc.) and software development principles (e.g., modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Strong problem-solving, requirements-gathering, and leadership skills
  • Track record of completing projects successfully on time, within budget, and as per scope

Job posted by
PriyaSaini

Machine Learning Scientist

at Turing

Founded 2018  •  Product  •  100-500 employees  •  Raised funding
Machine Learning (ML)
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Decision trees
Deep Learning
Neural networks
TensorFlow
Keras
Python
PyTorch
Remote only
5 - 10 yrs
₹10L - ₹20L / yr

About Turing:

Turing enables U.S. companies to hire the world’s best remote software engineers. 100+ companies including those backed by Sequoia, Andreessen, Google Ventures, Benchmark, Founders Fund, Kleiner, Lightspeed, and Bessemer have hired Turing engineers. For more than 180,000 engineers across 140 countries, we are the preferred platform for finding remote U.S. software engineering roles. We offer a wide range of full-time remote opportunities for full-stack, backend, frontend, DevOps, mobile, and AI/ML engineers.

We are growing fast (our revenue 15x’d in the past 12 months and is accelerating), and we have raised $14M in seed funding (one of the largest in Silicon Valley) from:

  • Facebook’s 1st CTO and Quora’s Co-Founder (Adam D’Angelo)
  • Executives from Google, Facebook, Square, Amazon, and Twitter
  • Foundation Capital (investors in Uber, Netflix, Chegg, Lending Club, etc.)
  • Cyan Banister
  • Founder of Upwork (Beerud Sheth)

We also raised a much larger round of funding in October 2020 that we will be publicly announcing over the coming month. 

Turing is led by successful repeat founders Jonathan Siddharth and Vijay Krishnan, whose last A.I. company leveraged elite remote talent and had a successful acquisition. (Techcrunch story). Turing’s leadership team is composed of ex-Engineering and Sales leadership from Facebook, Google, Uber, and Capgemini.


About the role:

Software developers from all over the world have taken 200,000+ tests and interviews on Turing. Turing has also recommended thousands of developers to its customers and received customer feedback, in the form of interview pass/fail data and data on the success of collaborations with U.S. customers. This generates a massive proprietary dataset with a rich feature set comprising resume and test/interview features, with labels in the form of actual customer feedback. Continued rapid growth in our business creates an ever-increasing data advantage for us.

 

We are looking for a Machine Learning Scientist who can help solve a whole range of exciting and valuable machine learning problems at Turing. Turing collects many valuable heterogeneous signals about software developers, including their resume, their GitHub profile and associated code, fine-grained signals from Turing’s own screening tests and interviews (spanning areas such as Computer Science fundamentals, project ownership and collaboration, communication skills, proactivity, and tech stack skills), and their history of successful collaboration with different companies on Turing.

A machine learning scientist at Turing will help create deep developer profiles that are a good representation of a developer’s strengths and weaknesses as it relates to their probability of getting successfully matched to one of Turing’s partner companies and having a fruitful long-term collaboration. The ML scientist will build models that are able to rank developers for different jobs based on their probability of success at the job. 

     
You will also help make Turing’s tests more efficient by assessing their ability to predict the probability of a successful match of a developer with at least one company. The prior probability of a registered developer getting matched with a customer is about 1%. We want our tests to adaptively reduce perplexity as steeply as possible and move this probability estimate rapidly toward either 0% or 100%; in other words, to maximize expected information gain per unit time.
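
To make that concrete, here is a small illustrative sketch of the underlying arithmetic - a Bayes update of the ~1% prior and the expected information gain from one pass/fail signal (the pass rates are invented for illustration):

    import math

    def entropy(p):
        """Binary entropy in bits; 0 when the match outcome is certain."""
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def update(prior, p_signal_given_match, p_signal_given_nomatch):
        """Posterior P(match) after observing the signal."""
        num = p_signal_given_match * prior
        return num / (num + p_signal_given_nomatch * (1 - prior))

    prior = 0.01              # ~1% of registered developers get matched
    tpr, fpr = 0.90, 0.20     # hypothetical pass rates for matches vs. non-matches

    p_pass = tpr * prior + fpr * (1 - prior)
    post_pass = update(prior, tpr, fpr)          # posterior if the developer passes
    post_fail = update(prior, 1 - tpr, 1 - fpr)  # posterior if they fail

    # Expected information gain from administering this one test, in bits
    gain = entropy(prior) - (p_pass * entropy(post_pass) + (1 - p_pass) * entropy(post_fail))
    print(f"pass -> {post_pass:.3f}, fail -> {post_fail:.4f}, gain = {gain:.4f} bits")

On this view, a test is efficient when it delivers many such bits per unit of developer time.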

As an ML Scientist on the team, you will have a unique opportunity to make an impact by advancing ML models and systems, as well as uncovering new opportunities to apply machine learning concepts to Turing product(s).

This role will report directly to Turing’s founder and CTO, Vijay Krishnan.


Responsibilities:

  • Enhance our existing machine learning systems using your core coding skills and ML knowledge.
  • Take end-to-end ownership of machine learning systems - from data pipelines, feature engineering, candidate extraction, and model training to integration into our production systems.
  • Utilize state-of-the-art ML modeling techniques to predict user interactions and their direct impact on the company’s top-line metrics.
  • Design features and build large-scale recommendation systems to improve targeting and engagement.
  • Identify new opportunities to apply machine learning to different parts of our product(s) to drive value for our customers.

 

 

Minimum Requirements:

 

  • BS, MS, or Ph.D. in Computer Science or a relevant technical field (AI/ML preferred).
  • Extensive experience building scalable machine learning systems and data-driven products, working with cross-functional teams.
  • Expertise in machine learning fundamentals applicable to search - learning to rank, deep learning, tree-based models, recommendation systems, relevance, and data mining - and an understanding of NLP approaches like Word2Vec or BERT.
  • 2+ years of experience applying machine learning methods in settings like recommender systems, search, user modeling, graph representation learning, and natural language processing.
  • Strong understanding of neural networks/deep learning, feature engineering, feature selection, and optimization algorithms. Proven ability to dig deep into practical problems and choose the right ML method to solve them.
  • Strong programming skills in Python and fluency in data manipulation (SQL, Spark, Pandas) and machine learning (scikit-learn, XGBoost, Keras/TensorFlow) tools.
  • Good understanding of the mathematical foundations of machine learning algorithms.
  • Ability to be available for meetings and communication during Turing's "coordination hours" (Mon - Fri: 8 am to 12 pm PST).

 

Other Nice-to-have Requirements:

 

  • First-author publications in ICML, ICLR, NeurIPS, KDD, SIGIR, and related conferences/journals.
  • Strong performance in Kaggle competitions.
  • 5+ years of industry experience, or a Ph.D. with 3+ years of industry experience, in applied machine learning on similar problems, e.g., ranking, recommendation, ads, etc.
  • Strong communication skills.
  • Experience leading large-scale multi-engineering projects.
  • Flexible, positive team player with outstanding interpersonal skills.
Job posted by
Misbah Munir

Sr. Informatica Developer

at a 15-year-old US-based product company

Informatica
informatica developer
Informatica MDM
Data integration
Informatica Data Quality
Data mapping
Shell Scripting
AWS Lambda
Amazon S3
SQL
Amazon Web Services (AWS)
Unix
Chennai, Bengaluru (Bangalore), Hyderabad
4 - 10 yrs
₹9L - ₹20L / yr
  • Should have good hands-on experience in Informatica MDM Customer 360, data integration (ETL) using PowerCenter, and Data Quality.
  • Must have strong skills in data analysis, data mapping for ETL processes, and data modeling.
  • Experience with the SIF framework, including real-time integration.
  • Should have experience in building C360 Insights using Informatica.
  • Should have good experience in creating performant designs using Mapplets, Mappings, and Workflows for Data Quality (cleansing) and ETL.
  • Should have experience in building different data warehouse architectures like Enterprise, Federated, and Multi-Tier.
  • Should have experience in configuring Informatica Data Director for the data governance of users, IT managers, and data stewards.
  • Should have good knowledge of developing complex PL/SQL queries.
  • Should have working experience in UNIX and shell scripting to run Informatica workflows and control the ETL flow.
  • Should know about Informatica server installation and have knowledge of the Administration console.
  • Working experience with the Developer tool along with Administration is added knowledge.
  • Working experience in Amazon Web Services (AWS) is an added advantage, particularly AWS S3, Data Pipeline, Lambda, Kinesis, DynamoDB, and EMR.
  • Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment.
Job posted by
Ramya D

Data Engineer

at a leading Japan-based company

Agency job
via Bleuming Technology
Big Data
Amazon Web Services (AWS)
Java
Python
MySQL
DevOps
Hadoop
Bengaluru (Bangalore)
3 - 10 yrs
₹0L - ₹20L / yr
We are looking for a data engineer with AWS cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service.
The candidate:
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and good understanding of AWS Cloud; advanced experience with IAM policy and role management
3. Infrastructure operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: experience with DevOps automation - orchestration/configuration management and CI/CD tools (Jenkins)
7. Version control: working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog, and Elasticsearch
10. Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB), and high-availability architecture
11. Security: experience implementing role-based security, including AD integration, security policies, and auditing, in a Linux/Hadoop/AWS environment. Familiar with penetration testing and scan tools for remediation of security vulnerabilities.
12. Demonstrated success in learning new technologies quickly
WHAT WILL BE THE ROLES AND RESPONSIBILITIES?
1. Create procedures/run books for operational and security aspects of AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating programs for training and onboarding for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Provide issue review and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil any ad-hoc data or report request queries from different functional groups
Job posted by
Ameem Iqubal

Data Engineer

at Rely

Founded 2018  •  Product  •  20-100 employees  •  Raised funding
Python
Hadoop
Spark
Amazon Web Services (AWS)
Big Data
Amazon EMR
RabbitMQ
icon
Bengaluru (Bangalore)
icon
2 - 10 yrs
icon
₹8L - ₹35L / yr

Intro

Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.


What will you do
The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.

  • Optimize and automate ingestion processes for a variety of data sources such as clickstream, transactional, and many other sources.
  • Create and maintain optimal data pipeline architecture and ETL processes.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Develop data pipelines and infrastructure to support real-time decisions.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.


What will you need
  • 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
  • Experience dealing with data at large scale
  • Proficiency in writing and debugging complex SQL queries
  • Experience working with AWS big data tools
  • Ability to lead projects and implement best data practices and technology

Data Pipelining

  • Strong command of building & optimizing data pipelines, architectures, and data sets
  • Strong command of relational SQL & NoSQL databases, including Postgres
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal sketch follows this list)
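
As a minimal sketch of what a pipeline looks like in one of these tools, here is a two-task Airflow DAG (the DAG id, schedule, and task bodies are hypothetical):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw events from the source")  # placeholder body

    def transform():
        print("clean and aggregate the events")   # placeholder body

    with DAG(
        dag_id="daily_events",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t1 >> t2  # extract must finish before transform starts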

Big Data: Strong experience in big data tools & applications

  • Tools: Hadoop, Spark, HDFS, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
  • Message queuing: RabbitMQ, etc.

Software Development & Debugging

  • Strong experience in object-oriented programming / object function scripting languages: Python, Java, C++, Scala, etc.
  • Strong grasp of data structures & algorithms

What would be a bonus

  • Prior experience working in a fast-growth startup
  • Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data
Job posted by
Hizam Ismail