Data Architect

at Searce Inc

Posted by Reena Bandekar
Mumbai
5 - 9 yrs
₹15L - ₹22L / yr
Full time
Skills
Big Data
Hadoop
Spark
Apache Hive
ETL
Apache Kafka
Data architecture
Google Cloud Platform (GCP)
Python
Java
Scala
Data engineering
Job Description – Data Architect
As a Data Architect, you will work with business leads, analysts, and data scientists to understand the business domain, and manage data engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and about building flexible solutions that scale to answer broader business questions.
If you love solving problems with your skills, come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What You’ll Do
● Understand the business problem and translate it into data services and engineering outcomes
● Explore new technologies and learn new techniques to solve business problems creatively
● Collaborate with many teams - engineering and business, to build better data products
● Manage the team and handle delivery of 2-3 projects

What We’re Looking For
● 4-6+ years of experience, including:
○ Hands-on experience with at least one programming language (Python, Java, Scala)
○ A solid understanding of SQL (a must)
○ Big data (Hadoop, Hive, YARN, Sqoop)
○ MPP platforms (Spark, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Experience with any relational database or data warehouse
○ Experience with any ETL tool
● Hands-on experience in pipeline design, ETL and application development
● Hands-on experience in cloud platforms like AWS, GCP etc.
● Good communication skills and strong analytical skills
● Experience in team handling and project delivery
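To make the pipeline side of this skill list concrete, here is a deliberately tiny sketch of the extract–transform–load shape that tools like Spark, Hive, and ETL schedulers industrialize. It is illustrative only: every name, record, and the FX rate are hypothetical, and a real pipeline would run these stages as Spark or Hive jobs under a scheduler such as Airflow.

```python
# Toy ETL pipeline: the three-stage shape behind the tooling above.
# All names, records, and the FX rate are hypothetical.

def extract(source_rows):
    """Pull raw records from a source (here, an in-memory list)."""
    return list(source_rows)

def transform(rows):
    """Clean and enrich: drop incomplete records, derive a field."""
    cleaned = [r for r in rows if r.get("amount") is not None]
    for r in cleaned:
        r["amount_usd"] = round(r["amount"] / 83.0, 2)  # hypothetical FX rate
    return cleaned

def load(rows, sink):
    """Write the transformed records to a sink (here, a list)."""
    sink.extend(rows)
    return len(rows)

source = [{"id": 1, "amount": 8300.0}, {"id": 2, "amount": None}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
print(loaded, warehouse[0]["amount_usd"])  # 1 100.0
```

The same three-stage shape scales from this toy list to terabyte tables; only the execution engine changes.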

About Searce Inc

Searce is a cloud, automation & analytics-led process improvement company helping futurify businesses. Searce is a premier partner for Google Cloud across all products and services, and the largest cloud systems integrator for enterprises, with the largest number of enterprise Google Cloud clients in India.

 

Searce specializes in helping businesses move to the cloud, build on the next-generation cloud, and adopt SaaS: helping reimagine the ‘why’ and redefine ‘what’s next’ for workflows, automation, machine learning, and related futuristic use cases. Searce was recognized by Google as one of its top partners for 2015 and 2016.

 

Searce's organizational culture encourages making mistakes and questioning the status quo, which allows us to specialize in simplifying complex business processes and to use a technology-agnostic approach to create, improve, and deliver.

 

Founded
2004
Type
Products & Services
Size
100-1000 employees
Stage
Profitable

Similar jobs

GCP-Data/Bigdata Lead Engineer

at Hiring for MNC company

Agency job
via Response Informatics
Big Data
Data engineering
Hadoop
Spark
Apache Hive
Google Cloud Platform (GCP)
Hyderabad, Bengaluru (Bangalore), Chennai, Pune
5 - 7 yrs
₹5L - ₹25L / yr

Job roles and responsibilities:

  • A minimum of 3-4 years of hands-on experience designing, building, and operationalizing large-scale enterprise data solutions and applications using GCP data and analytics services such as Cloud Dataproc, Cloud Dataflow, BigQuery, Cloud Pub/Sub, and Cloud Functions.
  • Hands-on experience analyzing, re-architecting, and re-platforming on-premise data warehouses to data platforms on GCP using GCP and third-party services.
  • Experience designing and building data pipelines within a hybrid big data architecture using Java, Python, Scala, and GCP-native tools.
  • Hands-on experience orchestrating and scheduling data pipelines using Cloud Composer and Airflow.
  • Experience performing detailed assessments of current-state data platforms and creating an appropriate transition path to GCP.

Technical Skills Required:

  • Strong experience with GCP data and analytics services
  • Working knowledge of the big data ecosystem: Hadoop, Spark, HBase, Hive, Scala, etc.
  • Experience building and optimizing data pipelines in Spark
  • Strong skills in orchestrating workflows with Cloud Composer / Apache Airflow
  • Good knowledge of object-oriented scripting languages: Python (must have) and Java or C++
  • Good to have: experience building CI/CD pipelines with Cloud Build and native GCP services
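Cloud Composer and Airflow both model a pipeline as a DAG of tasks that run only after their upstream dependencies complete. As a toy illustration of that idea (this is not the Airflow API; the task names and dependencies are made up), a dependency-ordered runner can be sketched with the standard library:

```python
# Toy DAG runner illustrating the orchestration model behind
# Airflow/Composer: a task runs only after its upstream tasks finish.
# Task names and dependencies here are hypothetical.

from graphlib import TopologicalSorter

def run_dag(dependencies, actions):
    """Execute actions in an order consistent with the dependency DAG."""
    order = list(TopologicalSorter(dependencies).static_order())
    results = {}
    for task in order:
        results[task] = actions[task]()
    return order, results

deps = {
    "load_bq": {"transform"},   # load_bq depends on transform
    "transform": {"extract"},   # transform depends on extract
    "extract": set(),
}
actions = {
    "extract": lambda: "raw rows",
    "transform": lambda: "clean rows",
    "load_bq": lambda: "loaded to BigQuery",
}
order, results = run_dag(deps, actions)
print(order)  # ['extract', 'transform', 'load_bq']
```

Real orchestrators add what this sketch omits: retries, scheduling, backfills, and distributed execution.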
Job posted by
Swagatika Sahoo

Senior Data Engineer

at Waterdip Labs

Founded 2021  •  Products & Services  •  0-20 employees  •  Profitable
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
Apache Spark
SQL
Amazon Redshift
Apache Kafka
Amazon Web Services (AWS)
Git
CI/CD
Apache airflow
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹30L / yr
About The Company
 
Waterdip Labs is a deep tech company founded in 2021. We are building an open-source observability platform for AI. Our platform helps data engineers and data scientists observe data and ML model performance in production.
Apart from the product, we help a few of our clients build data and ML products.
Join us to help build India's first open-source MLOps product.

About The Founders
Both founders are second-time founders. Their first venture, Inviz AI Solutions (https://www.inviz.ai), is a bootstrapped venture that became a prominent software service provider with several Fortune 500 clients and a team of 100+ engineers.
Subhankar is an IIT Kharagpur alum with 10+ years of experience in the software industry. He has built world-class data and ML systems for companies like Target, Tesco, and Falabella, and SaaS products for multiple India- and USA-based start-ups. https://www.linkedin.com/in/wsubhankarb/
Gaurav is an IIT Dhanbad alum. He started his career at Tesco Labs as a data scientist for retail applications and gradually moved into a more techno-functional role at major retail companies like Tesco, Falabella, Fazil, Lowes, and the Aditya Birla group. https://www.linkedin.com/in/kumargaurav2596/
They started Waterdip with a vision to build world-class open-source software out of India.

About the Job
The client is a publicly owned, global technology company with 48 offices in 17 countries. It provides software design and delivery, tools, and consulting services. The client is closely associated with the agile software development movement and has contributed content to open-source products.

Job responsibilities
• Analyze and organize raw data; build data pipelines and test scripts
• Understand the business process and job orchestration from the SSIS packages
• Explore ways to enhance data quality, optimize processing, and improve reliability
• Develop data pipelines to process event-streaming data
• Implement data standards, policies, and data security
• Develop CI/CD pipelines to deploy and maintain data pipelines
 
Job qualifications
5-7 years of experience
 
Technical skills
• You should have experience with Python and one or more other development languages
• Working experience with SQL
• Exposure to data processing tooling and data visualization
• Comfortable with Azure, CI/CD, and Git
• An understanding of data quality, data pipelines, data storage, distributed systems architecture, data security, data privacy & ethics, data modeling, data infrastructure & operations, and Business Intelligence
• Bonus points for prior working knowledge of creating data products and/or prior experience with Azure Data Catalog and Azure Event Hub
• A workflow management platform like Airflow
• A large-scale data processing tool like Apache Spark
• A distributed messaging platform like Apache Kafka
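The event-streaming responsibility above boils down to a consume-and-process loop. Here is a minimal in-memory sketch of that pattern, with a deque standing in for a Kafka topic partition; no real Kafka client is involved and all names are illustrative:

```python
# Minimal consume-process loop illustrating the event-streaming
# pattern behind Kafka pipelines. The in-memory deque stands in
# for a real topic/consumer; all names are illustrative.

from collections import deque

topic = deque()  # stand-in for a Kafka topic partition

def produce(event):
    """Append an event to the topic."""
    topic.append(event)

def consume_batch(process, max_events=100):
    """Drain up to max_events from the topic, applying `process`."""
    out = []
    while topic and len(out) < max_events:
        out.append(process(topic.popleft()))
    return out

produce({"user": "a", "clicks": 3})
produce({"user": "b", "clicks": 5})
results = consume_batch(lambda e: (e["user"], e["clicks"] * 2))
print(results)  # [('a', 6), ('b', 10)]
```

A real pipeline replaces the deque with a broker client and adds offsets, partitioning, and failure handling; the loop's shape stays the same.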

Professional skills
• You enjoy influencing others and always advocate for technical excellence while being open to change when needed
• Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
• You're resilient in ambiguous situations and can approach challenges from multiple perspectives
Job posted by
Subhankar Biswas

Lead Data Scientist

at Metadata Technology North America

Agency job
via RS Consultants
Data Science
Machine Learning (ML)
Python
sagemaker
Go Programming (Golang)
Scikit-Learn
pandas
NumPy
Amazon Web Services (AWS)
Data Analytics
TensorFlow
Apache Kafka
Real time media streaming
Airflow
Remote only
8 - 16 yrs
₹20L - ₹50L / yr
Data Scientist Lead / Manager
Job Description:
We are looking for an exceptional Data Scientist Lead / Manager who is passionate about data and motivated to build large-scale machine learning solutions that strengthen our data products. This person will contribute to the analysis of data for insight discovery and to the development of a machine learning pipeline that supports modeling of terabytes of daily data for various use cases.

Location: Pune (Initially remote due to COVID 19)

Note: We are looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.


About the Organisation :

- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.

- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom and India.

- You will gain work experience in a global environment. We speak over 20 different languages, from more than 16 different nationalities and over 42% of our staff are multilingual.


Qualifications:
• 8+ years relevant working experience
• Master's or Bachelor's degree in computer science or engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency in various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using one or more of modern tools and libraries such as AWS Sagemaker, Spark ML-Lib, Dask, Tensorflow, PyTorch, Keras, GCP ML Stack
• Exposure to modern Big Data tech such as Cassandra/Scylla, Kafka, Ceph, Hadoop, Spark
• Exposure to IAAS platforms such as AWS, GCP, Azure

Typical persona: Data Science Manager/Architect
Experience: 8+ years programming/engineering experience (with at least last 4 years in Data science in a Product development company)
Type: Hands-on candidate only

Must:
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka
c. Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
d. Good on analytical & debugging skills
e. Strong communication skills

Desired (in order of priority):
a. Go (strong advantage)
b. Airflow (strong advantage)
c. Familiarity and working experience with more than one type of database: relational, object, columnar, graph, and other unstructured databases
d. Data structures, algorithms
e. Experience with multi-threading and thread synchronization concepts
f. AWS SageMaker
g. Keras
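As a toy illustration of the hands-on analytical work the "must" list implies (time series data, analysis, debugging), here is a z-score outlier check in plain Python. The data and threshold are made up, and in practice a candidate would reach for pandas or scikit-learn; this only shows the shape of the baseline:

```python
# Toy z-score anomaly detector for a time series: the kind of
# baseline a hands-on candidate might prototype before reaching
# for scikit-learn. Data and threshold are illustrative.

from statistics import mean, stdev

def zscore_anomalies(series, threshold=2.0):
    """Return indices of points whose z-score exceeds the threshold."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series)
            if abs(x - mu) / sigma > threshold]

daily_requests = [100, 102, 98, 101, 99, 100, 500, 97, 103]
print(zscore_anomalies(daily_requests))  # [6]
```

The spike at index 6 dominates the sample standard deviation, which is why a generous threshold (2.0 here rather than the textbook 3.0) is needed on such a short series.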
Job posted by
Biswadeep RS

Big Data Engineer

at A Telecom Industry

Agency job
via Multi Recruit
Big Data
Apache Spark
Java
Spring Boot
restful
Bengaluru (Bangalore)
6 - 10 yrs
₹16L - ₹18L / yr
  • Expert software implementation and automated testing
  • Promoting development standards, code reviews, mentoring, knowledge sharing
  • Improving our Agile methodology maturity
  • Product and feature design, scrum story writing
  • Build, release, and deployment automation
  • Product support & troubleshooting

 

Who we have in mind: 

  • Demonstrated experience as a Java developer
  • Should have a deep understanding of Enterprise/Distributed Architecture patterns and should be able to demonstrate the relevant usage of the same
  • Turn high-level project requirements into application-level architecture and collaborate with the team members to implement the solution
  • Strong experience and knowledge in Spring boot framework and microservice architecture
  • Experience in working with Apache Spark
  • Solid demonstrated object-oriented software development experience with Java, SQL, Maven, relational/NoSQL databases and testing frameworks 
  • Strong working experience with developing RESTful services
  • Should have experience working on Application frameworks such as Spring, Spring Boot, AOP
  • Exposure to tools – Jira, Bamboo, Git, Confluence would be an added advantage
  • Excellent grasp of the current technology landscape, trends and emerging technologies
Job posted by
Sukanya J
Data Science
Data Scientist
Machine Learning (ML)
R Programming
Python
NCR (Delhi | Gurgaon | Noida)
5 - 10 yrs
₹50L - ₹60L / yr

Job Description – Sr. Data Scientist

 

Credgenics is India's first-of-its-kind NPA resolution platform, backed by credible investors including Accel Partners and Titan Capital. We work with financial institutions, banks, NBFCs, and digital lending firms to improve their collections efficiency using technology, automation, intelligence, and optimal legal routes to facilitate the resolution of stressed assets. With all major banks and NBFCs as our clients, our SaaS-based collections platform helps them efficiently improve their NPAs, geographic reach, and customer experience.

 

About the Role:

We are looking for a highly skilled, experienced, and passionate Sr. Data Scientist who can come on board and help create and build a robust, scalable, and extendable platform to power our mission: reducing the exponentially growing Non-Performing Assets in the Indian economy by harnessing the power of technology and data-driven analytics. Our focus is to provide deep insights that improve collection efficiency across delinquent portfolios of ARCs, banks, and NBFCs.

 

The ideal candidate has worked in a data science role before, is comfortable working with unknowns, can evaluate data and the feasibility of applying scientific techniques to business problems and products, has built platforms from scratch, and has a track record of developing and deploying data science models into live applications.

Responsibilities:

  • Work with the CTO to build the roadmap of data science function and set up the best practices
  • Collaborate with and influence leadership to ensure data science directly impacts strategy
  • Drive an organizational effort toward a better understanding of user needs and pain points, and propose solutions that data science can provide to further this goal
  • Build and deploy Machine Learning models in production systems
  • Building platforms from scratch to solve image recognition and context understanding problems, and also improving search
  • Work with large, complex data sets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed
  • Conduct analysis that includes data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations
  • Develop comprehensive knowledge of data structures and metrics, advocating for changes, where needed for product development
  • Interact cross-functionally, making business recommendations (e.g., cost-benefit, forecasting, experiment analysis) with effective presentations of findings at multiple levels of stakeholders through visual displays of quantitative information
  • Research and develop analysis, forecasting, and optimization methods to improve the product quality
  • Build and prototype analysis pipelines iteratively to provide insights at scale
  • Building/maintaining of reports, dashboards, and metrics to monitor the performance of our products
  • Develop deep partnerships with engineering and product teams to deliver on major cross-functional measurements, testing, and modelling

Requirements and Qualifications:

  • Bachelor's or Master's degree in a technology-related field from a premier college
  • Prior 5+ years of experience of leading data science team in a start-up environment
  • Experience working on unstructured data is a plus
  • Deep knowledge of designing, planning, testing, and deploying analytical solutions
  • Implementing advanced AI solutions using at least one scripting language (e.g. Python, R) 
  • Customer oriented, responsive to changes, and able to multi-task in a fast-paced environment


Location:

This role will be based out of New Delhi

 

We offer an innovative, fast-paced, and entrepreneurial work environment where you’ll be at the centre of leading change by leveraging technology and creating boundless impact in the FinTech ecosystem.
Job posted by
Deleted User

Director | Applied AI

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Artificial Intelligence (AI)
Natural Language Processing (NLP)
Machine Learning (ML)
Deep Learning
Research and development
TensorFlow
Spark
Hadoop
Data Analytics
Data Science
Engineering Management
Django
HTML/CSS
Flask
Google Cloud Platform (GCP)
Pune
10 - 14 yrs
₹30L - ₹35L / yr

Director - Applied AI


Who we are?

Searce is a niche Cloud Consulting business with a futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.


What do we believe?

  • Best practices are overrated
      • Implementing best practices can only make one average.
  • Honesty and Transparency
      • We believe in naked truth. We do what we tell and tell what we do.
  • Client Partnership
    • Client - Vendor relationship: No. We partner with clients instead. 
    • And our sales team comprises 100% of our clients.

How do we work ?

It’s all about being Happier first. And the rest follows. Searce’s work culture is defined by HAPPIER.

  • Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
  • Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
  • Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
  • Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
  • Innovative: Innovate or Die. We love to challenge the status quo.
  • Experimental: We encourage curiosity & making mistakes.
  • Responsible: Driven. Self motivated. Self governing teams. We own it.

So, what are we hunting for ?

  1. To devise strategy through the delivery of sustainable intelligent solutions, strategic customer engagements, and research and development
  2. To enable and lead our data and analytics team and develop machine learning and AI paths across strategic programs, solution implementation, and customer relationships
  3. To manage existing customers and realize new opportunities and capabilities of growth
  4. To collaborate with different stakeholders for delivering automated, high availability and secure solutions
  5. To develop talent and skills to create a high performance team that delivers superior products
  6. To communicate effectively across the organization to ensure that the team is completely aligned to business objectives
  7. To build strong interpersonal relationships with peers and other key stakeholders that will contribute to your team's success

Your bucket of Undertakings :

  1. Develop an AI roadmap aligned to client needs and vision
  2. Develop a Go-To-Market strategy of AI solutions for customers
  3. Build a diverse cross-functional team to identify and prioritize key areas of the business across AI, NLP and other cognitive solutions that will drive significant business benefit
  4. Lead AI R&D initiatives to include prototypes and minimum viable products
  5. Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc. 
  6. Build reusable and scalable solutions for use across the customer base
  7. Create AI white papers and enable strategic partnerships with industry leaders
  8. Align, mentor, and manage team(s) around strategic initiatives
  9. Prototype and demonstrate AI related products and solutions for customers
  10. Establish processes, operations, measurement, and controls for end-to-end life-cycle management of the digital workforce (intelligent systems)
  11. Lead AI tech challenges and proposals with team members
  12. Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
  13. Identify new business opportunities and prioritize pursuits for AI 

Education & Experience : 

  1. Advanced or basic degree (a PhD with a few years of experience, or an MS/BS with many years of experience) in a quantitative field such as CS, EE, information sciences, statistics, mathematics, economics, or operations research, with a focus on applied and foundational Machine Learning, AI, NLP, and/or data-driven statistical analysis & modelling
  2. 10+ years of experience, mainly in applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions to multiple domains; experience with financial engineering and financial processes is a plus
  3. Strong, proven programming skills with machine learning, deep learning, and big data frameworks including TensorFlow, Caffe, Spark, and Hadoop; experience writing complex programs and implementing custom algorithms in these and other environments
  4. Experience beyond using open source tools as-is, and writing custom code on top of, or in addition to, existing open source frameworks
  5. Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
  6. Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
  7. Excellent communication skills (oral and written) to explain complex algorithms, solutions to stakeholders across multiple disciplines, and ability to work in a diverse team
Job posted by
Mishita Juneja

Data Engineer

at Streetmark

SCCM
PL/SQL
APPV
Stani's Python Editor
AWS Simple Notification Service (SNS)
Amazon Web Services (AWS)
Python
Microsoft App-V
Remote, Bengaluru (Bangalore), Chennai
3 - 9 yrs
₹3L - ₹20L / yr

Hi All,

We are hiring a Data Engineer for one of our clients, for the Bangalore and Chennai locations.


Strong knowledge of SCCM, App-V, and Intune infrastructure.

Powershell/VBScript/Python,

Windows Installer

Knowledge of Windows 10 registry

Application Repackaging

Application Sequencing with App-v

Deploying and troubleshooting applications, packages, and Task Sequences.

Security patch deployment and remediation

Windows operating system patching and defender updates

 

Thanks,
Mohan.G

Job posted by
Mohan Guttula

Data Scientist

at first principle labs

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Data Science
Python
R Programming
Big Data
Hadoop
Pune
3 - 7 yrs
₹12L - ₹18L / yr
The selected candidate would be part of the in-house Data Labs team. He/she would be responsible for creating an insights-driven decision structure.

This will include:

Scorecards
Strategies
MIS

The verticals included are:

Risk
Marketing
Product
Job posted by
Ankit Goenka

Data Insights & Analyst

at Spoonshot Inc.

Founded 2015  •  Products & Services  •  20-100 employees  •  Raised funding
Data Analytics
Data Visualization
Analytics
SQLite
PowerBI
Data management
Python
Big Data
SQL
Tableau
Bengaluru (Bangalore)
1 - 4 yrs
₹9L - ₹15L / yr
- Prior experience in Business Analytics and knowledge of related analysis or visualization tools
- Expecting a minimum of 2-4 years of relevant experience
- You will be managing a team of 3 currently
- Take up the ownership of developing and managing one of the largest and richest food (recipe, menu, and CPG) databases
- Interactions with cross-functional teams (Business, Food Science, Product, and Tech) on a regular basis to pan the future of client and internal food data management
- Should have a natural flair for playing with numbers and data and have a keen eye for detail and quality
- Will spearhead the Ops team in achieving the targets while maintaining a staunch attentiveness to Coverage, Completeness, and Quality of the data
- Shall program and manage projects while identifying opportunities to optimize costs and processes.
- Good business acumen in creating logic & process flows; quick and smart decision-making skills are expected
- Will also be responsible for the recruitment, induction, and training of new members
- Setting competitive team targets. Guide and support the team members to go the extra mile and achieve set targets


Added Advantages :
- Experience in a Food Sector / Insights company
- Has a passion for exploring different cuisines
- Understands industry-related jargons and has a natural flair towards learning more about anything related to food
Job posted by
Rajesh Bhutada

SDE III - Data

at Dream Game Studios

Founded 2012  •  Product  •  100-500 employees  •  Raised funding
Big Data
Java
Scala
Web Scraping
Cassandra
Data Modeling
Mumbai, Navi Mumbai
3 - 9 yrs
₹12L - ₹30L / yr
Your Role:

  • As an integral part of the Data Engineering team, be involved in the entire development lifecycle from conceptualization to architecture to coding to unit testing
  • Build a realtime and batch analytics platform for analytics & machine learning
  • Design, propose and develop solutions keeping the growing scale & business requirements in mind
  • Help us design the Data Model for our data warehouse and other data engineering solutions

Must Have:

  • Understands Data very well and has extensive Data Modelling experience
  • Deep understanding of real-time as well as batch-processing big data technologies (Spark, Storm, Kafka, Flink, MapReduce, Yarn, Pig, Hive, HDFS, Oozie, etc.)
  • Experience developing applications that work with NoSQL stores (e.g., ElasticSearch, HBase, Cassandra, MongoDB, CouchDB)
  • Proven programming experience in Java or Scala
  • Experience in gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, writing SQL queries, etc.
  • Experience with cloud-based data stores like Redshift and BigQuery is an advantage

Bonus:

  • Love sports – especially cricket and football
  • Have worked previously in a high-growth tech startup
Job posted by
Vivek Pandey