Data Architect

at I Base IT

Posted by Sravanthi Alamuri
Hyderabad
9 - 13 yrs
₹10L - ₹23L / yr
Full time
Skills
Data Analytics
Data Warehouse (DWH)
Data Structures
Spark
Architecture
cube building
data lake
Hadoop
Java
Data Architect who will lead a team of 5 members. Required skills: Spark, Scala, Hadoop.

About I Base IT

Founded: 2011
Stage: Raised funding
About: We are a custom software development company. We understand the challenges involved in implementing ambitious projects, and we know how to overcome them.
Connect with the team
Sravanthi Alamuri
Why apply to jobs via Cutshort
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Move faster with AI
We use AI to get you faster responses, recommendations, and an unmatched user experience.
2,101,133 matches delivered
3,712,187 network size
15,000 companies hiring

Similar jobs

AI-powered cloud-based SaaS solution
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
+18 more
Responsibilities

● Contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrate technical expertise and solve challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units to drive results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years' experience designing and developing applications in Data Engineering
● Hands-on experience with the big data ecosystem: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following object-oriented languages: Java/J2EE, Scala, Python
● Strong leadership experience: leading meetings, presenting if required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with cloud platforms, preferably AWS
● Good understanding of and ability to develop software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
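For context only (not part of the posting), a minimal PySpark sketch of the kind of batch job a Spark-on-EMR-with-S3 stack like the one above typically runs; the bucket names, paths, and column names are illustrative placeholders.

# Minimal PySpark batch job of the kind the Spark/EMR/S3 stack above implies.
# Bucket names, paths, and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-usage-rollup").getOrCreate()

# Read raw JSON events landed in S3 (e.g. by a Kafka -> S3 sink).
events = spark.read.json("s3://example-raw-bucket/events/dt=2024-01-01/")

# Aggregate per device per day, then write back to S3 as partitioned Parquet.
daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("device_id", "event_date")
    .agg(F.count("*").alias("event_count"),
         F.sum("energy_wh").alias("total_energy_wh"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/daily_usage/"
)

spark.stop()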
at Amber
Posted by Aarti Sharma
Pune
2 - 3 yrs
₹15L - ₹17L / yr
Python
Amazon Web Services (AWS)
Big Data
ETL
Java
+9 more

About Amber (https://amberstudent.com)
Long-term accommodation booking platform for students (think booking.com for student housing). Amber helps 80M students worldwide find and book full-time accommodation near their universities, without the hassle of negotiation, non-standardized and cumbersome paperwork, and a broken payment process.

We are the leading student housing platform globally, with 1M+ student housing units listed in 6 countries and across 80 cities.

We are growing rapidly and targeting $400M in annual gross bookings value by 2022.
If you are passionate about making international mobility and living seamless and accessible, join us in building the future of student housing!
We are among the fastest-growing companies in Asia-Pacific, as per the Financial Times: https://www.ft.com/high-growth-asia-pacific-ranking-2022

 

Responsibilities
  • In charge of converting raw data into usable information for analytics and business decision-making
  • Setting up accurate data pipelines to structure the data and optimize cost
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
  • Work with stakeholders including the Executive, Product, Analytics and Design teams to assist with data-related technical issues and support their data infrastructure needs.

 

Requirements
  • Minimum 2 years of previous experience as a data engineer or in a similar role.
  • Technical expertise in data models, data mining, and segmentation techniques.
  • Knowledge of and hands-on experience with programming languages (e.g. Java, Python, and Scala).
  • Hands-on experience with SQL database design and AWS Lambda functions.
  • Experience with big data tools: Spark and Kafka.
  • Experience with AWS cloud services: Redshift and S3.
  • Experience with ETL frameworks like AWS Glue.
  • Experience in designing data warehousing and streaming processes.
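As a rough illustration of the AWS Lambda plus S3/Redshift combination listed above (not something prescribed by the posting), a Lambda handler might load newly landed S3 files into Redshift via the Redshift Data API; the cluster, database, table, and IAM role names below are placeholders.

# Hypothetical AWS Lambda handler: when a file lands in S3, issue a Redshift
# COPY via the Redshift Data API. Cluster, database, table, and IAM role names
# are placeholders, not values from the job posting.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # S3 put-event structure: take the first record's bucket and key.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    copy_sql = (
        f"COPY analytics.bookings FROM 's3://{bucket}/{key}' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy' "
        "FORMAT AS PARQUET;"
    )

    # The Data API is asynchronous: Redshift runs the COPY and the Lambda returns.
    response = redshift_data.execute_statement(
        ClusterIdentifier="example-cluster",
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )
    return {"statement_id": response["Id"], "loaded_key": key}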

 

What will you get from Amber:
  • Fast-paced growth (can skip intermediate levels)
  • Total freedom and authority (everything under you, just get the job done!)
  • Open and Inclusive Environment
  • Great Compensation (and ESOPs)
Posted by Harshit Sharma
Remote only
8 - 15 yrs
₹10L - ₹27L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more

Responsibilities include: 

  • Convert machine learning models into application programming interfaces (APIs) so that other applications can use them
  • Build AI models from scratch and help the different components of the organization (such as product managers and stakeholders) understand what results they gain from the model
  • Build data ingestion and data transformation infrastructure
  • Automate infrastructure that the data science team uses
  • Perform statistical analysis and tune the results so that the organization can make better-informed decisions
  • Set up and manage AI development and product infrastructure
  • Be a good team player, as coordinating with others is a must
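To make the first responsibility concrete, here is one common (though not prescribed here) way to expose a trained model as an HTTP API using FastAPI and joblib; the model file and feature layout are assumptions for the sketch.

# Illustrative sketch of wrapping a trained model behind an HTTP API with
# FastAPI and joblib; the model file and feature layout are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-api")
model = joblib.load("model.joblib")  # e.g. a scikit-learn estimator trained offline

class Features(BaseModel):
    values: list[float]  # feature vector in the order the model was trained on

@app.post("/predict")
def predict(features: Features) -> dict:
    # predict() expects a 2-D array: one row per sample.
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn app:app --reload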
Startup Login
Agency job
via Startup Login by Sana Adithya
Hyderabad
5 - 25 yrs
Best in industry
Artificial Intelligence (AI)
Data Science
Algorithms
Machine Learning (ML)
Data Structures
AUTHORITY
● Set department objectives.
● Hire, promote, motivate, train, mentor, and incentivize the team.
● Innovate, experiment, and implement new technologies.
● Contribute to the next level of growth for the AI practice.
RESPONSIBILITY
● Lead and manage the AI team within the global AI practice.
● Work closely with data scientists and AI engineers to create and deploy models catering to customer requirements.
● Establish scalable, efficient, automated processes for data analysis, model development, validation, deployment, serving, and monitoring.
● Work closely with the data engineering practice to build and deploy end-to-end AI pipelines, including data processing, model training, and model deployment.
● Build and deploy large-scale, enterprise-ready solutions for AI.
● Own and deliver end-to-end large, complex projects within the AI practice.
● Support the sales and BD process and present to CXO-level client representatives.
● Work with clients to identify new AI opportunities.
● Work together with the Sales, Solutioning, and Engineering teams to develop and propose cutting-edge AI solutions.
● Contribute to building AI proposals, attend orals, and provide easy-to-understand communications on AI.
● Manage client relationships.
● Cooperate with and contribute to global AI programs.
● Review proposed designs and make recommendations for improvement.
● Contribute to and promote good software engineering practices across the team.
● Share knowledge with the team to help adopt best practices.
● Actively contribute to and re-use community best practices.
About Our Company:
● We built an end-to-end AI framework to help our clients accelerate their journey to launch models.
● We work closely with academic experts and research groups to solve some of the niche problems in medical imaging, biopharma, life sciences, law firms, retail, and agriculture.
● Work environment: we offer an environment where you can create an impact on the client's business and transform innovative ideas into reality. Even our junior engineers get the opportunity to work on different product features in complex domains.
● Open communication, flat hierarchy, plenty of individual responsibility.
Diggibyte Technologies
Agency job
Bengaluru (Bangalore)
2 - 3 yrs
₹10L - ₹15L / yr
Scala
Spark
Python
Microsoft Windows Azure
SQL Azure

Hiring for Data Engineer - Bangalore (Novel Tech Park)

Salary: Up to ₹15 LPA

Experience: 3-5 years

  • We are looking for experienced (3-5 years) Data Engineers to join our team in Bangalore.
  • Someone who can help clients build scalable, reliable, and secure data analytics solutions.

 

Technologies you will get to work with:

 

1. Azure Databricks
2. Azure Data Factory
3. Azure DevOps
4. Spark with Python & Scala, and Airflow scheduling
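As an illustration only (not part of this listing), a pipeline on this stack might use PySpark Structured Streaming on Azure Databricks roughly as below; the storage account, paths, and schema are invented for the sketch, and Delta Lake is assumed to be available on the cluster.

# Rough Databricks-style Structured Streaming sketch (PySpark): stream raw JSON
# from an ADLS landing zone into a Delta table. Storage paths and the schema
# are placeholders; Delta Lake is assumed to be available on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

raw = (
    spark.readStream
    .format("json")
    .schema("order_id STRING, amount DOUBLE, event_ts TIMESTAMP")
    .load("abfss://landing@exampleaccount.dfs.core.windows.net/orders/")
)

query = (
    raw.writeStream
    .format("delta")
    .option("checkpointLocation",
            "abfss://bronze@exampleaccount.dfs.core.windows.net/_checkpoints/orders/")
    .outputMode("append")
    .start("abfss://bronze@exampleaccount.dfs.core.windows.net/orders/")
)

query.awaitTermination()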

 

What you will do:

 

* Build large-scale batch and real-time data pipelines with data processing frameworks like Spark and Scala on the Azure platform.

* Collaborate with other software engineers, ML engineers and stakeholders, taking learning and leadership opportunities that will arise every single day.

* Use best practices in continuous integration and delivery.

* Share technical knowledge with other members of the Data Engineering team.

* Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.

* You will get to work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.

 

Job Function: Information Technology
Employment Type: Full-time
Who can apply (Seniority Level): Mid / Entry level

at Thoughtworks
Posted by Vidyashree Kulkarni
Remote only
9 - 15 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.

Job responsibilities
  • You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
  • You will collaborate with Data Scientists in order to design scalable implementations of their models
  • You will pair to write clean and iterative code based on TDD
  • Leverage various continuous delivery practices to deploy, support and operate data pipelines
  • Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
  • Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
  • Create data models and speak to the tradeoffs of different modeling approaches
  • Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
  • Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications

Technical skills

  • You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
  • You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms (such as HDFS, S3, or NoSQL databases like HBase and Cassandra) and any of the distributed processing platforms (Hadoop, Spark, Hive, Oozie, Airflow) in a production setting
  • Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
  • You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
  • Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
  • You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
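Purely as an illustration of the orchestration tools named above (not a detail of this role), a daily pipeline in Airflow might be wired up as follows; the task commands and the schedule are placeholders.

# Illustrative Airflow DAG of the shape such pipelines often take: land data,
# transform it with Spark, then run a data-quality check. Task commands and
# the schedule are placeholders, not details from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="python ingest.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit transform.py --date {{ ds }}",
    )
    quality_check = BashOperator(
        task_id="data_quality_check",
        bash_command="python check_quality.py --date {{ ds }}",
    )

    ingest >> transform >> quality_check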
Professional skills

  • You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
  • An interest in coaching, sharing your experience and knowledge with teammates
  • You enjoy influencing others and always advocate for technical excellence while being open to change when needed
  • Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
Data Warehouse Architect
Agency job
via The Hub by Sridevi Viswanathan
Mumbai
8 - 10 yrs
₹20L - ₹23L / yr
Data Warehouse (DWH)
ETL
Hadoop
Apache Spark
Spark
+4 more
• You will work alongside Project Management to ensure alignment of plans with what is being delivered.
• You will utilize your configuration management and software release experience, as well as change management concepts, to drive the success of projects.
• You will partner with senior leaders to understand and communicate business needs and translate them into IT requirements. Consult with the customer's Business Analysts on their data warehouse requirements.
• You will assist the technical team in identification and resolution of data quality issues.
• You will manage small to medium-sized projects relating to the delivery of applications or application changes.
• You will use managed services or third-party resources to meet application support requirements.
• You will interface daily with multi-functional team members within the EDW team and across the enterprise to resolve issues.
• Recommend and advocate different approaches and designs to the requirements.
• Write technical design docs.
• Execute data modelling.
• Provide solution inputs for the presentation layer.
• You will craft and generate summary, statistical, and presentation reports, as well as provide reporting and metrics for strategic initiatives.
• Perform miscellaneous job-related duties as assigned.

Preferred Qualifications

• Strong interpersonal, teamwork, organizational, and workload planning skills
• Strong analytical, evaluative, and problem-solving abilities, as well as exceptional customer service orientation
• Ability to drive clarity of purpose and goals during release and planning activities
• Excellent organizational skills, including the ability to prioritize tasks efficiently with a high level of attention to detail
• Excited by the opportunity to continually improve processes within a large company
• Healthcare or automobile industry background
• Familiarity with major big data solutions and products available in the market
• Proven ability to drive continuous improvement
at StatusNeo
Posted by Alex P
Remote only
2 - 15 yrs
₹2L - ₹70L / yr
Data engineering
Data Engineer
Python
Big Data
Spark
+1 more
• Proficiency in engineering practices and writing high-quality code, with expertise in at least one of Java, Scala, or Python
• Experience in big data technologies (Hadoop/Spark/Hive/Presto/HBase) and streaming platforms (Kafka/NiFi/Storm)
• Experience in distributed search (Solr/Elasticsearch), in-memory data grids (Redis/Ignite), cloud-native apps, and Kubernetes is a plus
• Experience in building REST services and APIs following best practices of service abstraction and microservices; experience in orchestration frameworks is a plus
• Experience in Agile methodology and CI/CD: tool integration, automation, configuration management
• Being a committer in one of the open-source big data technologies (Spark, Hive, Kafka, YARN, Hadoop/HDFS) is an added advantage
at Indium Software
Posted by Mohammed Shabeer
Remote only
2 - 3 yrs
₹5L - ₹8L / yr
Data Analytics
data analyst
Apache Synapse
SQL
SAP MDG (Master Data Governance)
+1 more
The Data Analyst in the CoE will provide end-to-end solution development, working in conjunction with the Domain Leads and Technology Partners. They are responsible for delivering solutions and solution changes driven by business requirements, as well as providing technical and development capabilities.
at mPaani Solutions Pvt Ltd
Posted by Julie K
Mumbai
3 - 7 yrs
₹5L - ₹15L / yr
Machine Learning (ML)
Python
Data Science
Big Data
R Programming
+2 more
Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
- Data mining using methods like associations, correlations, inferences, clustering, graph analysis, etc.
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume.
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models.
- Care about designing the full machine learning pipeline.
- Extend the company's data with 3rd party sources.
- Enhance data collection procedures.
- Process, clean, and verify collected data.
- Perform ad hoc analysis of the data and present clear results.
- Create advanced analytics products that provide actionable insights.

The Individual: We are looking for a candidate with the following skills, experience, and attributes:

Required:
- 2+ years of work experience in machine learning.
- Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
- Knowledge of machine learning techniques and algorithms.
- Knowledge of languages and toolkits like Python, R, and NumPy.
- Knowledge of data visualization tools like D3.js and ggplot2.
- Knowledge of query languages like SQL, Hive, and Pig.
- Familiarity with big data architecture and tools like Hadoop, Spark, and MapReduce.
- Familiarity with NoSQL databases like MongoDB, Cassandra, and HBase.
- Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with a startup and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
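For a flavour of the clustering work mentioned above (not part of the role description), a small scikit-learn sketch on synthetic customer features; the data and feature names are invented.

# Small illustrative sketch of customer clustering with scikit-learn;
# the synthetic data and feature names are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy customer features: [orders_per_month, avg_basket_value, days_since_last_order]
rng = np.random.default_rng(42)
customers = rng.normal(loc=[4.0, 350.0, 12.0], scale=[2.0, 120.0, 8.0], size=(500, 3))

# Standardize so each feature contributes comparably to the distance metric.
scaled = StandardScaler().fit_transform(customers)

# Cluster into a handful of segments that a recommendation engine could target.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
segments = kmeans.fit_predict(scaled)

for segment_id in range(4):
    size = int((segments == segment_id).sum())
    print(f"segment {segment_id}: {size} customers")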
Did not find a job you were looking for? Search for relevant jobs from 10000+ companies such as Google, Amazon & Uber actively hiring on Cutshort.