● Contribute to gathering functional requirements, developing technical specifications, and project and test planning
● Demonstrate technical expertise and solve challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive results
Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing data engineering applications
● Hands-on experience with big data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS (EMR, S3), Spark, Cassandra, Kafka, ZooKeeper
● Expertise in at least one object-oriented language: Java/J2EE, Scala, or Python
● Strong leadership experience: leading meetings and presenting when required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the software design and architecture process
● Expertise with unit testing and Test-Driven Development (TDD); a minimal sketch follows this list
● Experience with cloud platforms, preferably AWS, is a plus
● Ability to develop software, prototypes, or proofs of concept (POCs) for various data engineering requirements
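Since the posting calls out unit testing and TDD, here is a minimal sketch of the style in Python with pytest: the test is written first against a hypothetical normalize_reading helper, then the smallest implementation that passes it is added. All module and function names here are illustrative assumptions, not part of the role.

# test_readings.py -- written first, per TDD; `readings` and
# `normalize_reading` are hypothetical names used for illustration.
import pytest
from readings import normalize_reading

def test_converts_watt_hours_to_kilowatt_hours():
    assert normalize_reading(1500) == pytest.approx(1.5)

def test_rejects_negative_readings():
    with pytest.raises(ValueError):
        normalize_reading(-10)

# readings.py -- the smallest implementation that makes both tests pass.
def normalize_reading(watt_hours: float) -> float:
    """Convert a raw watt-hour meter reading to kilowatt-hours."""
    if watt_hours < 0:
        raise ValueError("reading cannot be negative")
    return watt_hours / 1000.0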
About Amber (https://amberstudent.com)
A long-term accommodation booking platform for students (think booking.com for student housing). Amber helps 80M students worldwide find and book full-time accommodation near their universities, without the hassle of negotiation, non-standardized and cumbersome paperwork, or a broken payment process.
We are the leading student housing platform globally, with 1M+ student housing units listed in 6 countries and across 80 cities.
We are growing rapidly and targeting $400M in annual gross bookings value by 2022.
If you are passionate about making international mobility and living seamless and accessible, join us in building the future of student housing!
We are among the fastest-growing companies in Asia-Pacific according to the Financial Times (https://www.ft.com/high-growth-asia-pacific-ranking-2022).
Responsibilities
- Convert raw data into usable information for analytics and business decision-making
- Set up accurate data pipelines to structure the data and optimize costs
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
- Work with stakeholders including the Executive, Product, Analytics and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Requirements
- Minimum 2 years of previous experience as a data engineer or in a similar role.
- Technical expertise in data models, data mining, and segmentation techniques.
- Knowledge of, and hands-on experience with, programming languages (e.g., Java, Python, and Scala).
- Hands-on experience with SQL database design and AWS Lambda functions (a minimal Lambda sketch follows this list).
- Experience with big data tools such as Spark and Kafka.
- Experience with AWS cloud services such as Redshift and S3.
- Experience with ETL frameworks such as AWS Glue.
- Experience designing data warehousing and streaming processes.
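As a hedged illustration of the AWS Lambda requirement above: a minimal S3-triggered handler in Python. The bucket layout and JSON payload are assumptions; boto3 ships with the AWS Lambda Python runtime, and the event shape follows the standard S3 notification format.

# handler.py -- minimal sketch of an S3-triggered ingestion Lambda.
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]   # bucket that fired the event
        key = record["s3"]["object"]["key"]       # object that just landed
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        rows = json.loads(body)  # assumes the object holds a JSON array
        # ...validate/transform `rows`, then stage them for Redshift...
    return {"processed": len(event.get("Records", []))}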
What you will get from Amber:
- Fast-paced growth (can skip intermediate levels)
- Total freedom and authority (everything under you, just get the job done!)
- Open and Inclusive Environment
- Great Compensation (and ESOPs)
Responsibilities include:
- Convert machine learning models into application programming interfaces (APIs) so that other applications can use them (see the sketch after this list)
- Build AI models from scratch and help different parts of the organization (such as product managers and stakeholders) understand the results they gain from the model
- Build data ingestion and data transformation infrastructure
- Automate infrastructure that the data science team uses
- Perform statistical analysis and tune the results so that the organization can make better-informed decisions
- Set up and manage AI development and product infrastructure
- Be a good team player, as coordinating with others is a must
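A minimal sketch of the first responsibility in this list: wrapping a trained model in an HTTP API with Flask so other applications can call it. The model file, feature layout, and scikit-learn-style predict() interface are illustrative assumptions.

# serve_model.py -- minimal sketch: expose a pickled model over HTTP.
import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:  # hypothetical pre-trained model
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    prediction = model.predict(features)       # scikit-learn-style interface
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=8080)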
Responsibilities
● Lead and manage the AI team within the global AI practice
● Set department objectives
● Hire, promote, motivate, train, mentor, and incentivize the team
● Innovate, experiment, and implement new technologies
● Contribute to the next level of growth for the AI practice
● Work closely with data scientists and AI engineers to create and deploy models catering to customer requirements
● Establish scalable, efficient, automated processes for data analysis, model development, validation, deployment, serving, and monitoring
● Work closely with the data engineering practice to build and deploy end-to-end AI pipelines, including data processing, model training, and model deployment
● Build and deploy large-scale, enterprise-ready AI solutions
● Own and deliver large, complex end-to-end projects within the AI practice
● Support the sales and BD process and present to CXO-level client representatives
● Work with clients to identify new AI opportunities
● Work with the Sales, Solutioning, and Engineering teams to develop and propose cutting-edge AI solutions
● Contribute to AI proposals, attend orals, and communicate about AI in easy-to-understand terms
● Manage client relationships
● Cooperate with and contribute to global AI programs
● Review proposed designs and make recommendations for improvement
● Contribute to and promote good software engineering practices across the team
● Share knowledge with the team to adopt best practices
● Actively contribute to and re-use community best practices
About Our Company:
● We built an end-to-end AI framework to help our clients accelerate their journey to launching models
● We work closely with academic experts and research groups to solve some of the niche problems in medical imaging, biopharma, life sciences, law firms, retail, and agriculture
● Work environment: we offer an environment where you can create an impact on the client's business and turn innovative ideas into reality. Even our junior engineers get the opportunity to work on different product features in complex domains
● Open communication, flat hierarchy, plenty of individual responsibility
Hiring for Data Engineer - Bangalore (Novel Tech Park)
Salary: up to 15 LPA
Experience: 3-5 years
- We are looking for experienced (3-5 years) Data Engineers to join our team in Bangalore.
- Someone who can help clients build scalable, reliable, and secure data analytics solutions.
Technologies you will get to work with:
1. Azure Databricks
2. Azure Data Factory
3. Azure DevOps
4. Spark with Python & Scala, and Airflow scheduling
What you will do:
* Build large-scale batch and real-time data pipelines with data-processing frameworks such as Spark (Python/Scala) on the Azure platform; a minimal PySpark sketch follows this list.
* Collaborate with other software engineers, ML engineers and stakeholders, taking learning and leadership opportunities that will arise every single day.
* Use best practices in continuous integration and delivery.
* Share technical knowledge with other members of the Data Engineering Team.
* Work in multi-functional agile teams to continuously experiment, iterate and deliver on new product objectives.
* You will get to work with massive data sets and learn to apply the latest big data technologies on a leading-edge platform.
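A minimal PySpark sketch of the batch-pipeline work described above. The storage paths, column names, and bookings schema are assumptions; on Azure Databricks a SparkSession is already provided as spark, so the builder line would be unnecessary there.

# daily_bookings.py -- minimal batch-pipeline sketch in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-bookings").getOrCreate()

# Read raw events from a hypothetical Azure Data Lake path.
raw = spark.read.json("abfss://raw@youraccount.dfs.core.windows.net/bookings/")

# Aggregate confirmed bookings per day.
daily = (
    raw.filter(F.col("status") == "confirmed")
       .groupBy(F.to_date("created_at").alias("day"))
       .agg(F.count("*").alias("bookings"),
            F.sum("amount").alias("gross_value"))
)

# Write the curated output for downstream analytics.
daily.write.mode("overwrite").parquet(
    "abfss://curated@youraccount.dfs.core.windows.net/daily_bookings/")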
Job Functions: Information Technology
Employment Type: Full-time
Seniority Level: Mid / Entry level
Job responsibilities
- You will partner with teammates to create complex data processing pipelines in order to solve our clients' most complex challenges
- You will collaborate with Data Scientists in order to design scalable implementations of their models
- You will pair-program to write clean, iterative code based on TDD
- Leverage various continuous delivery practices to deploy, support and operate data pipelines
- Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
- Create data models and speak to the tradeoffs of different modeling approaches
- Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
- Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
- You have a good understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop (a minimal Kafka sketch follows this list)
- You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
- Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
- You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
- Working with data excites you: you can build and operate data pipelines, and maintain data storage, all within distributed systems
- You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
Professional skills
- You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
- An interest in coaching, sharing your experience and knowledge with teammates
- You enjoy influencing others and always advocate for technical excellence while being open to change when needed
- Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
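Kafka recurs throughout these requirements, so here is a minimal produce/consume round trip using the kafka-python client. The broker address, topic name, and payload are illustrative assumptions.

# kafka_roundtrip.py -- minimal sketch with the kafka-python library.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("page-views", {"user_id": 42, "path": "/home"})  # hypothetical topic
producer.flush()

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # {'user_id': 42, 'path': '/home'}
    break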
• You will utilize your configuration management and software release experience, as well as change management concepts, to drive the success of projects.
• You will partner with senior leaders to understand and communicate business needs and translate them into IT requirements. Consult with the customer's business analysts on their data warehouse requirements.
• You will assist the technical team in identifying and resolving data quality issues.
• You will manage small to medium-sized projects relating to the delivery of applications or
application changes.
• You will use Managed Services or 3rd party resources to meet application support requirements.
• You will interface daily with multi-functional team members within the EDW team and across the
enterprise to resolve issues.
• Recommend and advocate different approaches and designs to meet the requirements
• Write technical design docs
• Execute data modelling
• Provide solution inputs for the presentation layer
• You will craft and generate summary, statistical, and presentation reports; as well as provide reporting and metrics for strategic initiatives.
• Perform miscellaneous job-related duties as assigned
Preferred Qualifications
• Strong interpersonal, teamwork, organizational and workload planning skills
• Strong analytical, evaluative, and problem-solving abilities as well as exceptional customer service orientation
• Ability to drive clarity of purpose and goals during release and planning activities
• Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail
• Excited by the opportunity to continually improve processes within a large company
• Healthcare or automobile industry background
• Familiarity with major big data solutions and products available in the market.
• Proven ability to drive continuous improvement
Proficiency in either one of Java, Scala, or Python
Experience in big data technologies (Hadoop/Spark/Hive/Presto) and streaming platforms (Kafka/NiFi/Storm)
Experience in distributed search (Solr/Elasticsearch), in-memory data grids (Redis/Ignite), cloud-native apps, and Kubernetes is a plus
Experience in building REST services and APIs following best practices of service abstraction and microservices. Experience in orchestration frameworks is a plus
Experience in Agile methodology and CI/CD: tool integration, automation, and configuration management
Being a committer on one of the open-source big data technologies (Spark, Hive, Kafka, YARN, Hadoop/HDFS) is an added advantage