In this role you'll:
- Be a core member of the data platform team, setting up the platform foundation while adhering to all required quality standards and design patterns
- Write efficient and quality code that can scale
- Adopt Bookr quality standards, recommend process standards and best practices
- Research, learn & adopt new technologies to solve problems & improve existing solutions
- Contribute to engineering excellence backlog
- Identify performance issues
- Conduct effective code and design reviews
- Improve reliability of overall production system by proactively identifying patterns of failure
- Lead and mentor junior engineers by example
- End-to-end ownership of stories (including design, serviceability, performance, failure handling)
- Strive hard to provide the best experience to anyone using our products
- Conceptualise innovative and elegant solutions to solve challenging big data problems
- Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
- Adhere to company policies, procedures, mission, values, and standards of ethics and integrity
On day one we'll expect:
- B.E./B.Tech from a reputed institution
- Minimum 5 years of software development experience and at least a year of experience leading/guiding people
- Expert coding skills in Python/PySpark or Java/Scala
- Deep understanding in Big Data Ecosystem - Hadoop and Spark
- Must have project experience with Spark
- Ability to independently troubleshoot Spark jobs
- Good understanding of distributed systems
- Fast learner and quickly adapt to new technologies
- Prefer individuals with high ownership and commitment
- Expert hands-on experience with RDBMS
- Ability to work independently as well as collaboratively in a team
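As a rough sketch of the map-reduce style of aggregation that underpins the Spark work described above (using only the Python standard library rather than PySpark itself; the data and names are illustrative):

```python
from collections import Counter
from functools import reduce

# Hypothetical log lines; in a real Spark job these would live in an RDD or
# DataFrame partitioned across the cluster rather than in a local list.
partitions = [
    ["error timeout", "ok", "error timeout"],
    ["ok", "error disk"],
]

# "Map" step: count events within each partition independently.
partial_counts = [
    Counter(word for line in part for word in line.split())
    for part in partitions
]

# "Reduce" step: merge the per-partition counts, much as reduceByKey would.
totals = reduce(lambda a, b: a + b, partial_counts)

print(totals["error"])  # 3
```

The same shape (independent per-partition work, then a cheap merge) is what lets a Spark job scale horizontally.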
Added bonuses if you have:
- Hands-on experience with EMR/Glue/Databricks
- Hands-on experience with Airflow
- Hands-on experience with the AWS Big Data ecosystem
We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunistic yet balanced work environment for savvy, entrepreneurial tech individuals. We thrive on remote work, with teams working across multiple timezones.
- Flexible hours & Remote work - We are a results focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
- Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
- Stock Options - Opportunity to participate in Company stock plan
- Flat hierarchy - Team leaders at your fingertips
- BFC (stands for bureaucracy-free company) - We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises - we'd rather get our hands dirty!
- Working alongside leaders - As part of the core team, you will work directly with the founding and management team
About Bookr Inc
- Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
- Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
- Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
- Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
- Collaborate with product development and dev ops teams in implementing the data collection and aggregation solutions
- Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
- Analyse large amounts of information to discover trends and patterns
- Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Bachelor’s or Master’s degree in a highly numerate discipline such as Engineering, Science or Economics
- 2-6 years of proven experience working as a Data Engineer preferably in ecommerce/web based or consumer technologies company
- Hands-on experience working with different big data tools like Hadoop, Spark, Flink, Kafka and so on
- Good understanding of AWS ecosystem for big data analytics
- Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
- Hands-on experience in scripting languages like Python, Scala, Unix shell scripting and so on
- Strong problem solving skills with an emphasis on product development.
- Experience using business intelligence tools, e.g. Tableau, Power BI, would be an added advantage (not mandatory)
We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.
We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.
Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!
The technology stack at Velocity comprises a wide variety of cutting-edge technologies: NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.
Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality
Work with the Office of the CTO as an active member of our architecture guild
Writing pipelines to consume the data from multiple sources
Writing a data transformation layer using DBT to transform millions of rows of data in the data warehouse.
Implement data warehouse entities with common, reusable data model designs, with automation and data quality capabilities
Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
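A minimal sketch of the ELT pattern described above, with the standard library's sqlite3 standing in for the warehouse and illustrative table names (a real implementation would target the production warehouse and express the transform step as a DBT model):

```python
import sqlite3

# sqlite3 stands in for the warehouse here; table and column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")

# "EL" steps: extract from a source and load the rows untransformed.
rows = [(1, 100.0, "paid"), (2, 250.0, "paid"), (3, 80.0, "refunded")]
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", rows)

# "T" step: transform inside the warehouse with SQL, the way a DBT model would.
conn.execute("""
    CREATE TABLE fct_revenue AS
    SELECT status, SUM(amount) AS total
    FROM raw_orders
    GROUP BY status
""")

total_paid = conn.execute(
    "SELECT total FROM fct_revenue WHERE status = 'paid'"
).fetchone()[0]
print(total_paid)  # 350.0
```

Loading raw data first and transforming it in-warehouse (rather than before loading, as in classic ETL) is what makes the transform layer versionable and re-runnable.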
What To Bring
5+ years of software development experience; startup experience is a plus.
Past experience of working with Airflow and DBT is preferred
5+ years of experience working in any backend programming language.
Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL
Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)
Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects
Experience building and deploying applications on both on-premise and cloud-based (AWS or Google Cloud) infrastructure
Basic understanding of Kubernetes & Docker is a must.
Experience in data processing (ETL, ELT) and/or cloud-based platforms
Working proficiency and communication skills in verbal and written English.
This position is for Big Data Engineer/Lead specialized in Hadoop, Spark and AWS Data Engineering technologies with 3 to 12 years of experience.
Roles & Responsibilities
For this role, we require someone with strong product design sense. The position requires one to work on complex technical projects and closely work with peers in an innovative and fast-paced environment.
- Grow our analytics capabilities with faster, more reliable data pipelines, and better tools, handling petabytes of data every day.
- Brainstorm and create new platforms, and migrate existing ones to AWS, to help in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
- Make changes to our data platform, refactoring/redesigning as needed and diagnosing any problems across the entire technical stack.
- Design and develop a real-time events pipeline for data ingestion for real-time dashboarding.
- Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
- Design & implement new components and various emerging technologies in AWS and the Hadoop ecosystem, and ensure successful execution of various projects.
- Optimize and improve existing features or data processes for performance and stability.
- Conduct peer design and code reviews.
- Write unit tests and support continuous integration.
- Be obsessed with quality and ensure minimal production downtime.
- Mentor peers, share information and knowledge and help build a great team.
- Monitor job performances, file system/disk-space management, cluster & database connectivity, log files, management of backup/security and troubleshooting various user issues.
- Collaborate with various cross-functional teams: infrastructure, network, database.
Must have skills: Python, AWS, Scala, Spark, Hadoop, Big Data Analytics
- Fluent with data structures, algorithms and design patterns.
- Strong hands-on experience with Hadoop, MapReduce, Hive, Spark.
- Excellent programming/debugging skills in Java/Scala.
- Experience with any scripting language such as Python, Bash etc.
- Good to have experience working with NoSQL databases like HBase, Cassandra.
- Experience with BI Tools like AWS QuickSight, Dashboarding and Metrics.
- Hands-on programming experience with multithreaded applications.
- Good to have experience with databases, SQL, and message queues like Kafka.
- Good to have experience in developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc.
- Good to have experience with AWS and cloud technologies such as S3.
- Experience with caching architectures like Redis, Memcached etc.
- Memory optimization and GC tuning.
- Experience with profiling and performance optimizations.
- Experience with agile development methodologies and DevOps practices.
Required: Python, R
Experience in handling large-scale data engineering pipelines.
Excellent verbal and written communication skills.
Proficient in PowerPoint or other presentation tools.
Ability to work quickly and accurately on multiple projects.
- Participate in planning, implementation of solutions, and transformation programs from legacy system to a cloud-based system
- Work with the team on Analysis, High level and low-level design for solutions using ETL or ELT based solutions and DB services in RDS
- Work closely with the architect and engineers to design systems that effectively reflect business needs, security requirements, and service level requirements
- Own deliverables related to design and implementation
- Own Sprint tasks and drive the team towards the goal while understanding the change and release process defined by the organization.
- Excellent communication skills, particularly those relating to complex findings and presenting them to ensure audience appeal at various levels of the organization
- Ability to integrate research and best practices into problem avoidance and continuous improvement
- Must be able to perform as an effective member in a team-oriented environment, maintain a positive attitude, and achieve desired results while working with minimal supervision
- 5+ years of technical work experience in the implementation of complex, large-scale, enterprise-wide projects, including analysis, design, core development, and delivery
- 3+ years of experience with expertise in Informatica ETL, Informatica PowerCenter, and Informatica Data Quality
- Experience with Informatica MDM tool is good to have
- Should be able to understand the scope of the work and ask for clarifications
- Should have advanced SQL skills, including complex PL/SQL coding skills
- Knowledge of Agile is a plus
- Well-versed with SOAP, web services, and REST APIs.
- Hands-on development using Java would be a plus.
Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.
Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.
Antuit.ai is interested in hiring a Principal Data Scientist. This person will help stand up a standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design, and tuning of the product to meet business needs.
Responsibilities include, but are not limited to, the following:
- Manage and provide technical expertise to the delivery team. This includes recommendation of solution alternatives, identification of risks, and managing business expectations.
- Design and build reliable, scalable automated processes for large-scale machine learning.
- Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning.
- Collaborate with Business, Technology and Product teams to stand-up MLOps process.
- Apply your experience in making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture design, and workflow tools.
- Deep dive into complex algorithmic and product issues in production
- Own metrics and reporting for delivery team.
- Set a clear vision for the team members and work cohesively to attain it.
- Mentor and coach team members
Qualifications and Skills:
- Engineering degree in any stream
- At least 7 years of prior experience in building ML-driven products/solutions
- Excellent programming skills in at least one of C++, Python, or Java
- Hands-on experience with open-source libraries and frameworks - TensorFlow, PyTorch, MLflow, Kubeflow, etc.
- Has developed and productized large-scale models/algorithms in prior roles
- Can drive fast prototypes/proofs of concept when evaluating various technologies, frameworks, and performance benchmarks
- Familiar with software development practices/pipelines (DevOps - Kubernetes, Docker containers, CI/CD tools)
- Good verbal, written and presentation skills.
- Ability to learn new skills and technologies.
- 3+ years working with retail or CPG preferred.
- Experience in forecasting and optimization problems, particularly in the CPG / Retail industry preferred.
Information Security Responsibilities
- Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems.
- Take part in Information Security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
If you are:
1. An expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing, and
3. Equipped with a good understanding of linear algebra, optimization techniques, statistics and pattern recognition,
then you are the right fit for this position.
Roles & Responsibilities:
· You will be involved in every part of the project lifecycle, right from identifying the business problem and proposing a solution, to data collection, cleaning, and preprocessing, to training and optimizing ML/DL models and deploying them to production.
· You will often be required to design and execute proof-of-concept projects that can demonstrate business value and build confidence with CloudMoyo’s clients.
· You will be involved in designing and delivering data visualizations that utilize the ML models to generate insights and intuitively deliver business value to CXOs.
Desired Skill Set:
· Candidates should have strong Python coding skills and be comfortable working with various ML/DL frameworks and libraries.
· Hands-on skills and industry experience in one or more of the following areas is necessary:
1) Deep Learning (CNNs/RNNs, Reinforcement Learning, VAEs/GANs)
2) Machine Learning (Regression, Random Forests, SVMs, K-means, ensemble methods)
3) Natural Language Processing
4) Graph Databases (Neo4j, Apache Giraph)
5) Azure Bot Service
6) Azure ML Studio / Azure Cognitive Services
7) Log Analytics with NLP/ML/DL
· Previous experience with data visualization, C# or Azure Cloud platform and services will be a plus.
· Candidates should have excellent communication skills and be highly technical, with the ability to discuss ideas at any level from executive to developer.
· Creative problem-solving, unconventional approaches, and a hacker mindset are highly desired.
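As a toy instance of the regression techniques listed under the Machine Learning area above (plain Python, no framework; the data is fabricated for illustration):

```python
# Simple univariate least squares: fit y = a*x + b in closed form.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]  # exactly y = 2x + 1, so the fit should recover a=2, b=1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance(x, y) / variance(x); intercept falls out of the means.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(round(a, 6), round(b, 6))  # 2.0 1.0
```

In practice a candidate would reach for scikit-learn or a DL framework, but the closed-form solution above is the underlying math for the univariate case.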