This position is part of the Analytics product Engineering Productivity team. Engineering Productivity works closely with other engineers, data scientists, product teams, and many others to not only increase our systems' scalability and reliability, but also enable the rapid development of new feature code for our customers. As a software engineer, you will play an integral role in building and maintaining anomaly detection integrations, automating engineering process workflows, assisting with monitoring and instrumentation governance, design, and implementation, and much more.
Roles and Responsibilities:
- Object-oriented programming experience in Python
- Experience with stream-based processing models, distributed streaming platforms such as Kafka, or control theory
- Experience with SQL (MySQL, Oracle, or PostgreSQL)
- Solid Computer Science fundamentals with regard to data structures, algorithms, time complexity, etc.
Key Competencies and Skills:
- Solid understanding of statistics and probability
- Experience optimizing and debugging highly performant Python applications is mandatory
- Experience developing and scaling RESTful web services
- Prior work with column stores (Vertica) and NoSQL (Redis, Aerospike)
- Streaming technology: Kafka/Kubernetes experience is mandatory
Education and Qualifications: BA/BS degree and 4+ years of experience, OR MS degree and 2+ years of experience in software engineering (degree in Computer Science or a related field preferred), OR equivalent experience in software development
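The anomaly detection and statistics requirements above can be illustrated with a minimal sketch: a trailing-window z-score detector in pure Python. This is our own illustrative example, not the team's code; the function name, window size, and threshold are assumptions.

```python
from statistics import mean, stdev

def zscore_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Guard against a constant window, where sigma is zero.
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A nearly flat series with one spike: only the spike is flagged.
series = [10.0 + 0.1 * (i % 5) for i in range(40)]
series[30] = 25.0
```

A production integration would of course stream points in rather than take a list, but the windowed-statistics core is the same idea.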
Key skill set: Apache NiFi, Kafka Connect (Confluent), Sqoop, Kylo, Spark, Druid, Presto, RESTful services, Lambda/Kappa architectures
Responsibilities:
- Build a scalable, reliable, operable, and performant big data platform for both streaming and batch analytics
- Design and implement data aggregation, cleansing, and transformation layers
Skills:
- 4+ years of hands-on experience designing and operating large data platforms
- Experience in big data ingestion, transformation, and stream/batch processing technologies such as Apache NiFi, Apache Kafka, Kafka Connect (Confluent), Sqoop, Spark, Storm, and Hive
- Experience in designing and building streaming data platforms using Lambda and Kappa architectures
- Working experience with at least one NoSQL or OLAP data store, such as Druid, Cassandra, Elasticsearch, or Pinot
- Experience with at least one data warehousing tool, such as Redshift, BigQuery, or Azure SQL Data Warehouse
- Exposure to other data ingestion, data lake, and querying frameworks such as Marmaray, Kylo, Drill, and Presto
- Experience in designing and consuming microservices
- Exposure to security and governance tools such as Apache Ranger and Apache Atlas
- Contributions to open source projects are a plus
- Experience with performance benchmarking is a plus
Responsibilities:
- Build a real-time and batch analytics platform for analytics & machine learning.
- Design, propose, and develop solutions keeping the growing scale & business requirements in mind.
- As an integral part of the Data Engineering team, be involved in the entire development lifecycle, from conceptualisation to architecture to coding to unit testing.
- Help us design the data model for our data warehouse and other data engineering solutions.
Requirements:
- Deep understanding of real-time as well as batch-processing big data solutions (Spark, Storm, Kafka, KSQL, Flink, MapReduce, YARN, Hive, HDFS, Pig, etc.).
- Extensive experience developing applications that work with NoSQL stores (e.g., Elasticsearch, HBase, Cassandra, MongoDB).
- Strong understanding of data, with solid data modelling experience.
- Proven programming experience in Java or Scala.
- Experience gathering and processing raw data at scale, including writing scripts, web scraping, calling APIs, writing SQL queries, etc.
- Experience with cloud-based data stores such as Redshift and BigQuery is an advantage.
- Previous experience at a high-growth tech startup would be an advantage.
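The stream-processing skills listed above can be sketched, without any framework, as a tumbling-window aggregation: the basic primitive behind tools like Spark Streaming, Flink, and KSQL. This is a dependency-free illustration; the function name, event shape, and window size are our assumptions, not part of the role.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed-size (tumbling)
    windows and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window.
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Events at t=5s and t=12s fall in window 0; t=61s and t=65s in window 60.
events = [(5, "click"), (12, "view"), (61, "click"), (65, "click")]
```

Real platforms add watermarking for late events and distributed state, but the windowed grouping at the core is the same.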
- Hands-on programming and technical design skills, with a passion for learning new technologies
- Experience building highly scalable, robust, and fault-tolerant services
- 3+ years of experience designing and developing software systems or services
- Good understanding of REST APIs and the web in general
- Ability to build a feature from scratch and drive it to completion
- Working experience with AWS
- Knowledge of designing microservices
- Knowledge of search platforms such as Elasticsearch and Solr
- Knowledge of messaging technology such as Kafka or RabbitMQ
- Startup experience is a strong plus
- Experience with Python is a strong plus
- Critical thinking is a plus
Please apply if and only if you enjoy engineering, wish to write a lot of code, wish to do a lot of hands-on Python experimentation, and already have in-depth knowledge of deep learning. This position is strictly for candidates who understand various neural network architectures and can customize them; it is not for people whose experience is limited to downloading off-the-shelf AI/ML code. This position is not for freshers. We are looking for candidates with at least 2 years of industry experience in AI/ML/CV.
Products@DataWeave:
We, the Products team at DataWeave, build data products that provide timely insights that are readily consumable and actionable, at scale. Our underpinnings are: scale, impact, engagement, and visibility. We help businesses make data-driven decisions every day. We also give them insights for long-term strategy. We are focused on creating value for our customers and helping them succeed.
How we work:
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! Read more on Become a DataWeaver
What do we offer?
- Opportunity to work on some of the most compelling data products that we are building for online retailers and brands.
- Ability to see the impact of your work and the value you are adding to our customers almost immediately.
- Opportunity to work on a variety of challenging problems and technologies to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses, trainings, and tech conferences. Mentorship from seniors in the team.
- Last but not least, competitive salary packages and fast-paced growth opportunities.
Roles and Responsibilities:
● Build a low-latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
● Build robust RESTful APIs that serve data and insights to DataWeave and other products
● Design user interaction workflows on our products and integrate them with data APIs
● Help stabilize and scale our existing systems. Help design the next-generation systems.
● Scale our back-end data and analytics pipeline to handle increasingly large amounts of data.
● Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
● Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Be a tech thought leader. Add passion and vibrancy to the team. Push the envelope.
Skills and Requirements:
● 5-7 years of experience building and scaling APIs and web applications.
● Experience building and managing large-scale data/analytics systems.
● Have a strong grasp of CS fundamentals and excellent problem-solving abilities. Have a good understanding of software design principles and architectural best practices.
● Be passionate about writing code, and have experience coding in multiple languages, including at least one scripting language, preferably Python.
● Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
● Be a self-starter: someone who thrives in fast-paced environments with minimal 'management'.
● Have experience working with multiple storage and indexing technologies, such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
● Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, or Datadog.
● Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
● Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
● It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time.
Show off some of the projects you have hosted on GitHub.
Responsibilities:
- Work with the team to create backend services by translating application storyboards and use cases into functional applications
- Design, build, and maintain efficient, reusable, and reliable Python code
- Ensure the best possible performance, quality, and responsiveness of the applications
- Identify bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and automation
Skills:
- Deep understanding of how RESTful APIs work
- Familiarity with various design and architectural patterns
- Sound knowledge of databases; MongoDB is a must
- Work experience in Python, with knowledge of the Flask framework
- Knowledge of user authentication and authorization between multiple systems, servers, and environments
- Strong unit testing and debugging skills
- Ability to communicate complex technical concepts to both technical and non-technical audiences
- Experience with queue-based streaming systems such as Kafka and Celery
- Understanding of the fundamental design principles behind a scalable application
- Ability to create database schemas that represent and support business processes
- Proficient understanding of code versioning tools, such as Git
We expect an entrepreneurial mindset: someone who is not afraid to take on new challenges every day, and who treats the product as their own by taking complete ownership of it.
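As a concrete illustration of the Flask and REST skills this posting asks for, here is a minimal resource-endpoint sketch. The route, payload shape, and in-memory store are our assumptions; a real service would back this with the MongoDB collection and the authentication the posting describes.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the MongoDB collection the posting mentions.
ITEMS = {}

@app.route("/items", methods=["POST"])
def create_item():
    """Create an item from the JSON body; return it with its new id."""
    payload = request.get_json()
    item_id = str(len(ITEMS) + 1)
    ITEMS[item_id] = payload
    return jsonify({"id": item_id, **payload}), 201

@app.route("/items/<item_id>", methods=["GET"])
def get_item(item_id):
    """Fetch one item, or a 404 error body if the id is unknown."""
    item = ITEMS.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)
```

The unit-testing skills listed above would typically be exercised against `app.test_client()`, which issues requests to these routes without running a live server.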