Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team offers many opportunities to learn, deliver, and grow your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment (see the brief sketch after this list).
- Lead the team in solving complex and unknown problems.
- Solve technical problems and customer issues with deep technical expertise.
- Design and deliver architectures that run optimally on public clouds such as GCP, AWS, and Azure.
- Mentor other team members on high-quality design and implementation.
- Collaborate with Product Management to deliver on customer requirements and innovation.
- Collaborate with Support and field teams to ensure that customers are successful with Dremio.
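To make the concurrency and parallelization responsibilities above concrete, here is a minimal Java sketch, assuming a toy workload: independent data partitions are scanned in parallel on a bounded thread pool and the partial results are combined. The class, method, and data are hypothetical illustrations, not Dremio code.

```java
// Hypothetical illustration only -- not Dremio code.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelPartitionScan {

    // Pretend each partition is just a list of longs; a real engine would
    // stream column batches from distributed storage instead.
    static long scanPartition(List<Long> partition) {
        long sum = 0;
        for (long v : partition) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) throws Exception {
        List<List<Long>> partitions = List.of(
                List.of(1L, 2L, 3L),
                List.of(10L, 20L),
                List.of(100L));

        // Bound parallelism with a fixed-size pool; each partition is scanned
        // independently and the partial results are combined afterwards.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Long>> partials = new ArrayList<>();
            for (List<Long> partition : partitions) {
                partials.add(pool.submit(() -> scanPartition(partition)));
            }
            long total = 0;
            for (Future<Long> partial : partials) {
                total += partial.get(); // propagates failures from any worker
            }
            System.out.println("total = " + total); // prints: total = 136
        } finally {
            pool.shutdown();
        }
    }
}
```

A fixed-size pool keeps resource usage predictable; in a real engine the per-partition work would read from distributed storage rather than an in-memory list.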
Requirements
- B.S./M.S./equivalent degree in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years of experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using the latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed storage systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems, and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
Similar jobs
Wissen Technology is now hiring a Java Developer with hands-on experience in Core Java, multithreading, algorithms, data structures, and SQL.
Required Skills:
- Experience: 5 to 10 years.
- Experience in Core Java 5.0 and above, CXF, Spring.
- Extensive experience in developing enterprise-scale n-tier applications for the financial domain. Should possess good architectural knowledge and be aware of enterprise application design patterns.
- Should have the ability to analyze, design, develop and test complex, low-latency client-facing applications.
- Good development experience with RDBMS, preferably Sybase database.
- Good knowledge of multi-threading and high-volume server-side development.
- Experience in sales and trading platforms in investment banking/capital markets.
- Basic working knowledge of Unix/Linux.
- Excellent problem solving and coding skills in Java.
- Strong interpersonal, communication and analytical skills.
- Should be able to clearly express design ideas and thoughts.
Have you streamed a program on Disney+, watched your favorite binge-worthy series on Peacock or cheered your favorite team on during the World Cup from one of the 20 top streaming platforms around the globe? If the answer is yes, you’ve already benefitted from Conviva technology, helping the world’s leading streaming publishers deliver exceptional streaming experiences and grow their businesses.
Conviva is the only global streaming analytics platform for big data that collects, standardizes, and puts trillions of cross-screen, streaming data points in context, in real time. The Conviva platform provides comprehensive, continuous, census-level measurement through real-time, server side sessionization at unprecedented scale. If this sounds important, it is! We measure a global footprint of more than 500 million unique viewers in 180 countries watching 220 billion streams per year across 3 billion applications streaming on devices. With Conviva, customers get a unique level of actionability and scale from continuous streaming measurement insights and benchmarking across every stream, every screen, every second.
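For context, "sessionization" here generally means grouping a viewer's timestamped events into sessions whenever the gap between consecutive events exceeds an inactivity timeout. The following is a hedged, simplified Java sketch of that idea; the 30-minute gap and all names are illustrative assumptions, not Conviva's actual implementation.

```java
// Hypothetical illustration only -- not Conviva's implementation.
import java.util.ArrayList;
import java.util.List;

public class Sessionizer {
    static final long SESSION_GAP_MS = 30 * 60 * 1000; // assumed 30-minute inactivity gap

    // Splits a sorted list of event timestamps (ms) into sessions: a new
    // session starts whenever the gap since the previous event exceeds the timeout.
    static List<List<Long>> sessionize(List<Long> sortedTimestamps) {
        List<List<Long>> sessions = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        Long previous = null;
        for (long ts : sortedTimestamps) {
            if (previous != null && ts - previous > SESSION_GAP_MS) {
                sessions.add(current);
                current = new ArrayList<>();
            }
            current.add(ts);
            previous = ts;
        }
        if (!current.isEmpty()) {
            sessions.add(current);
        }
        return sessions;
    }

    public static void main(String[] args) {
        // Three events, with a roughly 2-hour gap before the last one -> two sessions.
        List<Long> events = List.of(0L, 60_000L, 7_260_000L);
        System.out.println(sessionize(events).size()); // prints 2
    }
}
```

A production system would do this continuously, per viewer, across distributed streams rather than over an in-memory sorted list.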
As Conviva is expanding, we are building products providing deep insights into end user experience for our customers.
Platform and TLB Team
The vision for the TLB team is to build data processing software that works on terabytes of streaming data in real time. Engineer the next-generation Spark-like system for in-memory computation of large time-series datasets: both the Spark-like backend infrastructure and a library-based programming model. Build a horizontally and vertically scalable system that analyses trillions of events per day within sub-second latencies. Utilize the latest big data technologies to build solutions for use cases across multiple verticals. Lead technology innovation and advancement that will have big business impact for years to come. Be part of a worldwide team building software using the latest technologies and the best of software development tools and processes.
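As a rough illustration of a "library-based programming model" for in-memory time-series computation, the sketch below buckets (timestamp, value) events into tumbling windows and computes a per-window average. The API shape and names are assumptions for illustration only, not the TLB team's actual design.

```java
// Hypothetical illustration only -- not a Conviva API.
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class WindowedAverage {

    // A single time-series point: an event timestamp (ms) and a metric value.
    record Event(long timestampMs, double value) {}

    // Groups events into tumbling windows of windowMs and averages each window.
    static Map<Long, Double> averageByWindow(Iterable<Event> events, long windowMs) {
        Map<Long, double[]> acc = new TreeMap<>(); // windowStart -> {sum, count}
        for (Event e : events) {
            long windowStart = (e.timestampMs() / windowMs) * windowMs;
            double[] sumAndCount = acc.computeIfAbsent(windowStart, k -> new double[2]);
            sumAndCount[0] += e.value();
            sumAndCount[1] += 1;
        }
        Map<Long, Double> averages = new TreeMap<>();
        acc.forEach((window, sc) -> averages.put(window, sc[0] / sc[1]));
        return averages;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event(1_000, 2.0), new Event(4_000, 4.0), // window starting at 0
                new Event(6_000, 10.0));                      // window starting at 5000
        System.out.println(averageByWindow(events, 5_000));   // prints {0=3.0, 5000=10.0}
    }
}
```

A real system would shard this work across nodes to keep latencies sub-second over trillions of events; the sketch only shows the core windowing arithmetic.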
What You’ll Do
This is an individual contributor position. Expectations are along the following lines:
- Design, build, and maintain the stream-processing and time-series analysis system at the heart of Conviva's products
- Be responsible for the architecture of the Conviva platform
- Build features, enhancements, and new services, and fix bugs, in Scala and Java on a Jenkins-based pipeline, deployed as Docker containers on Kubernetes
- Own the entire lifecycle of your microservice, including early specs, design, technology choice, development, unit testing, integration testing, documentation, deployment, troubleshooting, enhancements, etc.
- Lead a team to develop a feature or parts of the product
- Adhere to the Agile model of software development to plan, estimate, and ship per business priority
What you need to succeed
- 9+ years of work experience in software development of data processing products.
- Engineering degree in software or equivalent from a premier institute.
- Excellent knowledge of Computer Science fundamentals such as algorithms and data structures. Hands-on with functional programming and its concepts
- Excellent programming and debugging skills on the JVM. Proficient in writing code in Scala/Java/Rust/Haskell/Erlang that is reliable, maintainable, secure, and performant
- Experience with big data technologies like Spark, Flink, Kafka, Druid, HDFS, etc.
- Deep understanding of distributed systems concepts and scalability challenges including multi-threading, concurrency, sharding, partitioning, etc.
- Experience with or knowledge of the Akka/Lagom frameworks and/or stream processing technologies like RxJava or Project Reactor will be a big plus. Knowledge of design patterns such as event streaming, CQRS, and DDD to build large microservice architectures will be a big plus
- Excellent communication skills. Willingness to work under pressure. Hunger to learn and succeed. Comfortable with ambiguity. Comfortable with complexity
Underpinning the Conviva platform is a rich history of innovation. More than 60 patents represent award-winning technologies and standards, including first-of-its-kind innovations like time-state analytics and AI-automated data modeling, that surface actionable insights. By understanding real-world human experiences and having the ability to act within seconds of observation, our customers can solve business-critical issues and focus on growing their businesses ahead of the competition. Examples of the brands Conviva has helped fuel streaming growth for include DAZN, Disney+, HBO, Hulu, NBCUniversal, Paramount+, Peacock, Sky, Sling TV, Univision, and Warner Bros Discovery.
Privately held, Conviva is headquartered in Silicon Valley, California with offices and people around the globe. For more information, visit us at www.conviva.com. Join us to help extend our leadership position in big data streaming analytics to new audiences and markets!
Must Have:
- At least 6 years in web services development and a solid understanding of web technologies in Java
- Strong expertise in building and deploying applications on any of the major cloud platforms (GCP, AWS, Azure)
- Strong expertise with Docker/Kubernetes
- Working knowledge of building microservices and RESTful web services using any framework (Spring Boot, JAX-RS, Jersey)
- Strong expertise in writing JUnit tests and configuring them through Maven
- Good understanding of NoSQL databases and hands-on experience with at least one of them (HBase, Cassandra, BigQuery, MongoDB)
- Good understanding of message queues and hands-on experience with at least one of them (Kafka, RabbitMQ, Pub/Sub)
- Good understanding of Maven and Git
- Good understanding of Jenkins and CI/CD architecture
- Good understanding of programming algorithms and data structures
- Experience with BDD and Cucumber
Good to Have:
- Monitoring experience: Stackdriver, Prometheus, or an Azure equivalent
- Operational readiness: SLI/SLO, DevOps experience
- Service mesh, e.g. Istio
- Any OpenShift experience
- Knowledge of graph technology
- Experience with any of the big data technologies
- Experience with tools like WhiteSource, Veracode...
- Knowledge of Python and Angular
- Integration tests using a BDD framework (Cucumber)
- Good understanding of streaming technologies and processing engines (Dataflow, Flink, Spark)
- Knowledge of VSTS
Key Skills:
MongoDB, Django, Flask, NoSQL, JavaScript, Python, Git
Experience Required: 3-6 years
Job Description
We are looking for an RoR (Ruby on Rails) developer. If you're a creative problem solver who is eager to develop amazing products and hungry for a new adventure, a world-class workplace is waiting for you.
- Production experience in Ruby.
- A completed technical degree in Computer Science or any related fields.
- 3+ years of professional product development experience.
- Comfortable with microservices architectures and API-based design.
- You are a pragmatic programmer who understands what is needed to get things done.
- Problem solving and collaborative mindset.
- Experience working with DevOps (Docker, Kubernetes, Terraform).
- Experience with AWS (RDS, DDB, Lambda, CW, EC2, SQS, SNS, Cognito, Kinesis).
- Experience with performance improvements (caching techniques, SQL query optimization, performance monitoring and profiling).
- Deep understanding of service-oriented and microservices architectural patterns, troubleshooting methods and best practices.
- Take end-to-end ownership of the development and operation of complete features.
Job description
- Lead the design, development, implementation, and maintenance of applications and back-end services following a service-oriented architecture.
- Design, build, test, and maintain scalable APIs, services, and systems within the platform.
- Choose the right Data Structures, tools, and tech stacks and do high-level design with guidance.
- Build, develop, mentor, review code and coach junior team members.
- Extensive programming experience with cross-platform development: Java/Spring Boot, JavaScript/Node.js, Express.js, or Python
- Extensive knowledge of Elasticsearch, MongoDB or Cassandra, Redis, SQS, and data streaming (Spark, Flink, Kafka Streams, Storm, etc.).
- Well versed in Kafka. Understanding of cloud-native technologies such as Docker and Kubernetes; capable of covering the full development lifecycle, including CI/CD
- Experience in the use of source code management systems like Git and Bitbucket, and build tools like Ant, Maven, Gradle, or Make.
- Take great pride in code quality and developer productivity.
- Put a microservices architecture in place that paves the road for scalability, efficiency, observability, and availability.
- Build (and open source) data processing, storage and fetch systems at the petabyte scale with the lowest cost/GB while still responding in milliseconds at the 99th percentile.
- Write algorithms and services to influence personalisation and recommendations from a real-time recommendation engine, for both the home feed (to surface the most viral videos) and video e-commerce
- Build machine learning pipelines using Kinesis, Spark, Flink, TensorFlow, etc.
- Agile methodologies, sprint management, roadmaps, mentoring, documentation, software architecture
- Proven experience in handling large infrastructure and distributed systems
- Liaison with Product Management, DevOps, QA, Client, and other teams
Your Experience Across The Years in the Roles You've Played
Requirements:
- Have 7-9 years of total experience, with 2-3 years at a startup. Have a B.Tech, M.Tech, or equivalent academic qualification from a premier institute. Experience at product companies working on Internet-scale applications is preferred.
- Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions.
- Follow the Cloud Native Computing Foundation landscape, leveraging mature open source projects, including an understanding of containerisation/Kubernetes.
We Value Engineers Who Are:
- Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
- Obsessed with quality: your production code just works and scales linearly
- Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
- Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability
Chingari Benefits
The glory. Almost too much responsibility.
A fun-life balance
A ticket on our rocket ship to the moon.
We are looking for an experienced back-end developer to join our IT team! As a Back-end Developer, you will be responsible for the server-side web application logic as well as for integrating it with the front end.
- Be involved and participate in the overall application lifecycle
- Main focus on coding and debugging
- Collaborate with Front-end developers
- Define and communicate technical and design requirements
- Provide training, help and support to other team members
- Build high-quality reusable code that can be used in the future
- Develop functional and sustainable web applications with clean code
- Troubleshoot and debug applications
- Learn about new technologies
- Stay up to date with current best practices
- Conduct UI tests and optimize performance
- Manage cutting-edge technologies to improve applications
- Collaborate with multidisciplinary team of designers, developers and system administrators
- Follow new and emerging technologies
- 3-5 Years of experience in Backend Development.
- Must have experience in Python (Flask framework).
- Deep understanding of how RESTful APIs work.
- Familiar with various design and architectural patterns that can work at scale.
- Sound knowledge of NoSQL/SQL databases (MongoDB preferred).
- Strong experience with cloud technology, preferably AWS, GCP, or Azure.
- Core experience in developing complex backend systems.
- Communicating complex technical concepts to both technical and non-technical audiences.
- Passionate about application scalability, availability, reliability, and security.
Requirements:
- Academic degree (BE/MCA) with 3-10 years of experience in back-end development.
- Strong knowledge of OOP concepts, analysis, design, development, and unit testing
- Scala technologies: Akka, REST web services, SOAP, the Jackson JSON API, JUnit, Mockito, Maven
- Hands-on experience with the Play Framework
- Familiarity with Microservice Architecture
- Experience working with Apache Tomcat server or TomEE
- Experience working with databases such as MySQL, PostgreSQL, and Cassandra: writing custom queries and procedures, and designing schemas.
- Good to have: front-end experience (JavaScript, AngularJS/ReactJS)