
Blackbuck - Software Development Engineer II - Data Structures/Algorithms

- Key Skills:
- Expert proficiency in any one of the following programming languages: Node.js, PHP, or Golang
- Expert grasp of computer science fundamentals such as data structures, algorithms, and time complexity
- Strong experience with microservices, REST APIs, Git source control, CI/CD, and other current technology trends
- Strong proficiency in system design and database design
- Strong knowledge of design patterns and software development best practices
- Good exposure to working with an open source stack, or in the e-commerce or fintech domains.
Responsibilities for Staff Engineer role:
- Experience in Java along with Spring Boot, microservices, and RDBMS
- Experience required: 6 to 8 years.

Preferred Skills:
- We particularly emphasize Spring Boot (2+ years, or 1+ if the candidate is especially strong)
- Using Redis as a caching technology with Spring Boot would be a strong plus (see the sketch after this list)
- Using Redisson (a Java client library that is easily configured with Spring Boot) would be a strong plus
- Knowledge of event-based messaging systems (Amazon SNS, Amazon MQ, or Kafka in AWS)
- Data cleaning tools and techniques for CSV and Excel
- Strong knowledge of Spring Boot dependency injection and configuration
- Experience with APIs for popular e-commerce platforms (Magento, Shopify, BigCommerce, etc.)
- SDLC (software development lifecycle) tools in the context of AWS (i.e., tools classified under DevOps)
- Experience managing AWS EC2 VM instances and using AWS managed services (such as S3, RDS/MySQL, VPC/networking, and Lambda)
- Performance analysis tools (code profiling) on the Java VM, particularly with Spring Boot
- Experience in the development of workflow or business process applications

Nice to Have:
- Experience with Cassandra or MongoDB with Spring Boot
- Horizontal scaling with Spring Boot (considerations when running multiple Spring Boot instances)
- Experience with placing Spring Boot applications in Docker/Kubernetes container ecosystems (especially in AWS)
- Search technologies such as Lucene/Solr
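
A minimal sketch of the Redis-with-Spring-Boot caching setup emphasized above, assuming the redisson-spring-boot-starter and spring-boot-starter-cache dependencies are on the classpath; the class names, the "products" cache name, and the Redis address are illustrative, not taken from the job description:

```java
// Illustrative sketch only: Redis caching in Spring Boot via Redisson.
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.spring.cache.RedissonSpringCacheManager;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
@EnableCaching
class CacheConfig {

    @Bean(destroyMethod = "shutdown")
    RedissonClient redissonClient() {
        // Hypothetical single-node Redis address; clustered setups differ.
        Config config = new Config();
        config.useSingleServer().setAddress("redis://localhost:6379");
        return Redisson.create(config);
    }

    @Bean
    CacheManager cacheManager(RedissonClient redissonClient) {
        // Entries created by @Cacheable are stored in Redis, so they are
        // shared by every running instance of the application.
        return new RedissonSpringCacheManager(redissonClient);
    }
}

@Service
class ProductService {

    // First call runs the method body; later calls with the same id are
    // served from the "products" cache in Redis.
    @Cacheable("products")
    public String findProductName(long id) {
        return "product-" + id; // stand-in for an expensive database lookup
    }
}
```

Because the cached entries live in Redis rather than in each JVM, they stay consistent when the application is scaled horizontally, which also bears on the horizontal-scaling point in the nice-to-have list.
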
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the Data Lake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides many opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low-latency access to distributed storage, auto-scaling, and self-healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team in solving complex and unknown problems.
- Solve technical problems and customer issues with deep expertise.
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure.
- Mentor other team members on high-quality design and implementation.
- Collaborate with Product Management to deliver on customer requirements and innovation.
- Collaborate with Support and field teams to ensure that customers are successful with Dremio.
Requirements
- B.S./M.S./equivalent degree in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems (see the sketch after this list)
- 5+ years of experience developing complex and scalable distributed systems, and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using the latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed storage systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems, and to mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
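
As a generic illustration of the multi-threaded and asynchronous programming models named in the requirements above (this is not Dremio code), the sketch below fans a sub-query out across shards with CompletableFuture and combines the partial results; the shard names and the row-count stub are hypothetical:

```java
// Generic sketch: asynchronous fan-out/fan-in with CompletableFuture.
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class FanOutQuery {

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<String> shards = List.of("shard-a", "shard-b", "shard-c");

        // Issue one asynchronous sub-query per shard.
        List<CompletableFuture<Long>> futures = shards.stream()
                .map(s -> CompletableFuture.supplyAsync(() -> countRows(s), pool))
                .collect(Collectors.toList());

        // Combine the partial results once every sub-query has finished.
        long total = futures.stream()
                .mapToLong(CompletableFuture::join)
                .sum();

        System.out.println("total rows = " + total);
        pool.shutdown();
    }

    // Stand-in for a remote scan against distributed storage.
    private static long countRows(String shard) {
        return shard.length() * 1000L;
    }
}
```
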
We are looking for an experienced engineer with superb technical skills. You will primarily be responsible for architecting and building large-scale data pipelines that deliver AI and analytical solutions to our customers. The right candidate will enthusiastically take ownership of developing and managing continuously improving, robust, scalable software solutions. The successful candidate will be curious, creative, ambitious, self-motivated, flexible, and have a bias toward action. As part of the early engineering team, you will have a chance to make a measurable impact on the future of Thinkdeeply, along with a significant amount of responsibility.
Although your primary responsibilities will be around back-end work, we prize individuals who are willing to step in and contribute to other areas, including automation, tooling, and management applications. Experience with, or a desire to learn, machine learning is a plus.
Experience: 12+ years
Location: Hyderabad
Skills
Bachelor's/Master's/Ph.D. in CS, or equivalent industry experience
10+ years of industry experience with Java-related frameworks such as Spring and/or Typesafe
Experience with scripting languages; Python experience is highly desirable (5+ years of industry experience with Python)
Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
Demonstrated expertise in building and shipping cloud-native applications
Experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka, the ELK Stack, or Fluentd (see the sketch after this list)
Experience in API development using Swagger
Strong expertise with containerization technologies, including Kubernetes and Docker Compose
Experience with cloud platform services such as AWS, Azure, or GCP
Experience implementing automated testing platforms and unit tests
Proficient understanding of code versioning tools such as Git
Familiarity with continuous integration tools such as Jenkins
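
A minimal sketch of the consuming side of such a streaming pipeline, using the plain Apache kafka-clients API; the broker address, topic, and group id are hypothetical, and a production pipeline would add error handling, offset management, and metrics:

```java
// Minimal sketch: consuming one topic of a Kafka-based pipeline.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PipelineConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-workers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Stand-in for the transform/load stage of the pipeline.
                    System.out.printf("offset=%d value=%s%n",
                            record.offset(), record.value());
                }
            }
        }
    }
}
```
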
Responsibilities
Architect, design, and implement large-scale data processing pipelines
Design and implement APIs
Assist with DevOps operations
Identify performance bottlenecks and bugs, and devise solutions to these problems
Help maintain code quality, organization, and documentation
Communicate with stakeholders regarding various aspects of the solution.
Mentor team members on best practices


About Aviso:
Aviso is the AI Compass that guides Sales and Go-to-Market teams to close more deals, accelerate revenue growth, and find their True North.
We are a global company with offices in Redwood City, San Francisco, Hyderabad, and Bangalore. Our customers are innovative leaders in their market. We are proud to count Dell, Honeywell, MongoDB, Glassdoor, Splunk, FireEye, and RingCentral as our customers, helping them drive revenue, achieve goals faster, and win in bold new frontiers.
Aviso is backed by Storm Ventures, Shasta Ventures, Scale Venture Partners, and leading Silicon Valley technology investors.
What you will be doing:
● Write effective, scalable, extensible and testable code
● Develop back-end components to improve responsiveness and overall performance of the application.
● Develop the database layer with optimized queries
● Detect bottlenecks in legacy code and provide feasible solutions to improve it.
● Implement security and data protection solutions
● Coordinate with internal teams to understand user requirements and provide technical solutions
● Manage individual project priorities, deadlines, and deliverables.
What you bring:
● Bachelor's Degree or equivalent in Computer Science with good academic record
● 3+ years of hands-on experience in Python and Django
● Experience in the development of highly scalable applications
● A high degree of motivation to learn new technologies, tools and libraries
● Experience developing in Unix/Linux environment
● Experience in the development of REST APIs
● Basic understanding of databases such as MongoDB and Postgres
● Good knowledge of Data Structures and Algorithms
● Understanding of the deployment aspects of a feature after developing it
● Excellent interpersonal skills, written and verbal communication skills, and professionalism
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval, and a 3-month paid sabbatical after 3 years of service
● CEO moonshots projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rs. 2,500 will be credited to your Sodexo meal card



Job Description:
Requirements:
- BS in Computer Science, Computer Engineering, Electrical Engineering, Mathematics, or a closely related technical field, with 3+ years of experience programming in at least one of the following languages: Java, C++, C#, Python, Go, or Perl;
OR
- MS in Computer Science, Computer Engineering, Electrical Engineering, Mathematics, or a closely related technical field, with 2+ years of experience programming in at least one of the following languages: Java, C++, C#, Python, Go, or Perl.
ALSO
- Minimum 3 years of experience building applications using at least one of the following: web application technologies such as HTML, CSS, or JavaScript; OR databases such as MySQL, MongoDB, or a similar system; OR a collection of systems connected and communicating via a network connection
- Minimum 1 year of experience mentoring junior engineers
- Significant experience with large scale, high-performance systems



