As an engineer, you will help with the implementation and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including Spring, AWS Elasticsearch, Lambda, ECS, Redis, Spark, Kafka, etc.) and apply new technologies to solve problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with skilled and motivated engineers who are already contributing to building high-scale, highly available systems.
If you are looking for an opportunity to work on leading technologies, build product technology that caters to millions of customers with a focus on providing them the best experience, and relish broad ownership and diverse technologies, join our team today!
What You'll Do:
- Create detailed designs, develop features, and perform code reviews.
- Implement validation and support activities in line with architecture requirements.
- Help the team translate business requirements into R&D tasks and manage the R&D roadmap.
- Design, build, and implement the product; participate in requirements elicitation, validation of architecture, creation and review of high- and low-level designs, and assigning and reviewing tasks for product implementation.
- Work closely with product managers, UX designers, and end users, and integrate software components into a fully functional system.
- Own products/features end-to-end through all phases, from development to production.
- Ensure the developed features are scalable and highly available with no quality concerns.
- Work closely with senior engineers to refine designs and implementations.
- Manage and execute against project plans and delivery commitments.
- Assist directly and indirectly in the continual hiring and development of technical talent.
- Create and execute appropriate quality plans, project plans, test strategies, and processes for development activities in concert with business and project management efforts.
The ideal candidate is an engineer passionate about delivering experiences that delight customers and about creating robust solutions. They should be able to commit to and own deliveries end-to-end.
What You'll Need:
- A Bachelor's degree in Computer Science or related technical discipline.
- 2-3+ years of software development experience with proficiency in Java or equivalent object-oriented languages, coupled with design experience and SOA.
- Fluency with Java and Spring is preferred.
- Experience with JEE applications and frameworks such as Struts, Spring, MyBatis, Maven, and Gradle.
- Strong knowledge of data structures, algorithms, and CS fundamentals.
- Experience with at least one shell scripting language and with SQL databases (e.g., SQL Server, PostgreSQL), plus data modeling skills.
- Excellent analytical and reasoning skills
- Ability to learn new domains and deliver output
- Hands-on experience with core AWS services.
- Experience working with CI/CD tools (Jenkins, Spinnaker, Nexus, GitLab, TeamCity, GoCD, etc.)
- Expertise in at least one of the following:
  - Kafka, ZeroMQ, AWS SNS/SQS, or equivalent streaming technology
  - Distributed caches/in-memory data grids like Redis, Hazelcast, Ignite, or Memcached
  - Distributed column-store databases like Snowflake, Cassandra, or HBase
  - Spark, Flink, Beam, or equivalent streaming data processing frameworks
- Proficiency in writing and reviewing Python or other object-oriented languages is a plus.
- Experience building automations and CI/CD pipelines (integration, testing, deployment).
- Experience with Kubernetes would be a plus.
- Good understanding of working with distributed teams using Agile: Scrum, Kanban
- Strong interpersonal skills as well as excellent written and verbal communication skills
- Attention to detail and quality, and the ability to work well in and across teams
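As an illustration only (not part of the role description): the produce/consume contract behind brokers such as Kafka or SQS, listed in the requirements above, can be sketched with Python's standard library. The topic and event names here are made up, and real brokers add durability, partitioning, and network transport on top of this basic pattern.

```python
import queue
import threading

# A toy in-process "topic": real brokers (Kafka, SQS, ZeroMQ) differ in API
# and guarantees, but the produce/consume contract is conceptually similar.
topic = queue.Queue()
SENTINEL = object()  # signals the consumer to stop

def producer(events):
    for event in events:
        topic.put(event)       # roughly analogous to producer.send(topic, event)
    topic.put(SENTINEL)

def consumer(results):
    while True:
        event = topic.get()    # roughly analogous to polling the broker
        if event is SENTINEL:
            break
        results.append(event.upper())  # stand-in for real processing

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer(["order_created", "order_paid"])
t.join()
print(results)
```

The consumer runs concurrently with the producer, and the queue provides the thread-safe hand-off that a broker would provide across processes.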
About Vcriate Internet Services Private Limited
The Sr. Analytics Engineer provides technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), as well as automation and testing strategies, translating business needs into technical solutions while adhering to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else that is data-related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design.
- Experience with Spark optimization, tuning, and resource allocation.
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL or analytical databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus
Key Responsibilities:
- Rewrite existing APIs in Node.js.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible.
- Optimize the API for high performance and scalability.
- Write unit tests for API Testing.
- Automate the code testing and deployment process.
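As a rough illustration of the caching-layer responsibility above, here is a minimal in-process TTL cache. A real deployment would use a shared out-of-process store such as Redis; the function and field names here are hypothetical.

```python
import functools
import time

def ttl_cache(ttl_seconds):
    """Memoize a function's results for ttl_seconds.

    A minimal stand-in for a real caching layer: same idea as Redis-style
    caching (key by arguments, expire stale entries), minus the shared store.
    """
    def decorator(fn):
        store = {}  # key -> (expires_at, value)
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]          # fresh cache hit
            value = fn(*args)          # miss or expired: recompute
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=60)
def expensive_lookup(user_id):
    # Hypothetical slow call (e.g. a database query) we want to avoid repeating.
    global calls
    calls += 1
    return {"user_id": user_id, "plan": "pro"}

expensive_lookup(42)
expensive_lookup(42)  # second call is served from the cache
```

Within the TTL window, repeated calls with the same arguments hit the cache, so the underlying lookup runs only once.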
Skills Required:
- At least 3 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL Database.
- Good knowledge of MongoDB or any other NoSQL Database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases such as Neo4j; familiarity with GraphQL APIs is a plus.
- Experience developing and deploying REST APIs.
- Good knowledge of Unit Testing and available Test Frameworks.
- Good understanding of advanced JS libraries and frameworks.
- Experience with Web sockets, Service Workers, and Web Push Notifications.
- Familiar with NodeJS profiling tools.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- Should be a fast learner and a go-getter, without any fear of trying out new things.
Preferred:
- Experience building a large-scale social or location-based app.
1. Should have worked in Agile methodology and microservices architecture
2. Should have 7+ years of experience in Python and the Django framework
3. Should have good knowledge of DRF (Django REST Framework)
4. Should have knowledge of user auth (JWT, OAuth2), API auth, access control lists, etc.
5. Should have working experience with session management in Django
6. Should have expertise in Django's MVC pattern and the use of templates in the frontend
7. Should have working experience with PostgreSQL
8. Should have working experience with the RabbitMQ message broker and Celery
9. Good to have: JavaScript implementation knowledge in Django templates
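To illustrate the JWT-based auth mentioned above, here is a conceptual HS256 sign/verify sketch in pure Python. Production code should rely on a maintained library (e.g. PyJWT or a DRF auth package); the secret and claims below are made up for the example.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

secret = b"demo-secret"        # illustrative only; never hard-code secrets
token = sign_jwt({"sub": "user-42"}, secret)
claims = verify_jwt(token, secret)
```

The token is `header.payload.signature`; verification recomputes the HMAC over the first two segments and compares it in constant time before trusting the claims.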
WHO WILL LOVE THIS JOB?
• Attracted to creativity, innovation, and eagerness to learn
• Alignment to a fast-paced organization and its short-term and long-term goals
• An engaging, open, genuine personality that naturally encourages interaction with individuals at all levels
• Strong value system and sense of ethics
• Absolute dedication to premium quality
• Wants to build a strong core product team capable of developing solutions for complex, industry-first problems
• Builds a balance of experience, knowledge, and new learning
ROLES AND RESPONSIBILITIES
• Driving the success of the software engineering team at Datamotive.
• Collaborating with senior and peer engineers to prioritize and deliver features on the roadmap.
• Build a strong development team with a focus on building optimized and usable solutions.
• Research, design, and develop distributed solutions to handle workload mobility across multi- and hybrid clouds.
• Assist in identifying, researching, and designing newer features and cloud platform support in areas of disaster recovery, data protection, workload migration, etc.
• Assist in building product roadmap.
• Conduct pilot tests to assess the functionality of newly developed programs.
• Interface directly with customers for product introductions, knowledge transfer, solutioning, bug triaging, etc.
• Assist customers by giving product demos, conducting POCs, training, etc.
• Manage Datamotive infrastructure; bring innovative automation for optimizing infrastructure usage through monitoring and scripting.
• Design test environments to simulate customer behaviours and use cases in VMware vSphere, AWS, GCP, and Azure clouds.
• Help write technical documentation and generate marketing content like blogs, webinars, seminars, etc.
TECHNICAL SKILLS
• Experience in software development with relevant domain understanding of Data Protection, Disaster Recovery, Ransomware Recovery.
• A strong understanding and demonstrable experience with at least one of the major public cloud platforms (GCP, AWS, Azure or VMware)
• A strong understanding and experience of designing and developing architecture of complex, distributed systems.
• Insights into development of client-server SaaS applications with good breadth across networking, storage, micro-services, and other web technologies.
• Experience building and leading strong development teams with a systems product development background.
• Programming knowledge in any of Go, C, C++, Python, or shell scripting.
• Should be a computer science graduate with strong fundamentals & problem-solving abilities.
• Good understanding of virtualization, storage and cloud platforms like VMware, AWS, GCP, Azure and/or Kubernetes will be preferable
· Core responsibilities include analyzing business requirements and designs for accuracy and completeness, and developing and maintaining the relevant product.
· BlueYonder is seeking a Senior/Principal Architect in the Data Services department (under the Luminate Platform) to act as one of the key technology leaders to build and manage BlueYonder's technology assets in the Data Platform and Services.
· This individual will act as a trusted technical advisor and strategic thought leader to the Data Services department. The successful candidate will have the opportunity to lead, participate, guide, and mentor other people in the team on architecture and design in a hands-on manner. You are responsible for technical direction of Data Platform. This position reports to the Global Head, Data Services and will be based in Bangalore, India.
· Core responsibilities to include Architecting and designing (along with counterparts and distinguished Architects) a ground up cloud native (we use Azure) SaaS product in Order management and micro-fulfillment
· The team currently comprises 60+ global associates across the US, India (COE), and the UK, and is expected to grow rapidly. The incumbent will need leadership qualities to mentor junior and mid-level software associates on our team. This person will lead the Data Platform architecture (streaming and bulk) with Snowflake, Elasticsearch, and other tools.
Our current technical environment:
· Software: Java, Spring Boot, Gradle, Git, Hibernate, REST API, OAuth, Snowflake
· Application Architecture: scalable, resilient, event-driven, secure multi-tenant microservices architecture
· Cloud Architecture: MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hub, Azure AD)
· Frameworks/Others: Kubernetes, Kafka, Elasticsearch, Spark, NoSQL, RDBMS, Spring Boot, Gradle, Git, Ignite
Developed in formal collaboration with the University of Cambridge in May 2000, HeyMath! is an Ed-Tech company whose mission is to Raise the Game in Maths for school systems around the world. We do this using technology to deliver engaging teaching methodologies and personalised learning paths for students. HeyMath! has been successfully adopted by CBSE schools since 2004, with positive outcomes for the entire ecosystem.
Check us out at www.heymath.com
We plan to work mainly from home in 2021 and the virtual office atmosphere is collegiate, informal and friendly, with small high-impact teams making a difference to customers.
What we are looking for:
Experience in building and re-engineering cloud based solutions on AWS.
Strong knowledge of Object-Oriented Programming (OOP) and design patterns is a must.
Hands-on development on Spring MVC framework.
Experience working on Java 8 or above.
Must have very good knowledge of RDBMS such as MySQL and performance tuning of the same.
Exposure to server-side and client-side caching mechanisms.
Ability to debug the applications and provide instant workable solutions.
Experience with Elasticsearch, Kafka, or Kubernetes (or all three) is nice to have.
Neokred is a FinTech company based in Bangalore and an ISO 9001 | 27001 & 20000-1 and PCI DSS certified firm in information and data security. The company builds consumer tech for the financial infrastructure stack, providing curated versions of embedded banking in the payment ecosystem. We've created a platform which enables corporates, banks, FinTechs, retail companies, and start-ups to launch their own banking services or financial products, such as issuing co-branded cards, facilitating lending, and offering virtual bank accounts and KYC for their customers or employees, with the help of a low-code, plug-and-play technology stack.
BRIEF DESCRIPTION OF THE ROLE:
We are looking for an analytical, results-driven Senior Java Developer who will use their understanding of programming languages and tools to build and analyse code, formulate more efficient processes, solve problems, and create a more seamless experience for users.
Your KRAs will include the following:
- You will design, build, and own APIs and Services, which will be the core of the product.
- You will participate in continuing education and training to remain current on best practices, learn new programming languages and better assist other team members.
- You will be part of developing ideas for new programs, products, or features by monitoring industry developments and trends.
- You will take the lead on projects, compiling and analysing data, processes, and code to troubleshoot problems and identify areas of improvement.
YOU SHOULD POSSESS:
- Minimum 4+ years of experience with a proficient understanding of Java, Hibernate, and Spring Boot.
- Fluency in Java and operating systems may be required, plus experience with databases such as MySQL or PostgreSQL.
- Proficiency with Spring Boot, Spring Security, and Hibernate.
- Strong understanding of Computer Science Fundamentals, Data Structures and Algorithms, SOLID Design Principles and REST Patterns.
- Focus on efficiency, user experience, and process improvement.
- Excellent project and time management skills.
- Strong problem solving and communication skills.
- Ability to work independently or with a group.
About the role
Checking quality is one of the most important tasks at Anakin. Our clients price their products based on our data, and minor errors on our end can cost them millions of dollars. You would work with multiple tools and with people across various departments to ensure the accuracy of the data being crawled. You would set up manual and automated processes and make sure they run to ensure the highest possible data quality.
You are the engineer other engineers can count on. You embrace every problem with enthusiasm. You remove hurdles, and you are a self-starter and a team player. You have the hunger to venture into unknown areas and make the system work.
Your Responsibilities would be to:
- Understand customer web scraping and data requirements; translate these into test approaches that include exploratory manual/visual testing and any additional automated tests deemed appropriate
- Take ownership of the end-to-end QA process in newly-started projects
- Draw conclusions about data quality by producing basic descriptive statistics, summaries, and visualisations
- Proactively suggest and take ownership of improvements to QA processes and methodologies by employing other technologies and tools, including but not limited to: browser add-ons, Excel add-ons, UI-based test automation tools etc.
- Ensure that project requirements are testable; work with project managers and/or clients to clarify ambiguities before QA begins
- Drive innovation and advanced validation and analytics techniques to ensure data quality for Anakin's customers
- Optimize data quality codebases and systems to monitor the Anakin family of app crawlers
- Configure and optimize the automated and manual testing and deployment systems used to check the quality of billions of data points from 1000+ crawlers across the company
- Analyze data and bugs that require in-depth investigations
- Interface directly with external customers including managing relationships and steering requirements
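As a sketch of the descriptive-statistics QA step above, this standard-library Python snippet flags a suspect scraped price. The dataset and the 2-standard-deviation threshold are illustrative assumptions, not Anakin's actual pipeline.

```python
import statistics

# Hypothetical sample of scraped prices for one product across retailers.
prices = [19.99, 20.49, 19.99, 21.00, 19.50, 199.0]  # the last value looks suspect

mean = statistics.mean(prices)
median = statistics.median(prices)
stdev = statistics.stdev(prices)

# A crude outlier screen: flag points more than 2 standard deviations from
# the mean. Real pipelines would add robust statistics (e.g. median absolute
# deviation), since one extreme value inflates both the mean and the stdev.
outliers = [p for p in prices if abs(p - mean) > 2 * stdev]
print(f"mean={mean:.2f} median={median:.2f} flagged={outliers}")
```

Here the 199.0 entry is flagged while the clustered ~20 values pass, which is the kind of summary a QA engineer can produce before any manual inspection.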
Basic Qualifications:
- 2+ years of experience as a backend or a full-stack software engineer
- Web scraping experience with Python or Node.js
- 2+ years of experience with AWS services such as EC2, S3, Lambda, etc.
- Should have managed a team of software engineers
- Should be paranoid about data quality
Preferred Skills and Experience:
- Deep experience with network debugging across all OSI layers (Wireshark)
- Knowledge of networks or/and cybersecurity
- Broad understanding of the landscape of software engineering design patterns and principles
- Ability to work quickly and accurately in a high-pressure environment, removing bugs at run time within minutes
- Excellent communicator, both written and verbal
Additional Requirements:
- Must be available to work extended hours and weekends when needed to meet critical deadlines
- Must have an aversion to politics and BS; should let their work speak for itself.
- Must be comfortable with uncertainty. In almost all the cases, your job will be to figure it out.
- Must not be bound to a comfort zone. Often, you will need to challenge yourself to go above and beyond.
ABOUT THE JOB
We are looking for a Senior Software Engineer to join our team. We believe in giving engineers responsibility, not tasks. Our goal is to motivate and challenge people to do their best work. To do that, we have a very fluid structure and give people flexibility to work on projects that they enjoy the most. This develops more capable engineers, and keeps everyone engaged and happy.
Responsibilities
- Design, develop, test, deploy, maintain and improve the software.
- Manage individual projects priorities, deadlines and deliverables with your technical expertise.
- Identify and resolve bottlenecks within our software stack.
ABOUT YOU
Rubrik Software Engineers are self-starters, driven, and can manage themselves. Bottom line, if you have a limitless drive and like to win, we want to talk to you - come make history!
- Bachelor’s or Master’s degree or equivalent in computer science or related field
- 8+ years of relevant work experience.
- Proficiency in one or more general purpose programming languages like Java, C/C++, Scala, Python
- Experience with Google Cloud Platform/AWS/Azure or other public cloud technologies is a plus
- Experience working with two or more from the following: Unix/Linux environments, Windows environments, distributed systems, networking, developing large software systems, file systems, storage systems, hypervisors, databases and/or security software development.
ABOUT THE TEAM
Galactus team owns the end to end development of Rubrik’s data management suite for commercial public clouds - AWS, Azure and GCP. Our objective is to bring the simplicity of Rubrik’s on-prem data protection and management offerings to our customers in the cloud through a solution designed from ground up to be highly scalable, available & secure and yet optimized to minimize our customer’s cloud costs. We achieve this by taking a cloud-first approach to design - leveraging technologies built for the scale, elasticity and automation needs of the cloud; and deploying on our brand new SaaS platform called Polaris.
Recently we have:
- Built policy based backup and recovery for Virtual Machines in AWS, Azure and GCP and managed databases in AWS.
- Built features like granular file recovery leveraging managed Kubernetes Service in AWS for elastic compute
ABOUT RUBRIK
Rubrik is one of the fastest-growing companies in Silicon Valley, revolutionising data protection, and management in the emerging multi-cloud world. We are the leader in cloud data management, delivering a single platform to manage and protect data in the cloud, at the edge, and on-premises. Enterprises choose Rubrik to simplify backup and recovery, accelerate cloud adoption, enable automation at scale, and secure against cyberthreats. We’ve been recognized as a Forbes Cloud 100 Company two years in a row and as a LinkedIn Top 10 startup.
Rubrik provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Rubrik complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Requires a bachelor's degree in area of specialty and experience in the field or in a related area. Familiar with standard concepts, practices, and procedures within a particular field. Relies on experience and judgment to plan and accomplish goals. Performs a variety of tasks. A degree of creativity and latitude is required. Typically reports to a supervisor or manager.
Designs, develops, and implements web-based Java applications to support business requirements. Follows approved life cycle methodologies, creates design documents, and performs program coding and testing. Resolves technical issues through debugging, research, and investigation.
Additional Job Details:
Strong in Java, Spring, Spring Boot, REST and developing MicroServices.
Knowledge of or experience with Cassandra preferred
Knowledge of or experience with Kafka (good to have, but not a must)
Good to know:
Reporting tools like Splunk/Grafana
Protobuf
Python/Ruby