Must have: Python, SQL, NoSQL, Redshift, Redis, MongoDB, Spark
Good to have: Scala, R, Java

Responsibilities:
- Strong knowledge of Spark and more than one of Scala/Java/Python
- Working knowledge of at least one SQL database (MySQL, Postgres, Redshift) and one NoSQL database (Cassandra, MongoDB, Redis, DynamoDB, etc.)
- Knowledge of Kafka and data streams is an added advantage
- Designing, building, installing, configuring, monitoring, and supporting batch/stream data-processing jobs and scripts

Experience:
- Data backup/extraction/processing/preparation methods and storage frameworks/structures (full as well as incremental)
- Pattern matching and data manipulation/filtering operators in shell/SQL/NoSQL is essential knowledge
- Data mining, data analytics, and prototyping data pipelines is an added advantage
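The "full as well as incremental" extraction requirement above can be sketched in a few lines. This is a minimal, hedged illustration, not any company's actual pipeline; the `extract` function, field names, and sample rows are all hypothetical, using a high-watermark timestamp so that only rows newer than the last successful run are pulled.

```python
from datetime import datetime

# Hypothetical sketch of watermark-based incremental extraction:
# a full extraction is the special case of a watermark of None.
def extract(rows, watermark=None, ts_key="updated_at"):
    """Return rows newer than `watermark`, plus the new watermark."""
    if watermark is None:
        batch = list(rows)                      # full extraction
    else:
        batch = [r for r in rows if r[ts_key] > watermark]
    new_watermark = max((r[ts_key] for r in batch), default=watermark)
    return batch, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2023, 1, 1)},
    {"id": 2, "updated_at": datetime(2023, 1, 5)},
]
full, wm = extract(rows)                        # full load: both rows
delta, _ = extract(
    rows + [{"id": 3, "updated_at": datetime(2023, 1, 9)}],
    watermark=wm,
)                                               # only the row newer than wm
```

In a real Spark or SQL pipeline the watermark would be persisted between runs and the filter pushed into the source query rather than applied in memory.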
Requirement: Backend Developer
Experience: 2-3 years
Location: Bangalore
Salary: 8 lakhs
Qualification: Any
Industry: Any
Gender: Any
Skills required: C#, SQL Server, Web API; MySQL knowledge is a plus but optional.
About Setu

“We build too many walls and not enough bridges” — Isaac Newton

India’s economic infrastructure needs a complete overhaul. Today our digital economy rides on rickety monolithic legacy systems built in the mid-90s. It takes months to do a simple bank integration and launch a fintech product. This inefficiency cannot continue. If our economy is to grow, we need killer API infrastructure for financial services that rivals the likes of Stripe and AWS. We are fixing this for good, by abstracting away the complexity of working with legacy financial infrastructure and replacing it with a developer platform that offers clean, high-performance API bundles to embed finance into any application, so that developers can do what they do best: build amazing products that improve people’s lives.

Importance of the role

We are a small, diverse, agile, and high-performance engineering team. To accelerate our and our partners’ efforts we need more people who can help us become better through peer learning, build new products and tools creatively, take ownership of things to be done, and decrease our time to market. A mission-critical role.

Description of the role

The backend engineer role involves owning the end-to-end development of products and product features in various capacities.
This includes:
● Architecting — figuring out all levels of designing and structuring systems, deciding on optimal levels of abstraction and future-proofing, patterns of orchestration of components, and finally planning its execution.
● Implementation — structure and write code like prose for others to read, design and adhere to common principles and patterns that make everyone better at their jobs, and author documentation about that code for various consumers, including fellow team members, business, and our ultimate consumers: developers!
● Orchestration and integration — interact with the larger engineering team to integrate your work into the ecosystem, help others build on the base that you provide, and help orchestrate your work into CI/CD, testing, QA, and automation pipelines.
● Mentorship — we all love to learn and grow as engineers and human beings, and we believe in synchronous growth as a team fueled by each individual's personal abilities, specializations, and interests. To enable this you'll be required to mentor others by sharing your expertise in various forms, and to be mentored by others in turn, maintaining a continuous learning culture.
● Product thinking — we believe that tech teams are the authority when it comes to building products. Instead of having product management as a separate function, we prefer to include it in the responsibilities of the builders themselves. This ensures minimal information loss and maximizes control, with the builder taking center stage in their work.

Why join us at Setu?
● We will spare no effort to ensure that Setu empowers you to do the most important and impactful work of your career
● We are a diverse set of professionals across functions who take immense pride in the work we do
● We are first-principles thinkers, obsessed with modularity and detail orientation
● We believe deeply in the growth mindset of constant learning and improvement.
And we have a library to prove it!
● We leverage all the latest tools available, open-source and otherwise. We don’t need to reinvent the wheel
● We have kick-ass benefits like comprehensive health insurance, extraordinary coffee, and a beautiful office with lots of solid wood and natural light

Join us if you want to be part of a company that’s building infrastructure that will directly impact financial inclusion and improve millions of lives. No cashbacks, no growth-hacks, no bullshit. Just an audacious mission, and an obsession with craftsmanship in code.
Products@DataWeave: We, the Products team at DataWeave, build data products that provide timely insights that are readily consumable and actionable, at scale. Our underpinnings are: scale, impact, engagement, and visibility. We help businesses take data-driven decisions every day. We also give them insights for long-term strategy. We are focused on creating value for our customers and helping them succeed.

How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale! Read more on Become a DataWeaver.

What do we offer?
- Opportunity to work on some of the most compelling data products that we are building for online retailers and brands.
- Ability to see the impact of your work and the value you are adding to our customers almost immediately.
- Opportunity to work on a variety of challenging problems and technologies to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses, trainings, and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast-paced growth opportunities.

Roles and Responsibilities:
● Build a low-latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
● Build robust RESTful APIs that serve data and insights to DataWeave and other products
● Design user interaction workflows on our products and integrate them with data APIs
● Help stabilize and scale our existing systems. Help design the next-generation systems.
● Scale our back-end data and analytics pipeline to handle increasingly large amounts of data.
● Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
● Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Be a tech thought leader. Add passion and vibrancy to the team. Push the envelope.

Skills and Requirements:
● 5-7 years of experience building and scaling APIs and web applications.
● Experience building and managing large-scale data/analytics systems.
● A strong grasp of CS fundamentals and excellent problem-solving abilities. A good understanding of software design principles and architectural best practices.
● Passion for writing code and experience coding in multiple languages, including at least one scripting language, preferably Python.
● The ability to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
● A self-starter — someone who thrives in fast-paced environments with minimal ‘management’.
● Experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
● Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
● Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
● Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
● It's a huge bonus if you have some personal projects (including open-source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.
Roles & Responsibilities
- Be a thought leader, able to define the technical architecture for the next-generation cloud-native Enterprise Archive application
- Be a hands-on contributor, able to do quick POCs and evaluate technologies
- Communicate well and articulate designs to all stakeholders across the Engineering, Product Management, and Leadership teams
- Influence, build consensus, and arrive at the best possible solutions for complex problems
- Be a quick learner, up to date with current technology trends; demonstrate courage, challenge the status quo, and collaborate well with cross-functional teams

Desired skills & experience
- Hands-on experience in at least one of Angular 2, React, Vue.js
- Good understanding of advanced web technologies like WebSocket, Server-Sent Events, etc.
- Solid understanding of Core Java fundamentals
- Good understanding of cloud-native design patterns
- Good understanding of, and preferably experience in, building progressive web apps (PWA)
- Experience as a full-stack developer using at least one NoSQL database, preferably MongoDB
- Good understanding of automated testing (TDD) and automated deployments (CI/CD)
- Good understanding of web security; experience developing secure web applications
- Exposure to observability tools (New Relic, Datadog)
- Familiarity with Agile methodologies, applied in spirit and letter

Nice to have
- Experience in Elasticsearch, Hazelcast, Storm
- Experience in Spring Boot and Spring Cloud
- Experience in Pivotal Cloud Foundry
Key Responsibilities
- Rewrite existing APIs in Node.js.
- Remodel the APIs into a microservices-based architecture.
- Implement a caching layer wherever possible.
- Optimize the APIs for high performance and scalability.
- Write unit tests for API testing.
- Automate the code testing and deployment process.

Skills Required
- At least 2 years of experience developing backends using Node.js — should be well versed in its asynchronous nature and event loop, and know its quirks and workarounds.
- Excellent hands-on experience using MySQL or any other SQL database.
- Good knowledge of MongoDB or any other NoSQL database.
- Good knowledge of Redis, its data types, and their use cases.
- Experience with graph databases like Neo4j.
- Deep expertise and hands-on experience with web applications and related technologies such as HTML, CSS and CSS preprocessors, and jQuery.
- Experience developing and deploying REST APIs.
- Good knowledge of unit testing and available test frameworks.
- Good understanding of advanced JS libraries and frameworks such as React.
- Ability to make changes in a backward-compatible manner.
- Experience with WebSockets, Service Workers, and Web Push Notifications.
- Familiarity with Node.js profiling tools.
- Strong with algorithms.
- Proficient understanding of code versioning tools such as Git.
- Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms.
- A fast learner and a go-getter, without any fear of trying out new things.

Preferences
- Experience building a large-scale social or location-based app.
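The "caching layer" responsibility above boils down to a simple pattern: serve repeated reads from a fast store with an expiry, and fall back to the database on a miss. As a hedged sketch (the posting would use Redis in production; this tiny in-process TTL cache and all its names are illustrative stand-ins):

```python
import time

# Illustrative in-process TTL cache, standing in for Redis GET/SET with expiry.
class TTLCache:
    def __init__(self):
        self._store = {}                       # key -> (value, expires_at)

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:     # lazy expiry, like Redis TTLs
            del self._store[key]
            return None
        return value

def cached_fetch(cache, key, loader, ttl=60):
    """Serve from cache; on a miss, call `loader` (e.g. a MySQL query) and cache it."""
    value = cache.get(key)
    if value is None:
        value = loader()
        cache.set(key, value, ttl)
    return value
```

With Redis the same shape applies, but its richer data types (hashes, sorted sets) let the cache hold structured values rather than opaque blobs.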
Web Engineer (Location: Bangalore)

We are looking for frontend web developers with 0-3 years of experience who thrive on solving problems and building a platform that is highly efficient, scalable, and user-friendly. We do not care about college names, grades, or past work experience. What we do care about is your attitude, your ability to get things done, and the urge to work hard.

About Glynk
Glynk is the most evolved SaaS platform for effective customer engagement, letting businesses of all sizes connect and engage their customers, partners, alumni, and employees like never before. It’s like Shopify, but for brand communities. We provide tools to build a secure, scalable, and engaged community that helps build brand loyalty and increase retention significantly.

About our culture
One for all. All for one. Teamwork is everything at Glynk. If you win, the team wins; if you lose, the team loses. If you're passionate about helping people build great communities or taking a leap in your career, we should talk.
Our strength is in execution. We don't just come up with great ideas; we also strive to execute them. You will always be in the know of how your work is going to impact the business, communities, or society. "Why?" is one thing we will never get tired of hearing.
You have our trust. Once you are a 'Glynker', we trust you to do amazing things. At Glynk, you have ownership over things that directly impact the business. You can move fast, and learn even faster. There is no inner circle. Everyone is encouraged to share information, knowledge, and ideas. Whenever in doubt, think like a leader, with a wider perspective.
KaiOS is a mobile operating system for smart feature phones that stormed the scene to become the 3rd-largest mobile OS globally (2nd in India, ahead of iOS). We are on 100M+ devices in 100+ countries. We recently closed a Series B round with Cathay Innovation, Google, and TCL.

Key skills:
- Knowledge of cloud, installation and maintenance of NoSQL databases, and standard HTTP technologies
- 4 to 7 years of experience in Linux administration and shell/Perl/Python/Ruby scripting
- Cloud knowledge (AWS, Docker)
- VMware vSphere
- Databases (MySQL, NoSQL, Cassandra, HBase)
- Standard HTTP technologies (Apache, Nginx, HAProxy)

Mandatory skills: Cassandra, Nginx, or HAProxy

Responsibilities:
- Monitor backend servers
- Installation of the released solutions to AWS or cloud
- Log analysis
- Fixing issues in the cloud environment

Requirements
Designation: DevOps Engineer
Location: Bangalore
Experience: 4 to 7 years
Notice period: 30 days or less
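The "log analysis" responsibility above pairs naturally with the Apache/Nginx/HAProxy skills listed: a common first task is tallying HTTP status codes from access logs. A minimal sketch, assuming combined-format log lines (the sample lines and regex are illustrative, not from any real server):

```python
import re
from collections import Counter

# Match the request segment in quotes, then capture the 3-digit status code.
LOG_RE = re.compile(r'"\S+ \S+ \S+" (\d{3}) ')

def status_counts(lines):
    """Count HTTP status codes across access-log lines."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

lines = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612',
    '1.2.3.4 - - [10/Oct/2023:13:55:40 +0000] "GET /missing HTTP/1.1" 404 153',
    '5.6.7.8 - - [10/Oct/2023:13:56:01 +0000] "POST /api HTTP/1.1" 200 87',
]
counts = status_counts(lines)   # counts['200'] == 2, counts['404'] == 1
```

In practice the same tally is often done with `awk`/`grep` on the box or shipped to a log pipeline, but the parsing idea is identical.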
Key Benefits:
- Competitive salary
- Regular and transparent appraisals based on performance
- Personal development budget of 80,000 per year to be used for courses, books, and conferences
- Office space at WeWork Salarpuria Symbiosis, which offers a lively atmosphere and lots of free events and amenities
- 25 days of paid holiday each year
- Flexible working hours
- Work-from-home Wednesdays
- Free lunches on work-from-office days
- Bi-weekly tech lunches for a fun learning experience
- Monthly social outings
- 2 weeks of personal project time each year
- Yearly cross-tribe pollinations (we have another engineering team in London)
- Wholesome development experience with a MacBook Pro and extra monitor(s)

Role Overview:
As a Senior Software Engineer, you will help drive Unibuddy's growth by helping build and enhance our products, which are aimed at aiding prospective universities and students, turning them into happy and successful users of Unibuddy. We're looking for an experienced and enthusiastic engineer to join our engineering team in Bangalore and help accelerate our next phase of growth. As we are a rapidly growing company, you will gain exposure to all areas of the platform, understanding the key success drivers in an early-stage engineering team and gaining invaluable experience for your future career. Success in this role will lead to opportunities for growth across the entire engineering team, with significant scope for future development.

Key Responsibilities:
- Quality and consistency: Consistently deliver high-quality code. Maintain code quality across the team by reviewing code written by other members of the team.
- Constant learning: Acquire expertise in the codebase to answer newer devs' questions on how to design features around it.
- Knowledge sharing: Have extended knowledge of design patterns to help newer devs find the appropriate coding solution to a technical problem.
- Mentorship: Organise knowledge-sharing sessions on finding that a domain is not covered by other members of the team.
- Setting standards: Propose a set of coding standards for the team.
- Going beyond: Explore new technologies and techniques the team could use.
- External point of contact: Be able to answer questions about the stack from other, non-technical teams.

Required Candidate Profile
Key Requirements
- Bachelor's degree in Computer Science or any other related field; equivalent working experience is also acceptable for the position
- Self-motivated, hard-working, coachable, and driven, with a strong entrepreneurial spirit
- Enjoys working in a collaborative atmosphere where new ideas are valued
- 4+ years of overall professional web development experience
- Communicates effectively in fluent English
- Experienced in ReactJS and React Native
- Experienced in Python, Flask, GraphQL, and MongoDB
- Experienced in industry-standard coding practices
- Experienced in mentoring junior software developers
- Experience working on projects involving test-driven software development
- Most important: Understands the value of our product and is driven to make it a success
We are looking for a Node.js developer who is proficient in writing APIs, working with data, using AWS, and capable of applying algorithms, mainly machine-learning-based, to solve problems and create or modify features for our students. Your primary focus will be the development of all server-side logic, definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the front end. You will also be responsible for integrating the front-end elements built by your co-workers into the application; therefore, a basic understanding of front-end technologies is necessary as well.

Responsibilities
- Integration of user-facing elements developed by front-end developers with server-side logic
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Implementation of security and data protection
- Use of algorithms to drive data analytics and features
- Ability to use AWS to solve scale issues

Apply only if you can attend a face-to-face interview in Bangalore.
Company Description
Founded in February 2019, NYC-based, venture-backed, and well-capitalized Liquidity Digital is establishing a new regulatory-compliant ecosystem for private capital formation and securities lifecycle management, enabling accessibility, transparency, efficiency, and liquidity on a global scale. We value agility, innovation, and the ability to take and incorporate constructive feedback. This position will be foundational in establishing our presence and building our team in Bangalore.

Position
We are looking for a great backend developer who is proficient in Node.js programming to create highly scalable applications to be deployed on the AWS platform. Your primary focus will be on developing back-end services that are going to be used by our frontend layer. These services will, in turn, synthesize the entire business and execution model by calling other services, fetching data from databases, and performing defined business logic. You will ensure that these services and the overall application are scalable, robust, and easy to maintain. You will coordinate with the rest of the team working on different layers of the infrastructure; therefore, a commitment to collaborative problem solving, sophisticated software design, and a quality product is important.

Activities
- Developing enterprise-grade web services applications using Node.js
- Designing APIs for the underlying business use cases
- Building reusable components and a library of services for future use
- Experience working on microservices architecture is required
- Managing and maintaining the platform where applications are going to be deployed, preferably AWS or any other PaaS
- Optimizing components for maximum performance under increasing load and for endurance
- Writing extensive unit tests and automated system test cases
- Writing an optimum level of technical documentation for future developers
- Providing constructive feedback to design and product teams
- We are flexible in terms of tech stack, but we prefer Node.js / Python (Django)
- Building enterprise-grade infrastructure and taking it to production
- Supervising a team of back-end devs, and overseeing production and quality control
- Writing engineering requirement documents with the Engineering Manager
- Discussing and laying out project specifications
- Reviewing system designs and quality tests with the Engineering Manager
- Fluent in English
- Attention to detail

Requirements
- More than 5 years of experience in software development
- Past experience on projects in the financial industry that went into production
- Problem-solving skills
- Experience working in an agile environment
- Extensive experience collaborating in an engineering team (Git)
- Aware of modern best practices and patterns in the chosen language/framework
- Experience in unit and integration testing
- Aware of security best practices (e.g., OWASP)
- Experience in relational DBs (MySQL/PostgreSQL) and NoSQL DBs (MongoDB)
- RESTful APIs
- Caching (Memcached)
- Authentication methods (OAuth, JWT, OpenID, tokens)
- Experience in microservice architecture, containerisation, and container orchestration (Docker, Kubernetes, CircleCI)
- Log management
- Experience in ML in Python is a plus

Nice to have
- Experience in fintech startups
- Building CI/CD pipelines

We’re looking for someone who has a demonstrable track record of being self-directed and resourceful, and who is a strong communicator and advocate for the front-end experience. This role provides a competitive salary and benefits.
About Aviso
Aviso is the AI compass that guides sales and go-to-market teams to close more deals, accelerate revenue growth, and find their True North. We are a global company with offices in Redwood City, San Francisco, Hyderabad, and Bangalore. Our customers are innovative leaders in their market. We are proud to count Dell, Honeywell, MongoDB, Glassdoor, Splunk, FireEye, and RingCentral as our customers, helping them drive revenue, achieve goals faster, and win in bold new frontiers. Aviso is backed by Storm Ventures, Shasta Ventures, Scale Venture Partners, and leading Silicon Valley technology investors.

What you will be doing:
● Aviso is in the process of building a highly scalable and highly performant upgrade to its industry-leading AI product. This requires a complete rethink of the base architecture of how data is stored and accessed using persistent databases like MongoDB, PostgreSQL, and Redshift.
● As part of this far-reaching engineering goal, this role will be primarily responsible for the design and development of said re-architecture, working with all other parts of the Engineering organization. Details below:
○ You will be responsible for designing the most optimal data schema architecture to handle billions of rows of information, accessible in real time, both to enable our Machine Learning team and to let other engineering teams present complex analytical functionality directly to customers.
○ You will be responsible for designing the most optimal physical database architecture to scale reliably with business growth, while optimizing cost.
● You will be working with our platform team to create the service-oriented architecture needed for a highly redundant, fail-safe, and responsive end-customer-facing service.
○ Solid working experience and understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, NAT Gateways, and Lambda, will be needed in order to achieve this.
● You will also own the definition and implementation of enterprise-grade security, using your skills around LDAP integration, security policies, and auditing in a Linux/AWS environment.
● Additionally, you will be responsible for designing the Continuous Integration and Continuous Delivery (CI/CD) platforms to enable all of engineering to deliver code faster with better quality.
○ In this, you will be working daily with the QA and engineering teams to enable unit tests and automation tests to increase code test coverage.
○ Using your experience and great understanding of DevOps automation - orchestration/configuration management and CI/CD tools (Puppet, Chef, Jenkins, etc.) - you will identify the right set of CI/CD tools needed to make this a success.

What you bring:
● Minimum 10-15 years of experience in database architecture and management
● A degree in Computer Science from a top university, or equivalent
● Strong and relevant experience building and maintaining a high-performance, high-volume SaaS solution
● Industry-leading experience in managing petabyte-scale databases
● Solid working experience and good understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, NAT Gateways, Lambda, and Redshift
● Great understanding of DevOps automation - orchestration/configuration management and CI/CD tools (Puppet, Chef, Jenkins, etc.) - is required
● Experience implementing role-based security, including LDAP integration, security policies, and auditing in a Linux/Hadoop/AWS environment
● Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios, or Datadog
● Networking: working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB, HAProxy), and high-availability architecture
● Strong experience in continuous integration, build automation, configuration management, code repositories, performance engineering, application monitoring, system monitoring, management, and deployment automation
● Strong knowledge of Unix systems engineering, with experience in Ubuntu or Red Hat Linux
● Programming: experience programming with Python and Unix scripts

Aviso offers
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad, and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world-class talent operations
● Comprehensive health insurance (medical) for you and your family
● Unlimited leave with manager approval, and a 3-month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support, including paid conferences, online courses, and certifications
● Rupees 2,500 credited to a Sodexo meal card every month
Who we are
Searce is a cloud, automation & analytics led business transformation company focussed on helping futurify businesses. We help our clients become successful by helping them reimagine ‘what's next’ and then enabling them to realize that ‘now’. We processify, saasify, innovify & futurify businesses by leveraging Cloud | Analytics | Automation | BPM.

What we believe
Best practices are overrated: implementing best practices can only make one ‘average’.
Honesty and transparency: we believe in the naked truth. We do what we tell and tell what we do.
Client partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.

How we work
It’s all about being happier first. And the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don’t carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
Positive: We are super positive about work and life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
Passionate: We are as passionate about the great vada-pao vendor across the street as about Tesla’s new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo.
Experimental: We encourage curiosity and making mistakes.
Responsible: Driven. Self-motivated. Self-governing teams. We own it.

We welcome *really unconventional* creative thinkers who can work in an agile, flexible environment. We are a flat organization with unlimited growth opportunities and small team sizes, wherein flexibility is a must, mistakes are encouraged, creativity is rewarded, and excitement is required.
Introduction
When was the last time you thought about rebuilding your smartphone charger using solar panels on your backpack, OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful, OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.

We are quite keen to meet you if:
- You eat, dream, sleep and play with cloud data stores and engineering your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people
- You like experimenting, taking risks and thinking big

3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build and recruit, and you will feel that this is not really ‘work’.
- This is NOT a position for people who like to spend more time talking than doing.

3 things this position IS about:
- Attention to detail matters.
- Roles, titles, and ego do not matter; getting things done matters; getting things done quicker and better matters the most.
- Are you passionate about learning new domains and architecting solutions that could save a company millions of dollars?

Roles and Responsibilities
- Drive and define database design and development of real-time, complex products.
- Strive for excellence in customer experience, technology, methodology, and execution.
- Define and own end-to-end architecture from the definition phase to the go-live phase.
- Define reusable components/frameworks, common schemas, standards, and tools to be used, and help bootstrap the engineering team.
- Performance tuning of application and database, and code optimizations.
- Define database strategy, database design and development standards and SDLC, database customization and extension patterns, database deployment and upgrade methods, database integration patterns, and data governance policies.
- Architect and develop database schemas, indexing strategies, views, and stored procedures for cloud applications.
- Assist in defining the scope and sizing of work; analyze and derive NFRs; participate in proof-of-concept development.
- Contribute to innovation and continuous enhancement of the platform.
- Define and implement a strategy for data services to be used by cloud and web-based applications.
- Improve the performance, availability, and scalability of the physical database, including the database access layer, database calls, and SQL statements.
- Design robust cloud management implementations, including orchestration and catalog capabilities.
- Architect and design distributed data processing solutions using big data technologies (an added advantage).
- Demonstrate thought leadership in cloud computing across multiple channels and become a trusted advisor to decision-makers.

Desired Skills
- Experience with data warehouse design, ETL (extraction, transformation & load), and architecting efficient software designs for DW platforms.
- Hands-on experience in the big data space (Hadoop stack such as M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop, etc.; knowledge of NoSQL stores is a plus).
- Knowledge of other transactional database management systems/open database systems and NoSQL databases (MongoDB, Cassandra, HBase, etc.) is a plus.
- Good knowledge of data management principles like data architecture, data governance, very large database design (VLDB), distributed database design, data replication, and high availability.
- Must have experience in designing large-scale, highly available, fault-tolerant OLTP data management systems.
- Solid knowledge of at least one industry-leading RDBMS such as Oracle/SQL Server/DB2/MySQL.
- Expertise in providing data architecture solutions and recommendations that are technology-neutral.
- Experience in architecture consulting engagements is a plus.
- Deep understanding of technical and functional designs for databases, data warehousing, reporting, and data mining.

Education & Experience
- Bachelor's in Engineering or Computer Science (preferably from a premier school); an advanced degree in Engineering, Mathematics, Computer Science, or Information Technology is a plus.
- Highly analytical aptitude and a strong ‘desire to deliver’ outlive those fancy degrees! More so if you have been a techie since age 12.
- 2-5 years of experience in database design and development.
- 0+ years of AWS, Google Cloud Platform, or Hadoop experience.
- Experience working in a hands-on, fast-paced, creative entrepreneurial environment in a cross-functional capacity.
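The schema-and-indexing work described in the Roles and Responsibilities list can be illustrated with a tiny, hedged sketch. SQLite is used here purely for portability (the posting targets Oracle/SQL Server/DB2/MySQL, where the same idea applies); the table, column, and index names are hypothetical. An index on the column used in the WHERE clause lets the query planner search the index instead of scanning the whole table.

```python
import sqlite3

# Minimal schema plus one indexing decision, in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        amount   REAL NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("alice", 10.0), ("bob", 25.0), ("alice", 7.5)],
)
# Indexing strategy: queries filter by customer, so index that column.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# EXPLAIN QUERY PLAN shows the planner using the index rather than a scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE customer = ?",
    ("alice",),
).fetchall()
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = ?", ("alice",)
).fetchone()[0]
```

At the "billions of rows" scale the postings mention, the same decision (which columns to index, at what write-amplification cost) is what an indexing strategy formalizes.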