- Sr. Solution Architect
- Job Location – Bangalore
- Need candidates who can join in 15 days or less.
- Overall, 12-15 years of experience.
Looking for this tech stack in a Sr. Solution Architect (who also has a Delivery Manager background): someone with strong business and IT stakeholder collaboration and negotiation skills, who can provide thought leadership, collaborate on the development of product roadmaps, influence decisions, and negotiate effectively with business and IT stakeholders.
- Building data pipelines using Azure data tools and services such as Azure Data Factory, Azure Databricks, Azure Functions, Spark, Azure Blob/ADLS, Azure SQL, and Snowflake (a minimal pipeline sketch follows this list)
- Administration of cloud infrastructure in public clouds such as Azure
- Monitoring cloud infrastructure, applications, big data pipelines and ETL workflows
- Managing outages, customer escalations, crisis management, and other similar circumstances.
- Understanding of DevOps tools and environments like Azure DevOps, Jenkins, Git, Ansible, Terraform.
- SQL, Spark SQL, Python, PySpark
- Familiarity with agile software delivery methodologies
- Proven experience collaborating with global Product Team members, including Business Stakeholders located in NA
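As an illustration only (not taken from the job description), here is a minimal PySpark sketch of the kind of pipeline described in the first bullet above: read raw events from ADLS, clean them, and land a curated Parquet layer. The storage account, container, and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch: read raw events from ADLS, clean them, and write a
# curated zone. Storage account, container, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-curation-sketch").getOrCreate()

raw_path = "abfss://raw@examplestorageacct.dfs.core.windows.net/events/"          # hypothetical
curated_path = "abfss://curated@examplestorageacct.dfs.core.windows.net/events/"  # hypothetical

events = spark.read.json(raw_path)

cleaned = (
    events
    .dropDuplicates(["event_id"])                      # hypothetical key column
    .withColumn("event_date", F.to_date("event_ts"))   # hypothetical timestamp column
    .filter(F.col("event_date").isNotNull())
)

cleaned.write.mode("overwrite").partitionBy("event_date").parquet(curated_path)
```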
Similar jobs
GroundTruth is the leading location-based marketing and advertising technology company. Sitting at the convergence of offline and online data, GroundTruth delivers a unique data set called “visitation data,” which allows brands, agencies, SMBs, and nonprofits to drive high-performing business outcomes (ROI). GroundTruth activates this data through a suite of performance products and services via their self-serve advertising platform, through managed services, or tailored partnerships. GroundTruth has built proprietary cleansing processes that combine their Blueprint’s contextual mapping technology, owned & operated properties, along with 3rd party mobile location data, together yielding over 30 Billion visits annually.
Please visit: https://go.groundtruth.com/rs/115-ZBZ-379/images/GroundTruth-Technology-Video.mp4
Position: Senior Software Engineer – C++
Location: Gurgaon/Remote
About the advertising service architecture team:
The advertising service architecture team is responsible for the federation of services that support our real-time advertising exchange ecosystem. We currently process ~500k transactions per second on AWS infrastructure, typically clearing bids in under 25ms. This team focuses on developing and supporting real-time Java systems using non-blocking I/O and event-driven architecture to solve problems such as request enrichment, filtering, and general data collection, all of which are critical to our operations.
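For illustration only: the production services described above are Java/C++, but the following Python asyncio sketch shows the non-blocking, event-driven shape of the problem, enriching an incoming bid request from a lookup and answering within a strict deadline. The cache contents, field names, and deadline value are hypothetical.

```python
# Illustrative sketch of the non-blocking, event-driven pattern: enrich a bid
# request and respond within a deadline. All names and values are hypothetical.
import asyncio

ENRICHMENT_CACHE = {"device-123": {"segment": "sports"}}  # hypothetical lookup data
BID_DEADLINE_SECONDS = 0.025                              # roughly the ~25ms clearing budget

async def enrich(request: dict) -> dict:
    # Simulate a non-blocking lookup (in production this would be async I/O).
    await asyncio.sleep(0)
    extra = ENRICHMENT_CACHE.get(request.get("device_id"), {})
    return {**request, **extra}

async def handle_bid(request: dict) -> dict:
    try:
        enriched = await asyncio.wait_for(enrich(request), timeout=BID_DEADLINE_SECONDS)
        return {"status": "bid", "request": enriched}
    except asyncio.TimeoutError:
        # Never block the event loop waiting on a slow enrichment: drop the bid instead.
        return {"status": "no-bid", "request": request}

if __name__ == "__main__":
    print(asyncio.run(handle_bid({"device_id": "device-123", "bid_floor": 0.5})))
```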
The senior level represents expert professionals with significant experience who plan, design, organize, and execute large units of work in collaboration with stakeholders. These employees are subject matter experts in technologies and business practices who coach, mentor, and supervise less experienced staff members.
A bit about you
You will:
- Lead the engineering efforts across multiple software components
- Write excellent production code and tests and help others improve in code-reviews
- Analyze high-level requirements to design, document, estimate, and build systems
- Coordinate across teams to identify, resolve, mitigate, and prevent technical issues
- Coach and mentor engineers within the team to develop their skills and abilities
- Continuously improve the team's practices in code-quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes
You have:
- Education and professional experience:
- PhD or Master’s degree in Computer Science or equivalent, with 5+ years of experience in technology
- The following skills: C++ (including C++14), Unix, multithreading, and data structures
- Additional nice-to-have skills/certifications: Perl, shell scripting, Python, Awk, Sed, Protocol Buffers, Aerospike, Redis, Kinesis. Good hands-on experience with AWS.
You are:
- A team player who is organized, flexible and willing to adapt
- Not afraid of new technologies and driven to learn
- A detail-oriented person, who catches problems early and adjusts
- A strong communicator who can collaborate with multiple business and engineering stakeholders and work through conflicting needs
- A problem solver who likes to dive deep into a problem, diagnose root causes and work with multiple teams to come up with a solution
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Unlimited Paid Time Off
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- 401(k) employer match
- Health coverage including medical, dental, vision and option for HSA or FSA
- Generous parental leave
- Company-wide DEIB Committee
- Inclusion Academy Seminars
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Company-wide Volunteer Day
- Education reimbursement program
- Cell phone reimbursement
- Equity Analysis to ensure fair pay
- Assisting the development manager with all aspects of software design and coding.
- Attending and contributing to company development meetings.
- Learning the codebase and improving your coding skills.
- Writing and maintaining code.
- Working on minor bug fixes.
- Monitoring the technical performance of internal systems.
- Responding to requests from the development team.
- Gathering information from consumers about program functionality.
- Writing reports.
- Conducting development tests.
Required Skills
- Knowledge of basic coding languages including C++, HTML5, and JavaScript.
- Basic programming experience.
- Knowledge of databases and operating systems.
- Good working knowledge of email systems and Microsoft Office software.
- Ability to learn new software and technologies quickly.
- Ability to follow instructions and work in a team environment.
- Detail-oriented.
- You would be responsible for developing SQL databases, automating business reports, and developing business dashboards using BI tools.
- Development of high quality database solutions
- Create complex functions, scripts, stored procedures and triggers to support application development.
- Develop, implement and optimize stored procedures and functions using T-SQL
- Review and interpret ongoing business report requirements
- Build appropriate and useful reporting deliverables
- Analyze existing SQL queries for performance improvements
- Fix any issues related to database performance and provide corrective measures.
- MIS Automation
- Provide timely scheduled management reporting
- Create business dashboards using the Tableau BI tool
Requirement / Desired Skills:
- 5+ years of Business Analyst / SQL Developer experience in fintech, internet, consulting, or similar industries
- Excellent understanding of T-SQL programming
- Excellent understanding of Microsoft SQL Server
- Good knowledge of HTML and JavaScript
- SQL Server Reporting Services and SQL Server Analysis Services
- Knowledge of at least one BI tool (Tableau, Power BI, or QlikView)
This person MUST have:
- BE Computer Science or equivalent
- Cloud app development experience.
- Strong Troubleshooting and debugging skills
- A strong passion for writing simple, clean, and efficient code.
- 3 years of experience with the Django framework and other backend technologies.
- Knowledge of NodeJS
- Experience with building, modifying, and extending API endpoints (REST or GraphQL) for data retrieval and persistence (a minimal endpoint sketch follows this list).
- Understand how to use a database like Postgres (preferred choice), SQLite, MongoDB, MySQL.
- Experience creating high-performance applications.
- Experience with messaging and broker tools - Rabbitmq, MQTT
- Experience with SQL and NoSQL databases
- Experience with the full software development life cycle, including requirements collection, design, implementation, testing, and operational support.
- Knowledge of web services
- Proficient understanding of code versioning tools such as Git.
- Hands-on experience deploying and managing infrastructure with CloudFormation/Terraform
- Experience managing AWS infrastructure.
- Hands-on experience in Linux environment.
- Basic understanding of Kubernetes/Docker orchestration.
- Manages existing infrastructure, pipelines, and engineering tools (on-prem or AWS) for the engineering team (build servers, Jenkins nodes, etc.)
- Experience with scrum or other agile software development methodology.
- Excellent verbal and written communication, teamwork, decision-making, and influencing skills.
- Handle customer calls/emails regarding technical issues for end-users.
- Strong communication skills
- Attention to detail.
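Purely as a hedged illustration of the endpoint work listed above, here is a minimal Django view exposing a JSON resource. The Postgres-backed model is replaced by an in-memory dict, and the URL, field, and function names are hypothetical.

```python
# Minimal Django JSON endpoint sketch. Names and data are hypothetical.
from django.http import JsonResponse
from django.urls import path
from django.views.decorators.http import require_GET

# Hypothetical in-memory data standing in for a Postgres-backed model/queryset.
DEVICES = {1: {"id": 1, "name": "sensor-a", "online": True}}

@require_GET
def device_detail(request, device_id: int):
    device = DEVICES.get(device_id)
    if device is None:
        return JsonResponse({"error": "not found"}, status=404)
    return JsonResponse(device)

urlpatterns = [
    path("api/devices/<int:device_id>/", device_detail, name="device-detail"),
]
```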
Experience:
- Minimum 3 years of experience
Location:
- Ahmedabad Office Or,
- Work from home
Timings:
- 40 hours a week with a rotational shift every month.
A network of the world's best developers: full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offerings and capitalize on the cloud. We have our own IoT/AI platform, and we provide professional services on that platform to build custom clouds for our clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
You will work on the AI/IoT cloud and be responsible for its architecture, ensuring aspects such as fault tolerance and scalability. You will lead troubleshooting of the cloud. We need someone who can engage in conversations about detailed technical architecture with the internal team and external customers. You will be responsible for creating architecture documents, explaining designs, troubleshooting, reviewing work by others on the team, giving good guidance, and ensuring a functional cloud. We are looking for a can-do attitude and the ability to deliver with high velocity and high quality at the same time.
We use MQTT, RabbitMQ, Cassandra, MongoDB, Cloudflare, and backend components to build APIs. You will work on our IoT/AI cloud platform, building, architecting, and coding, as well as leading other technical engineers on the team.
Your primary programming language should be Python, ReactJS, or NodeJS, with another of these as a secondary language.
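As a hedged illustration of the messaging stack mentioned above, here is a minimal sketch that publishes a telemetry message to RabbitMQ using pika (a common Python RabbitMQ client; the posting does not name a specific library). Queue name, host, and payload fields are hypothetical; a real service would also declare exchanges, handle retries, and consume on the other side.

```python
# Minimal pika sketch: publish a device telemetry message to RabbitMQ.
# Queue name, host, and payload fields are hypothetical placeholders.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="device.telemetry", durable=True)   # hypothetical queue

payload = {"device_id": "sensor-42", "temperature_c": 21.7}     # hypothetical message
channel.basic_publish(
    exchange="",
    routing_key="device.telemetry",
    body=json.dumps(payload),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```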
This person MUST have:
- BE Computer Science, MCA or equivalent
- Cloud app development experience
- Strong Troubleshooting/Debugging experience
- Expert in RabbitMQ internals
- Experience with SQL and NoSQL databases
- Strong communication skills
- Experience with software builds and DevOps
- Strong programming skills
- Experience with Cassandra
This person’s RESPONSIBILITIES will be to:
- Understand the full architecture
- Understand every workflow and its logs
- Fix bugs hands-on
- Troubleshoot issues in the cloud
- Fix and deploy
- Inspect database contents
Experience:
- Minimum 5 years of experience
- No more than 15 years of experience
- Startup experience is a must.
Location
- Remotely, anywhere in India
Timings:
- 40 hours a week but with 4 hours a day overlapping with client timezone. Typically clients are in California PST Timezone.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.
Primary Responsibilities
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
- Optimizations and execution of the CI/CD pipelines of multiple products and timely promotion of the releases to production environments
- Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
- Strategize, implement and verify secure coding techniques, integrate code security tools for Continuous Integration
- Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Technical documentation through all stages of development
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
Requirements
- Minimum of 6 years of experience with DevOps tools.
- Working experience with Linux, container orchestration and management technologies (Docker, Kubernetes, EKS, ECS …).
- Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
- Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
- Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
- DevOps mindset and experience with Agile/Scrum methodology
- Basic knowledge of storage and databases (SQL and NoSQL)
- Good understanding of networking technologies, HAProxy, firewalling and security.
- Experience in Security vulnerability scans and remediation
- Experience in API security and credentials management
- Worked on Microservice configurations across dev/test/prod environments
- Ability to quickly pick up new languages and technologies
- A strong team player attitude with excellent communication skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Ability to plan projects, execute them and meet the deadline
- Excellent verbal and written English communication.
Job description
The role encompasses administration of MongoDB databases and is responsible for ensuring the performance, high availability, and security of clusters across MongoDB instances.
- The candidate will be responsible for ensuring that database management policies, processes and procedures are followed, adhering to ITIL good practice principles and are subjected to continuous improvement as per PCI standards.
- He / She will be responsible for reviewing system design changes to ensure they adhere to expected service standards and recommend changes to ensure maximum stability, availability and efficiency of the supported applications.
- The candidate should understand the application functionality, business logic and work with application stakeholders to understand the requirement and discuss the new application features and propose the right solutions.
What you'll do
- Install, deploy and manage MongoDB on physical and virtual machines
- Create, configure and monitor large-scale, secure, MongoDB sharded clusters
- Support MongoDB in a high availability, multi-datacenter environment
- Administer MongoDB Ops Manager monitoring, backups and automation
- Configure and monitor numerous MongoDB instances and replica sets
- Automate routine tasks with your own scripts and open-source tools (a minimal health-check sketch follows this list)
- Improve database backups and test recoverability regularly
- Study the database needs of our applications and optimize them using MongoDB
- Maintain database performance and capacity planning
- Write documentation and collaborate with technical peers online
- All database administration tasks like backup, restore, SQL optimizations, provisioning infrastructure, setting up graphing, monitoring and alerting tools, replication
- Performance tuning for high throughput
- Architecting high availability servers
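As referenced in the "Automate routine tasks" item above, here is a minimal, illustrative pymongo health-check script one might schedule against a replica set. The connection string, replica set name, and output format are hypothetical.

```python
# Minimal pymongo sketch: report replica set member health and connection load.
# The connection URI is a hypothetical placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # hypothetical URI

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # Report each member's role and replication state (PRIMARY, SECONDARY, ...).
    print(f'{member["name"]:30} {member["stateStr"]:10} health={member["health"]}')

server = client.admin.command("serverStatus")
print("current connections:", server["connections"]["current"])
```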
What qualifications will you need to be successful?
Skills and Qualifications
- Minimum of 1 year of experience in MongoDB technologies, with a total of 3 years in database administration.
- Install, deploy, and manage MongoDB on physical servers, virtual machines, and AWS EC2 instances
- Should have experience with MongoDB active-active sharded cluster setup with high availability
- Should have experience administering MongoDB on the Linux platform
- Experience with MongoDB version upgrades, preferably from version 4.0 to 4.4, in a production environment with zero or minimal application downtime, using either Ops Manager or custom scripts
- Experience building database monitoring using tools like AppD, ELK, Grafana, etc.
- Experience in database performance tuning, including both script tuning and hardware configuration, as well as capacity planning
- Good understanding of and experience with MongoDB sharding and disaster recovery planning
- Design and implement the backup strategy and BCP process across the MongoDB environments, maintaining a uniform backup strategy across the platform
- Define database monitoring, monitoring thresholds, and alerts, validate notifications, and maintain documentation for future reference
- Tune database performance based on application requirements and maintain a stable environment; analyse existing MongoDB queries as part of the performance improvement program
- Work with the engineering team to understand database requirements, guide them on best practices, and optimize queries for better performance
- Work with application stakeholders to understand production requirements and propose effective database solutions
- Review and understand ongoing business reports and create new ad hoc reports based on requirements
As a Data Engineer, you have to understand the organisation's data sets and how to bring them together. You will work with the sales engineering team to support custom solutions offered to clients, filling the gap between development, sales engineering, and data ops, and creating, maintaining, and documenting scripts to support ongoing custom solutions.
Job Responsibilities:
- Collaborating across an agile team to continuously design, iterate, and develop big data systems.
- Extracting, transforming, and loading data into internal databases.
- Optimizing our new and existing data pipelines for speed and reliability.
- Deploying new products and product improvements
- Documenting and managing multiple repositories of code.
Mandatory Requirements:
- Experience with Pandas to process data and Jupyter notebooks to keep it all together (a minimal sketch follows this list).
- Familiar with pulling and pushing files from SFTP and AWS S3. Familiarity with AWS Athena and Redshift is mandatory.
- Familiarity with SQL programming to query and transform data from relational Databases.
- Familiarity with AWS Cloud and Linux (and a Linux work environment) is mandatory.
- Excellent written and verbal communication skills.
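As a hedged illustration of the S3-plus-Pandas workflow referenced above, the sketch below pulls a CSV export from S3, aggregates it, and pushes the result back for downstream loading (for example into Redshift). It assumes boto3 and Pandas; the bucket, key, and column names are hypothetical.

```python
# Minimal boto3 + Pandas sketch: pull a CSV from S3, aggregate, write back.
# Bucket, key, and column names are hypothetical placeholders.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-raw-bucket", Key="exports/orders.csv")  # hypothetical

orders = pd.read_csv(io.BytesIO(obj["Body"].read()))
daily = (
    orders
    .assign(order_date=pd.to_datetime(orders["order_ts"]).dt.date)  # hypothetical column
    .groupby("order_date", as_index=False)["amount"].sum()
)

# Push the aggregated file back to S3 for downstream loading (e.g. into Redshift).
csv_bytes = daily.to_csv(index=False).encode("utf-8")
s3.put_object(Bucket="example-curated-bucket", Key="daily/orders.csv", Body=csv_bytes)
```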
Desired Requirements:
- Excellent organizational skills, including attention to precise details.
- Strong multitasking skills and the ability to work in a fast-paced environment
- Know your way around REST APIs (able to integrate; not necessary to publish)
Qualities:
- Python
- SQL
- API
- AWS
- GCP
- OCI
- Azure
- Redshift
Eligibility Criteria:
- 5 years of experience in database systems
- 3 years of experience developing scripts with Python
What’s for the Candidate:
12 LPA
Job Location(s)
Hyderabad / Remote
Looking for an AWS Architect for our client's branch in Noida
Experience: 8+ years of relevant experience
Location: Noida
Notice period : Immediate to 30 days joiners
Must have: AWS, Spark, Python, AWS Glue, AWS S3, AWS Redshift, SageMaker, AppFlow, Data Lake Formation, AWS Security, Lambda, EMR, SQL
Good to have:
CloudFormation, VPC, Java, Snowflake, Talend, RDS, Databricks
Responsibilities:
- Mentor the Data Architect and team, and architect and implement ETL and data movement solutions such as ELT/ETL data pipelines.
- Apply industry best practices, manage security and compliance, adopt appropriate design patterns, etc.
- Manage, test, monitor, and validate data warehouse activity, including data extraction, transformation, movement, loading, cleansing, and updating processes.
- Code, test, and implement datasets, and define and maintain data engineering standards.
- Architect, Design and implement database solutions related to Big Data, Real-time/ Data Ingestion, Data Warehouse, SQL/NoSQL
- Architect, Design and implement automated workflows and routines using workflow scheduling tools
- Identify opportunities for automation to optimize performance.
- Conduct appropriate functional and performance testing to identify bottlenecks and data quality issues.
- Be able to implement slowly changing dimensions as well as transaction, accumulating snapshot, and periodic snapshot fact tables (a minimal slowly-changing-dimension sketch follows this list).
- Collaborate with business users, EDW team members, and other developers throughout the organization to help everyone understand issues that affect the data warehouse.
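Because the responsibilities above call out slowly changing dimensions, here is a minimal, illustrative PySpark sketch of a Type 2 SCD update. The table layout, keys, and values are hypothetical (string-typed dates for simplicity); a production job would merge into the warehouse (for example Redshift) rather than build toy DataFrames.

```python
# Minimal Type 2 slowly changing dimension sketch in PySpark.
# Table layout, keys, and values are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Hypothetical dimension (one open row per customer) and today's source snapshot.
dim = spark.createDataFrame(
    [(1, "Alice", "Pune", "2023-01-01", None, True),
     (2, "Bob", "Delhi", "2023-01-01", None, True)],
    "customer_id INT, name STRING, city STRING, valid_from STRING, valid_to STRING, is_current BOOLEAN",
)
src = spark.createDataFrame(
    [(1, "Alice", "Bangalore")], "customer_id INT, name STRING, city STRING"
)

today = F.date_format(F.current_date(), "yyyy-MM-dd")

# Rows whose tracked attribute changed in the source.
changed = (
    dim.filter("is_current").alias("d")
    .join(src.alias("s"), F.col("d.customer_id") == F.col("s.customer_id"))
    .filter(F.col("d.city") != F.col("s.city"))
)

# Close the old versions and open the new ones; untouched rows pass through.
closed = changed.select("d.*").withColumn("valid_to", today).withColumn("is_current", F.lit(False))
opened = changed.select("s.*").select(
    "customer_id", "name", "city",
    today.alias("valid_from"),
    F.lit(None).cast("string").alias("valid_to"),
    F.lit(True).alias("is_current"),
)
untouched = dim.join(closed.select("customer_id", "valid_from"), ["customer_id", "valid_from"], "left_anti")

untouched.unionByName(closed).unionByName(opened).orderBy("customer_id", "valid_from").show()
```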
Company Profile:
Easebuzz is a payment solutions (fintech) company that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including payment infrastructure, to solve complete business problems. It is a wonderful place where all the action related to payments, lending, subscriptions, and eKYC is happening at the same time.
We have been consistently profitable and are constantly developing innovative new products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a $4M fundraise in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz’s corporate culture is tied to the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.
Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.
Salary: As per company standards.
Designation: Data Engineering
Location: Pune
Experience with ETL, Data Modeling, and Data Architecture
Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with 3rd-party tools:
- Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.
Experience with AWS cloud data lake for development of real-time or near real-time use cases
Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing (a minimal producer sketch follows below)
Build data pipeline frameworks to automate high-volume and real-time data delivery
Create prototypes and proof-of-concepts for iterative development.
Experience with NoSQL databases, such as DynamoDB, MongoDB etc
Create and maintain optimal data pipeline architecture.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Evangelize a very high standard of quality, reliability and performance for data models and algorithms that can be streamlined into the engineering and sciences workflow
Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
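As a hedged illustration of the Kinesis-based real-time ingestion referenced above, here is a minimal boto3 producer sketch. The stream name, region, and payload fields are hypothetical placeholders.

```python
# Minimal boto3 sketch: push one event onto a Kinesis stream.
# Stream name, region, and payload fields are hypothetical.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")  # hypothetical region

event = {"order_id": "ord-1001", "amount": 249.0, "status": "captured"}  # hypothetical payload
kinesis.put_record(
    StreamName="payments-events",            # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["order_id"],          # keeps records for one order in order
)
```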
Employment Type
Full-time