Senior PostgreSQL Developer
Hello,
Greetings for the day!
Tridat Technologies is hiring a "Senior PostgreSQL Developer" for a tech organization specializing in the banking domain @ Navi Mumbai!
JOB QUALIFICATIONS & PROFESSIONAL SKILLS:
• Bachelor's degree
What You Will Do:
- Ensure top quality for our PostgreSQL offerings
- Take an active role in the development lifecycle and enforce appropriate quality gates at the required stages.
- Analyze and continuously improve the test automation frameworks and infrastructure
- Monitor the coverage and efficacy of the automated test suites and take proactive actions to ensure satisfactory results
- Research and recommend new tools, strategies, or processes to improve testing and delivery capabilities.
- Prepare test plans for upcoming releases and ensure appropriate visibility for delivery status and test results.
Your Experience:
- Knowledge and experience working with SQL Databases
- Working with multiple automation tools and frameworks
- Solid debugging skills (analyze logs, spot patterns, etc.)
- Hands-on experience with Ansible, Jenkins, Python, and GitHub
- Familiarity with the Linux ecosystem
What Will Make You Stand Out:
- Exposure to database-related technologies
- Experience with compiling/building/packaging
- Worked with container or virtualization environments
- Experience working with PostgreSQL
- Working experience in the open-source world or contributions to open-source projects
- Experience with Molecule
- Great attitude and capability to easily communicate and work well with people from distributed teams.
EXPERIENCE: 7+ years
Key role interactions
• Will be expected to work from the office
Location: Rabale, Navi Mumbai
Working Days: Monday to Friday
Employment Mode: Contract to hire (Full time opportunity)
Joining Period: Immediate to max 15 days
Thank You & Regards,
Shraddha Kamble
HR Recruiter
About: A leading IT organization in the banking domain
· Perl development
· Python
· Strong knowledge of MS SQL Server (or any database such as Sybase/Oracle/MySQL)
· Good understanding of RHEL Linux
· Good understanding of and hands-on experience with HTML and web technologies
Secondary Skills (Good to have)
DevOps, Ansible, Git/S
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, PySpark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data.
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality aspects.
- Support CI/CD processes.
- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (Pandas/NumPy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficient in Linux and scripting languages (Bash Scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
NP – Immediate to 60 Days
Work location – Cisco Manesar Office
Experience – 2 to 10 Years.
- Strong coding experience in a programming language such as Python, Java, or C
- Experience with YANG data modelling
- Experience with REST/SOAP APIs
- Experience with frameworks such as Requests and Beautiful Soup
Role : Senior Customer Scientist
Experience : 6-8 Years
Location : Chennai (Hybrid)
Who are we?
A young, fast-growing AI and big data company with an ambitious vision to simplify the world’s choices. Our clients are top-tier enterprises in the banking, e-commerce, and travel spaces. They use our core AI-based choice engine, maya.ai, to deliver personal digital experiences centered around taste. The maya.ai platform now touches over 125M customers globally. You’ll find Crayon Boxes in Chennai and Singapore. But you’ll find Crayons in every corner of the world. Especially where our client projects are – UAE, India, SE Asia and pretty soon the US.
Life in the Crayon Box is a little chaotic, largely dynamic and keeps us on our toes! Crayons are a diverse and passionate bunch. Challenges excite us. Our mission drives us. And good food, caffeine (for the most part) and youthful energy fuel us. Over the last year alone, Crayon has seen a growth rate of 3x, and we believe this is just the start.
We’re looking for young and young-at-heart professionals with a relentless drive to help Crayon double its growth. Leaders, doers, innovators, dreamers, implementers and eccentric visionaries, we have a place for you all.
Can you say “Yes, I have!” to the below?
- Experience with exploratory analysis, statistical analysis, and model development
- Knowledge of advanced analytics techniques, including Predictive Modelling (Logistic regression), segmentation, forecasting, data mining, and optimizations
- Knowledge of software packages such as SAS, R, and RapidMiner for analytical modelling and data management.
- Strong experience in SQL/Python/R, working efficiently at scale with large data sets
- Experience using Business Intelligence tools such as Power BI, Tableau, and Metabase for business applications
Can you say “Yes, I will!” to the below?
- Drive clarity and solve ambiguous, challenging business problems using data-driven approaches. Propose and own data analysis (including modelling, coding, analytics) to drive business insight and facilitate decisions.
- Develop creative solutions and build prototypes to business problems using algorithms based on machine learning, statistics, and optimisation, and work with engineering to deploy those algorithms and create impact in production.
- Perform time-series analyses, hypothesis testing, and causal analyses to statistically assess the relative impact and extract trends
- Coordinate individual teams to fulfil client requirements and manage deliverables
- Communicate and present complex concepts to business audiences
- Travel to client locations when necessary
Crayon is an equal opportunity employer. Employment is based on a person's merit, qualifications, and professional competence. Crayon does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, marital status, pregnancy, or related conditions.
More about Crayon: https://www.crayondata.com/
More about maya.ai: https://maya.ai/
- 3 to 4 years of professional experience as a DevOps / System Engineer
- Command line experience with Linux including writing bash scripts
- Programming in Python, Java or similar
- Fluent in Python and Python testing best practices
- Extensive experience working within AWS and with its managed products (EC2, ECS, ECR, R53, SES, ElastiCache, RDS, VPCs, etc.)
- Strong experience with containers (Docker, Compose, ECS)
- Version control system experience (e.g. Git)
- Networking fundamentals
- Ability to learn and apply new technologies through self-learning
Responsibilities
- As part of a team implement DevOps infrastructure projects
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage our continuous integration and delivery pipeline to maximize efficiency
- Implement industry best practices for system hardening and configuration management
- Secure, scale, and manage Linux virtual environments
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring
As a Data Engineer, you need to understand the organisation's data sets and how to bring them together. You will work with the sales engineering team to support custom solutions offered to clients, filling the gap between development, sales engineering, and data ops by creating, maintaining, and documenting scripts that support ongoing custom solutions.
Job Responsibilities:
- Collaborating across an agile team to continuously design, iterate, and develop big data systems.
- Extracting, transforming, and loading data into internal databases.
- Optimizing our new and existing data pipelines for speed and reliability.
- Deploying new products and product improvements
- Documenting and managing multiple repositories of code.
Mandatory Requirements:
- Experience with Pandas to process data and Jupyter notebooks to keep it all together.
- Familiarity with pulling and pushing files via SFTP and AWS S3. Familiarity with AWS Athena and Redshift is mandatory.
- Familiarity with SQL programming to query and transform data in relational databases.
- Familiarity with AWS Cloud and Linux (and a Linux work environment) is mandatory.
- Excellent written and verbal communication skills.
Desired Requirements:
- Excellent organizational skills, including attention to precise details.
- Strong multitasking skills and ability to work in a fast-paced environment
- Know your way around REST APIs (able to integrate; publishing not necessary)
Qualities:
- Python
- SQL
- API
- AWS
- GCP
- OCI
- Azure
- Redshift
Eligibility Criteria:
- 5 years' experience in database systems
- 3 years' experience with Python to develop scripts
What’s for the Candidate:
12 LPA
Job Location(s)
Hyderabad / Remote
We are looking for coders, people who love to code, just like we do!
You should have a minimum of 10 years of work experience designing and developing high-end software products.
You should have a minimum of 3 years of experience building AWS cloud-native services using EC2, S3, ECS, SQS, API Gateway, Lambda, Elastic, MongoDB, SNS, etc.
You should be highly proficient in Microservices, APIs, Python, React, and JSON.
- Should have experience in application architecture and design
- Proficiency with developing APIs in Python
- Proficiency with frontend web development (JavaScript, CSS, HTML)
- Should have 2-4 years' experience leading a team
- Experience with object-oriented programming (OOP) concepts in Python
- Experienced with the full software development life cycle, programming, database design, and agile methodologies
- Experience using design patterns such as MVC and frameworks such as Django and Flask
- Ability to successfully multi-task and prioritize work.
- Experience with developing applications with Angular (2+)
- Experience with Docker and Kubernetes
- Experience with SQL databases, especially PostgreSQL
- Good Communication Skills
- Behavioural:
- Good Leader
- Adaptable / Flexible
- Candidate will be part of the Product Engineering global delivery team, working as a Technical Lead leading a team of highly talented engineers.
- Candidate would be required (but not limited) to:
- Team management, mentoring, prioritization of work
- Work with Architecture group for defining Product Roadmap
- Requirement gathering with stakeholders for product enhancement
- Designing, Developing, and documenting the solution
- Providing support for any application issues, such as performance and availability
- Contributing to actual development of user stories and features etc.
- Ensuring conformity to corporate security and compliance objectives.
- Identifying and implementing service improvement opportunities.
- Responsible for informing the ‘business impact’ of security within the team
- Promptly report security weaknesses or incidents to the Practice Managers/Leads
• Strong knowledge of data structure and algorithms
• Ability to write complex SQL
• Familiarity with Test driven development and Continuous Integration
• Strong knowledge of and hands-on experience with code development tools (Eclipse, Git, Jenkins, unit testing frameworks)
• Familiar with software development methodologies such as Agile
• Knowledge of JavaScript and AngularJS would be a plus
• Knowledge of Java and Python
• Knowledge of Unix and shell scripting would be a plus
• Strong leadership skills
• Desire to learn and develop new tools and techniques and share with the team
• Capability to mentor junior team members
• Active involvement is required in all phases of Software Development
• Ability to establish trusted partnership with executive level stakeholders
Your mission is to help lead the team towards creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise in various technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.
Responsibilities and Duties :
- As a Data Engineer, you will be responsible for the development of data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially Spark & Hive, is highly preferred.
- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions
Education level :
- Bachelor's degree in Computer Science or equivalent
Experience :
- Minimum 3+ years of relevant experience on production-grade projects, with hands-on, end-to-end software development
- Expertise in application, data, and infrastructure architecture disciplines
- Expertise in designing data integrations using ETL and other data integration patterns
- Advanced knowledge of architecture, design, and business processes
Proficiency in :
- Modern programming languages like Java, Python, and Scala
- Big Data technologies: Hadoop, Spark, Hive, Kafka
- Writing well-optimized SQL queries
- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (optional)
- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions
- Knowledge of system development life-cycle methodologies, such as Waterfall and Agile.
- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.
- Experience generating physical data models and the associated DDL from logical data models.
- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.
- Experience enforcing data modeling standards and procedures.
- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.
- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals
Skills :
Must Know :
- Core big-data concepts
- Spark - PySpark/Scala
- Data integration tools like Pentaho, NiFi, SSIS, etc. (at least one)
- Handling of various file formats
- Cloud platform - AWS/Azure/GCP
- Orchestration tool - Airflow