Job Title : Software Engineer (.NET, Azure)
Location : Remote
Employment Type : [Full-time/Contract]
Experience Level : 3+ Years
Job Summary :
We are looking for a skilled Software Engineer (.NET, Azure) to develop high-quality, secure, and scalable software solutions. You will collaborate with product owners, security specialists, and engineers to deliver robust applications.
Responsibilities :
- Develop and maintain server-side software using .NET (C#), SQL, and Azure.
- Build and secure RESTful APIs.
- Deploy and manage applications on Azure.
- Ensure version control using Azure DevOps/Git.
- Write clean, maintainable, and scalable code.
- Debug and optimize application performance.
Qualifications :
- 3+ years of server-side development experience.
- Strong proficiency in .NET, SQL, and Azure.
- Experience with RESTful APIs and DevOps/Git.
- Ability to work collaboratively and independently.
- Familiarity with Scrum methodologies.
• Required Qualifications
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
4-10 years of proven experience in Data Engineering.
At least 4 years of experience on AWS Cloud.
Strong understanding of data warehousing principles and data modeling.
Expert in SQL, including advanced query optimization techniques; able to build queries and data visualizations to support business use cases and analytics.
Proven experience in the AWS environment, including access governance, infrastructure changes, and implementation of CI/CD processes to support automated development and deployment.
Proven experience with software tools including PySpark, Python, Power BI, and QuickSight, and core AWS services such as Lambda, RDS, CloudWatch, CloudTrail, SNS, SQS, etc.
Experience building services/APIs in the AWS Cloud environment.
Experience with data ingestion and curation, as well as implementation of data pipelines (see the sketch after these qualifications).
• Preferred Qualifications
Experience in Informatica/ETL technology will be a plus.
Experience with AI/ML Ops – model build through implementation lifecycle in AWS Cloud environment.
Hands-on experience with Snowflake would be good to have.
Experience in DevOps and microservices would be preferred.
Experience in the financial industry is a plus.
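For illustration, here is a minimal PySpark sketch of the ingestion-and-curation pattern these qualifications describe. It is a sketch under stated assumptions, not a prescribed implementation: the bucket, path, and column names are hypothetical placeholders, and it assumes a Spark environment with S3 credentials already configured.

```python
# Minimal sketch of an ingestion-and-curation pipeline.
# Bucket, path, and column names are hypothetical placeholders;
# assumes a Spark environment with S3 access configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curation-sketch").getOrCreate()

# Ingest raw CSV files landed in S3.
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Curate: drop duplicates, enforce types, filter incomplete rows.
curated = (
    raw.dropDuplicates()
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
)

# Publish as Parquet partitioned by date for downstream analytics/BI.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"
)
```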

The Sr. AWS/Azure/GCP Databricks Data Engineer at Koantek will use comprehensive modern data engineering techniques and methods with Advanced Analytics to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You will collect, aggregate, and analyze structured and unstructured data from multiple internal and external sources and communicate patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.
Requirements:
- Strong experience as an AWS/Azure/GCP Data Engineer; AWS/Azure/GCP Databricks experience is a must.
- Expert proficiency in Spark, Scala, and Python.
- Must have data migration experience from on-prem to cloud.
- Hands-on experience with Kinesis to process and analyze stream data, and with Event/IoT Hubs and Cosmos DB.
- In-depth understanding of AWS/Azure/GCP cloud, data lake, and analytics solutions.
- Expert-level, hands-on experience designing and developing applications on Databricks.
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services.
- In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib.
- Hands-on experience with the industry technology stack for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake.
- Good working knowledge of code versioning tools such as Git, Bitbucket, or SVN.
- Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs (see the sketch after this list).
- Experience preparing data for data science and machine learning, with exposure to model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
- Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.).
- Good to have: programming experience with .NET or Spark/Scala.
- Experience creating tables, partitioning, bucketing, and loading and aggregating data using Spark Scala, Spark SQL, or PySpark (also covered in the sketch after this list).
- Knowledge of AWS/Azure/GCP DevOps processes such as CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence.
- Working experience with Visual Studio, PowerShell scripting, and ARM templates; able to build ingestion to ADLS and enable a BI layer for analytics.
- Strong understanding of data modeling and of defining conceptual, logical, and physical data models.
- Big data/analytics/information analysis/database management in the cloud.
- IoT/event-driven/microservices in the cloud, with experience in private and public cloud architectures, their pros and cons, and migration considerations.
- Ability to stay up to date with industry standards and technological advancements that will enhance data quality and reliability to advance strategic initiatives.
- Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways.
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications.
- Guide customers on data engineering best practices; provide proofs of concept, architect solutions, and collaborate as needed.
- 2+ years of hands-on experience designing and implementing multi-tenant solutions using AWS/Azure/GCP Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions.
- Overall 5+ years' experience in software development, data engineering, or data analytics using Python, PySpark, Scala, Spark, Java, or equivalent technologies, with hands-on expertise in Apache Spark (Scala or Python).
- 3+ years of experience in query tuning, performance tuning, troubleshooting, and debugging Spark and other big data solutions.
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience.
- Ability to manage competing priorities in a fast-paced environment.
- Ability to resolve issues.
- Basic experience with or knowledge of Agile methodologies.
- AWS Certified Solutions Architect - Professional
- Databricks Certified Associate Developer for Apache Spark
- Microsoft Certified: Azure Data Engineer Associate
- Google Cloud Certified - Professional
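To make the Spark SQL items above concrete (reading a common data source, aggregating with SQL, persisting a partitioned table), here is a minimal sketch. The paths, view, table, and column names are hypothetical, and it assumes a Spark environment with a metastore available.

```python
# Minimal Spark SQL sketch: read JSON, aggregate via SQL, write a
# partitioned table. All names and paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

# Read a JSON data source and expose it to Spark SQL as a temp view.
events = spark.read.json("/data/raw/events.json")
events.createOrReplaceTempView("events")

# Aggregate with Spark SQL.
daily = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS n
    FROM events
    GROUP BY event_date, event_type
""")

# Persist as a table partitioned by date for downstream queries.
daily.write.mode("overwrite").partitionBy("event_date").saveAsTable("daily_events")
```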
- Lead design initiatives end to end to help us build a delightful & intuitive user experience for our online world.
- Work closely with the founders, PMs, engineers, learning designers & content creators to conceptualize & realize our product vision.
- Get your hands dirty via prototypes, mock-ups, and wireframes.
- Continuously solicit feedback from users & ingrain that in our design process.
- Help create our organization’s design language & drive adoption across teams.
- Grow our team of designers & evangelize our mission to the design community over time!
Must Haves:
- At least 4 years of experience in digital product design and/or service design.
- Ability to clearly communicate concepts and designs through sketches, wireframes, high-fidelity comps, and prototypes.
- Ability to work well cross-functionally. This role will work closely with product, brand, and experiences teams, and it's crucial to effectively foster good relationships.
- End-to-end journey thinking: it's essential to step out of the details to understand the holistic journey of our users.
- First-principles thinker, since we'll be charting new waters in the field of education globally.
- Exceptional attention to detail and an obsession with constantly improving design quality.
- Cross-platform design experience (iOS, Android, Web, H5, etc.).
Awesome to have:
- 2 years of experience leading a team of UI/UX designers.
- Experience working on social or consumer-facing products.
- Zero-to-one experience building products from concept to launch.
- Ability to holistically combine online and offline experiences.
- Diverse graphic capabilities (basic illustration & motion skills).
About Company
Our client is in the business of delivering cutting-edge software solutions, hardware systems, and IT services. It lends the right technological edge to governments and businesses, enabling them to achieve their organizational objectives efficiently and effectively and to scale their businesses to new heights.
Minimum Requirements/Qualifications:
• Bachelor’s/Master’s degree in Computer Science, Computer Engineering, or a related field is preferred.
• Must have 5 to 7 years of experience in application development (Java, Spring Framework) and deployment (Apache Tomcat & Nginx).
• Sound knowledge of Object-Oriented Programming (OOP) patterns and concepts.
• Knowledge of and hands-on experience with Java, Spring Framework, Spring Security, JSP, Apache Tomcat, and Nginx is a must.
• Must have experience with Bootstrap CSS, jQuery, etc.
• Basic understanding of PostgreSQL, the MVC (Model-View-Controller) pattern, JDBC (Java Database Connectivity), and RESTful web services.
• Relevant knowledge of Java GUI frameworks according to project requirements.
• Experience in handling external and embedded databases.
Roles & Responsibilities
• Develop new modules, patches, and updates/upgrades for an existing application (developed in-house by the client).
• Bug fixing and updating of software.
• Analyze user requirements to define business objectives.
• Maintain Java-based applications that can be high-volume and low-latency.
• Identify and resolve any technical issues that arise.
• Write well-designed, testable code.
• Conducting software analysis, programming, testing, and debugging.
• Support continuous improvement, investigating alternatives and technologies, and presenting for architectural review.
- 4+ years distributed service engineering experience in a software development environment
- Experience driving feature design reviews, documentation, UX reviews, and working with Product Managers through the entire launch process
- Strong development experience in Java, C++, C#, or similar OO languages
- Strong knowledge of data structures, algorithms, operating systems, and distributed systems fundamentals
- Working familiarity with networking protocols (TCP/IP, HTTP) and standard network architectures (see the sketch after this list)
- Good understanding of databases, NoSQL systems, storage and distributed persistence technologies
- Experience building multi-tenant, virtualized infrastructure is a strong plus
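As a small illustration of the protocol familiarity called for above, the following sketch issues a bare HTTP/1.1 request over a raw TCP socket, showing how HTTP layers on top of TCP/IP; the host and path are arbitrary examples.

```python
# Illustrative only: a bare HTTP/1.1 GET over a TCP socket.
# Host and path are arbitrary examples.
import socket

HOST, PORT = "example.com", 80

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read until the server closes the connection (Connection: close).
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks).decode("utf-8", errors="replace")[:200])
```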
Datametica is looking for talented BigQuery engineers
Total Experience - 2+ yrs.
Notice Period – 0 - 30 days
Work Location – Pune, Hyderabad
Job Description:
- Sound understanding of Google Cloud Platform; should have worked on BigQuery, Workflow, or Composer (a brief BigQuery sketch follows this list).
- Experience with migration to GCP and integration projects in large-scale environments, including ETL technical design, development, and support.
- Good SQL and Unix scripting skills; programming experience with Python, Java, or Spark would be desirable.
- Experience with SOA and services-based data solutions would be advantageous.
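For context, a minimal BigQuery query from Python typically looks like the sketch below. It assumes the google-cloud-bigquery client library and GCP credentials configured in the environment; the project, dataset, and table names are hypothetical.

```python
# Minimal BigQuery query sketch; assumes `pip install google-cloud-bigquery`
# and GCP credentials in the environment. Table name is hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project/credentials from the environment

query = """
    SELECT event_type, COUNT(*) AS n
    FROM `my_project.my_dataset.events`
    GROUP BY event_type
    ORDER BY n DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row["event_type"], row["n"])
```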
About the Company:
www.datametica.com
Datametica is among the world's leading Cloud and Big Data analytics companies.
Datametica was founded in 2013 and has grown at an accelerated pace within a short span of 8 years. We provide a broad and capable set of services that encompass a vision of success, driven by innovation and value addition, helping organizations make strategic decisions that influence business growth.
Datametica is the global leader in migrating Legacy Data Warehouses to the Cloud. Datametica moves Data Warehouses to the Cloud faster, at a lower cost, and with fewer errors, even running in parallel with full data validation for months.
Datametica's specialized team of Data Scientists has implemented award-winning analytical models for use cases involving both unstructured and structured data.
Datametica has earned the highest level of partnership with Google, AWS, and Microsoft, which enables Datametica to deliver successful projects for clients across industry verticals at a global level, with teams deployed in the USA, EU, and APAC.
Recognition:
We are gratified to be recognized as a Top 10 Big Data Global Company by CIO Story.
If this excites you, please apply.
Personality fit
We are looking to hire a backend developer who wakes up every day with the zeal to learn something new - a mind that is always curious. Beyond that, they should have solid experience with algorithms and understand their technology at the atomic level.
Preliminary requirements for the role
- Should understand the architecture of Node.js and must have at least 1 year of solid experience writing robust code in it.
- Should be well versed in basic algorithms over core data structures (array, linked list, stack, queue) - yes, we do ask questions about the time and space complexity of operations on these.
- Thorough understanding of MySQL, such that creating all types of joins (inner, outer, left, and right) on multiple tables, views, and triggers is a piece of cake for you (see the join sketch after this list).
- Further, you should have a decent understanding of database design so that you can make quick decisions about normalization, inclusion/exclusion of foreign keys, etc.
- Should have a decent understanding of a code repository tool such as GitHub.
- Should be well versed in the basics of AWS (an understanding of cloud architecture would be a plus).
- Should be aware of commonly used token and encryption techniques such as JWTs, API encryption, token management, etc.
- Exposure to e-commerce applications as well as a startup environment would be an add-on.
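As a self-contained illustration of the join types mentioned above, here is a small sketch using SQLite from Python's standard library; the same INNER/LEFT JOIN syntax applies in MySQL (which additionally offers RIGHT JOIN), and the tables are made up.

```python
# Self-contained join illustration using SQLite from the standard library;
# the same INNER/LEFT JOIN syntax applies in MySQL. Tables are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0);
""")

# INNER JOIN: only users that have at least one order.
print(conn.execute("""
    SELECT u.name, o.total
    FROM users u
    INNER JOIN orders o ON o.user_id = u.id
""").fetchall())

# LEFT JOIN: all users; order columns are NULL where no match exists.
print(conn.execute("""
    SELECT u.name, o.total
    FROM users u
    LEFT JOIN orders o ON o.user_id = u.id
""").fetchall())
```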

Solinas Integrity (www.solinas.in) is a leading water & sanitation robotics start-up founded by IIT Madras alumni & professors to develop cutting-edge solutions to the problems in water pipelines and sewer lines/septic tanks, thereby improving the lives of millions of people. Our core values start with trust and respect for everyone, along with strong collaboration and communication. We believe in giving agency to our teammates and strongly pushing them towards developing a growth mindset.
Duties and Responsibilities:
- Develop and improve signal processing algorithms for the analysis of acoustic signals, with up-to-date knowledge of processing methods.
- Understand key acoustic algorithm functions, develop efficient code, verify performance and functionality.
- Exposure to all phases of software development life cycle (concept, design, implementation, test, and production).
- Propose innovations to improve performance, quality, etc.
- Work with peers to develop excellent, structured code, well-optimized and easily maintainable.
Basic Qualifications:
● Experience programming in either Python, C++, or MATLAB
● MS/PhD degree in Electrical/Electronics Engineering or Signal Processing
● At least 1 year of experience in signal processing or a related area
● Good analytical and problem-solving skills
● Good knowledge of signal processing techniques, basic knowledge of ML algorithms, and good visualisation skills (see the sketch below).
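For a flavor of the acoustic-analysis work described above, here is a minimal sketch (assuming NumPy and SciPy are available) that synthesizes a tone buried in noise and estimates its spectrum; the sample rate and tone frequency are arbitrary choices.

```python
# Minimal acoustic-analysis sketch using NumPy/SciPy: synthesize a noisy
# tone, then estimate its power spectrum with Welch's method.
import numpy as np
from scipy import signal

fs = 16_000                      # sample rate (Hz), arbitrary choice
t = np.arange(0, 1.0, 1 / fs)    # 1 second of samples

# Synthetic "acoustic" signal: 440 Hz tone buried in white noise.
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.random.randn(t.size)

# Welch power spectral density estimate.
freqs, psd = signal.welch(x, fs=fs, nperseg=2048)

# The spectral peak should sit near the 440 Hz tone.
print(f"peak at {freqs[np.argmax(psd)]:.1f} Hz")
```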

